As more code is generated by AI, code reviews are drawing more attention than ever. CodeRabbit, which provides AI-powered code reviews, is gaining strong momentum from this trend.
That said, there are not many people who can confidently say they always deliver proper code reviews. High-quality reviews require a clear understanding of requirements and the ability to point out issues that are not explicitly written in the code.
To address this, I built a simple game where you can experience code reviewing.
About the Game
You can access the game here:
Code Review Game - Choose Your Language
Currently, C, Flutter, JavaScript, and Python are available. The game supports both English and Japanese.
How to Play
Each language currently offers Levels 1 through 3. The review screen shows three pieces of information:
- Requirements
- Code
- Hints
Reviewers write feedback based on these inputs, following best practices for the selected language.
Clicking on “Requirements” inserts a Markdown heading. Clicking a line of code inserts that line number.
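Under the hood this is plain DOM manipulation. Here is a minimal sketch of how such click-to-insert behavior could be wired up, assuming the review is typed into a plain textarea; the function name and the exact inserted snippets are my illustration, not the project's actual code.

```typescript
// Minimal sketch of click-to-insert, assuming a plain <textarea> holds the
// review text; the game's actual implementation may differ.
function insertAtCursor(textarea: HTMLTextAreaElement, snippet: string): void {
  const { selectionStart, selectionEnd, value } = textarea;
  // Splice the snippet in at the current selection.
  textarea.value =
    value.slice(0, selectionStart) + snippet + value.slice(selectionEnd);
  // Place the caret right after the inserted text.
  const pos = selectionStart + snippet.length;
  textarea.setSelectionRange(pos, pos);
  textarea.focus();
}

const reviewBox = document.querySelector<HTMLTextAreaElement>("textarea")!;
insertAtCursor(reviewBox, "## Requirements\n"); // clicking “Requirements”
insertAtCursor(reviewBox, "L12: ");            // clicking code line 12 (format assumed)
```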
Evaluation
Evaluation criteria are predefined. The game sends those criteria together with your review text to the Gemini API and receives a JSON response containing the score, strengths, points for improvement, and more.
Scoring 70 or above unlocks the next level.
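I haven't reproduced the project's actual prompt here, but conceptually the call looks like the sketch below. The `EvaluationResult` shape, the prompt wording, and the `gemini-1.5-flash` model choice are my assumptions; the structured response comes from the Gemini API's `responseMimeType: "application/json"` option.

```typescript
// Minimal sketch of the evaluation call. Field names and the model choice
// are assumptions; the actual prompt, schema, and model live in the repo.
interface EvaluationResult {
  score: number;        // 0-100; 70 or above unlocks the next level
  strengths: string[];
  improvements: string[];
}

async function evaluateReview(
  apiKey: string,
  criteria: string,
  review: string,
): Promise<EvaluationResult> {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-1.5-flash:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{
        parts: [{
          text:
            "Score this code review against the criteria. Respond as JSON " +
            'with keys "score" (0-100), "strengths", "improvements".\n\n' +
            `Criteria:\n${criteria}\n\nReview:\n${review}`,
        }],
      }],
      // Ask Gemini for structured JSON instead of free text.
      generationConfig: { responseMimeType: "application/json" },
    }),
  });
  const data = (await res.json()) as any;
  // The JSON payload comes back as text inside the first candidate.
  return JSON.parse(data.candidates[0].content.parts[0].text);
}
```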
Sharing
You can share your review results on X (Twitter). The evaluation JSON is stored in Cloudflare KV, and the generated OGP image is stored in Cloudflare R2.
The app itself has no authentication, so the share URL is simply a UUID-based link. Anyone who knows the URL can access it — which is fine, given that there is no personal data and the design assumes posting results publicly on X.
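Conceptually, the share flow maps onto Cloudflare's Workers bindings like the following minimal sketch. The binding names (`SHARE_KV`, `OGP_BUCKET`), key shapes, and URL path are hypothetical; the types come from `@cloudflare/workers-types`, and `EvaluationResult` is the shape from the earlier sketch.

```typescript
// Minimal sketch of the share flow. Binding names, key shapes, and the
// URL path are assumptions for illustration, not the project's code.
interface Env {
  SHARE_KV: KVNamespace; // evaluation JSON, keyed by share ID
  OGP_BUCKET: R2Bucket;  // generated OGP images
}

async function saveShare(
  env: Env,
  evaluation: EvaluationResult,
  ogpPng: ArrayBuffer,
): Promise<string> {
  // No auth by design: the random UUID itself gates access to the link.
  const id = crypto.randomUUID();
  await env.SHARE_KV.put(id, JSON.stringify(evaluation));
  await env.OGP_BUCKET.put(`ogp/${id}.png`, ogpPng, {
    httpMetadata: { contentType: "image/png" },
  });
  return `/share/${id}`; // path of the shareable URL
}
```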
Architecture
This project relies heavily on Cloudflare.
- Framework: Remix
- Runtime: Cloudflare Workers
- Data storage: Cloudflare KV
- OGP image storage: Cloudflare R2
For AI workflows, Claude Code on the web handles the coding, while the Gemini API handles review evaluation.
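To show how these pieces connect, here is a minimal sketch of a Remix loader reading a shared result on Workers. It assumes the Vite-based Remix Cloudflare template, where bindings are exposed through `context.cloudflare.env`, and reuses the hypothetical `SHARE_KV` binding from the sharing sketch; the actual route code may look different.

```typescript
// Minimal sketch of a share route's loader on Cloudflare Workers, assuming
// the Vite-based Remix Cloudflare template and the Env type sketched above.
import { json } from "@remix-run/cloudflare";
import type { LoaderFunctionArgs } from "@remix-run/cloudflare";

export async function loader({ params, context }: LoaderFunctionArgs) {
  // Cast here for brevity; a real app would augment AppLoadContext instead.
  const env = (context as any).cloudflare.env as Env;
  // "json" tells KV to parse the stored value before returning it.
  const result = await env.SHARE_KV.get(params.id!, "json");
  if (!result) throw new Response("Not Found", { status: 404 });
  return json(result);
}
```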
Development Notes
Claude Code on the web offered a coupon this time, so I tried it out. The coupon balance dropped from $1,000 to $936, meaning the project effectively cost around $64.
Pull request reviews were performed using CodeRabbit.
There was only one instance where development got stuck and I had to discard a branch. Otherwise, I iterated through PR creation → documentation → review → development → review, and the total cost stayed within the $60–64 range.
Experience Using Claude Code on the Web
It works similarly to the GitHub Actions version, but running everything in the cloud is extremely convenient. Once you give instructions, coding continues even if you close your laptop. Claude progresses all the way to the point just before PR creation, so once things reach a stable state, you create the PR, fetch it locally, and verify it.
Ideally, I’d like it to run E2E tests with something like Playwright before opening the PR, but it seems the cloud environment cannot run those tests. As a result, E2E tests must be done locally, which can be stressful when things don’t run correctly (and CodeRabbit ends up reviewing a broken PR).
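For reference, a local smoke test of this kind is only a few lines with Playwright; the dev-server URL and the asserted text below are assumptions rather than the project's actual suite.

```typescript
// Minimal sketch of a local E2E smoke test with Playwright.
import { test, expect } from "@playwright/test";

test("language selection page renders", async ({ page }) => {
  await page.goto("http://localhost:8788/"); // assumed local dev port
  await expect(page.getByText("Choose Your Language")).toBeVisible();
});
```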
As long as development is spec-driven, this workflow works reasonably well—especially for small projects like this one.
Challenges
This game was initially built quickly for visitors at the FlutterKaigi booth, but its current design is difficult to play on mobile (due to free-text entry and other constraints). You need to sit down at a PC browser to play properly, which is a fundamental issue.
Conclusion
Still, being able to build a fully playable game in less than two days, one that even incorporates generative AI, was a great outcome. It also serves as practice for code reviewers (or so I think).
Give it a try!
The source code is available here:
https://github.com/goofmint/ReviewGame