Anthropic launches new tool to review AI-generated code: Check details
Anthropic launches Code Review tool in Claude Code to help check AI-generated software code.
AI agents review pull requests to find bugs, errors, and possible issues.
Available for enterprise and team users, with GitHub integration and usage-based pricing.
Anthropic has introduced a new AI feature in its Claude Code platform known as Code Review, designed to review software code before it enters a company’s codebase. The launch follows a growing challenge in software development: AI tools now generate large volumes of code faster than teams can manually check it, and many companies, along with Anthropic, say this increase in AI-generated code has created a review bottleneck. By automating parts of the review process, Anthropic hopes to help developers catch bugs and logical errors earlier. The company says the new system focuses on practical fixes and clear explanations so engineers can quickly understand what went wrong and how to correct it.
Why is Claude Code Review necessary?
When developers submit changes, they typically do so via pull requests. These pull requests need manual verification before they are merged into the codebase. According to Cat Wu, Anthropic’s head of product, Claude Code has significantly increased the number of pull requests teams need to review.
Software development has been reshaped by AI-driven code generation, often called vibe coding. Developers now describe what they want a program to do in plain language, and AI writes the code for them. This makes development faster, but it can also create problems: the code may contain more bugs, security risks, or parts that developers don’t fully understand.
How Claude Code Review works
According to the company, once a pull request is created, the new feature deploys multiple AI agents to evaluate the code. Each agent has a different role: some detect logical errors that could break functionality, while others look for edge cases where the software might fail under specific conditions. Finally, an aggregation layer compiles the findings, removes duplicates, and ranks issues by importance.
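Anthropic has not published the internals of this pipeline, but the aggregation step described above can be sketched roughly as follows. The `Finding` class, its fields, and the de-duplication key are illustrative assumptions, not Anthropic’s actual implementation:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """One issue reported by a review agent (fields are assumed for illustration)."""
    file: str
    line: int
    severity: int  # higher number = more urgent
    message: str


def aggregate(agent_findings: list[list[Finding]]) -> list[Finding]:
    """Merge findings from all agents, drop duplicates, rank by severity."""
    seen: set[tuple[str, int, str]] = set()
    merged: list[Finding] = []
    for findings in agent_findings:
        for f in findings:
            key = (f.file, f.line, f.message)
            if key not in seen:  # skip identical reports from different agents
                seen.add(key)
                merged.append(f)
    # Most severe issues first, so urgent problems surface at the top
    return sorted(merged, key=lambda f: f.severity, reverse=True)
```

In this sketch, two agents reporting the same issue on the same line collapse into a single entry, and the sorted output mirrors the ranked, de-duplicated list the article describes.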
The AI tool also leaves comments directly on the relevant lines of code, explaining what the problem is, why it matters, and how the development team can resolve it. Problems are colour-coded by severity, making it easy for the team to see what needs urgent attention. Unlike style-checker tools, the company says, Claude Code Review focuses on logical and functional issues.
Results and availability
Anthropic says that an average code review takes roughly 20 minutes, though larger or more complex pull requests can require more time. Internal testing by the company also found that the tool added useful review comments on pull requests and helped catch bugs before the code was merged.
The Code Review feature is currently available to users running the enterprise and team subscriptions and integrates directly with GitHub repositories. Anthropic charges for the service based on token usage, with each review typically costing between $15 and $25.
Bhaskar Sharma
Bhaskar is a senior copy editor at Digit India, where he simplifies complex tech topics across iOS, Android, macOS, Windows, and emerging consumer tech. His work has appeared in iGeeksBlog, GuidingTech, and other publications, and he previously served as an assistant editor at TechBloat and TechReloaded. A B.Tech graduate and full-time tech writer, he is known for clear, practical guides and explainers.