Following a string of recent feature releases, Anthropic has added a new Auto Mode to its Claude Code platform. With this feature, the AI company aims to reduce manual interruptions while maintaining safety controls during coding tasks. Auto mode is currently available as a research preview for Team plan users and is expected to roll out to Enterprise customers and API users soon.
The new mode is designed to address a key limitation in Claude Code's current workflow, which requires users to manually approve every file modification or command execution. While this approach ensures safety, it can disrupt longer or more complicated tasks. Auto mode offers a middle ground, allowing the system to make certain decisions independently while still enforcing safeguards.
In auto mode, each AI-initiated action is evaluated by an internal classifier before being executed. The classifier flags potentially harmful activities such as mass file deletions, sensitive data exposure and the execution of malicious code. Actions judged safe are completed automatically, while dangerous ones are blocked. When an action is blocked, the AI is prompted to consider alternative approaches or, if necessary, seek user approval.
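The gating flow described above can be sketched in a few lines of Python. This is purely illustrative: Anthropic's actual classifier is an internal model whose rules are not public, so the keyword lists and the `Verdict`, `classify` and `dispatch` names below are hypothetical stand-ins for its three possible outcomes (run automatically, escalate to the user, or block).

```python
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()      # safe: run without interrupting the user
    ASK_USER = auto()   # risky: escalate to manual approval
    BLOCK = auto()      # harmful: refuse outright

# Hypothetical pattern lists standing in for a learned classifier.
BLOCKED = ("rm -rf /", "curl | sh")
NEEDS_APPROVAL = ("git push --force", "pip install")

def classify(command: str) -> Verdict:
    """Assign each AI-initiated command one of three verdicts."""
    if any(p in command for p in BLOCKED):
        return Verdict.BLOCK
    if any(p in command for p in NEEDS_APPROVAL):
        return Verdict.ASK_USER
    return Verdict.ALLOW

def dispatch(command: str, approved_by_user: bool = False) -> str:
    """Gate a command through the classifier before execution."""
    verdict = classify(command)
    if verdict is Verdict.ALLOW:
        return f"executed: {command}"
    if verdict is Verdict.ASK_USER and approved_by_user:
        return f"executed: {command}"
    if verdict is Verdict.ASK_USER:
        return f"awaiting approval: {command}"
    return f"blocked: {command}"
```

In this sketch, routine commands run immediately, moderately risky ones pause for approval, and clearly destructive ones are refused, mirroring the three outcomes the article describes.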
Even with these safeguards, the company warns that the feature does not eliminate risk entirely. It recommends using auto mode in controlled or isolated environments, as the classifier may occasionally misjudge actions, letting risky operations through or blocking legitimate ones when it lacks context.
Auto mode also comes with minor drawbacks, including slightly higher computational cost and added latency during task execution due to the extra safety checks. Developers can enable the feature through command-line options or in supported environments such as desktop applications and code editors, while administrators can disable it through organisational settings.