3 ways ChatGPT may be used by the US in an Iran conflict

HIGHLIGHTS

ChatGPT can suggest targets and actions, speeding up decisions but raising concerns about over-reliance on AI.

AI can help soldiers understand threats quickly, though mistakes or wrong information could be dangerous.

AI also supports logistics and planning, quietly influencing military operations while increasing dependence on technology.

Over the past few weeks, the United States has moved closer to using advanced AI tools inside real military operations. This integration raises serious questions about accountability, speed, and human control. The recent deal between OpenAI and the Pentagon was widely criticised because it allows ChatGPT models to operate in sensitive environments, yet the limits of that use remain unclear. Officials insist humans will remain in charge, but the growing reliance on AI in target analysis and decision-making makes that promise hard to keep. Meanwhile, tensions with Iran persist, and AI is already playing a part in how data, surveillance, and defence are handled. This is a new world in which AI can influence life-and-death decisions on the battlefield. In this article, we break down some of the ways ChatGPT may be used by the US in an Iran conflict.

1. ChatGPT as a target advisor, not just an analyst

The US military has long used AI in its operations, though not to identify and lock targets; instead, it has relied on AI to process its large volumes of sensitive information. For years, systems have been used to scan drone footage, detect unusual patterns, or flag possible threats. What is changing now is the step that comes after analysis: with models like ChatGPT entering the picture, AI is no longer limited to highlighting information; it can now suggest actions.

In a potential Iran conflict scenario, this could look like a human analyst feeding a list of possible targets into an AI system. The model could review satellite images, past intelligence reports, supply routes, and even weather conditions. It could then rank which targets are most critical or most vulnerable. It might also suggest the timing of a strike or highlight logistical constraints.

This kind of assistance promises speed: decisions that once took hours could be made in minutes, and in a fast-moving conflict, that matters. But speed is not the only consideration. The real issue is how much influence the AI has over the final decision.

While US military officials emphasise that humans remain in control, the effectiveness of this oversight is highly questionable. If an AI system has already filtered the data, ranked the options, and provided a recommendation, the human operator may be less likely to think critically about the results, a tendency known as automation bias.

Furthermore, a recent study found that AI systems in simulated war scenarios recommended extreme actions, including the use of nuclear weapons, with alarming ease. These recommendations were not limited to a single AI system: across multiple scenarios and models, AI consistently escalated towards nuclear options instead of prioritising diplomatic solutions.

A recent MIT study has likewise shown that humans working with AI systems tend to develop a degree of trust in their outputs. When time is limited and the situation is critical, people tend to follow the AI's recommendations without fully analysing them. In a war scenario, this means the AI is not merely informing decisions; it is actually shaping them.

Moreover, who is to be held accountable for a failed missile strike made on an AI recommendation: the operator who acted on it, or the people who built the system?


2. AI in drone defence and battlefield awareness

Artificial intelligence systems such as ChatGPT could assist in defending against drones and in building awareness in a war zone. In current conflicts, including the standoff with Iran, drones have been used in numerous ways. Defence systems already use AI to study data from these drones, but adding conversational AI changes how soldiers interact with that information.

Rather than having to decipher complex screens, the soldier can simply ask, ‘What’s coming towards us?’ or ‘What should we do next?’ and receive clear and understandable responses.
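To illustrate what such a conversational layer might look like in principle, here is a minimal, hypothetical sketch built on OpenAI's public Python client. The model name, the radar-track data, and the prompt are all made-up assumptions for illustration; nothing here describes a real defence system.

```python
# Hypothetical sketch: a plain-language Q&A layer over structured sensor data.
# The sensor feed, field names, and model choice are illustrative assumptions;
# this uses OpenAI's public chat API, not any real defence integration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for a live sensor feed (purely made-up data).
radar_tracks = [
    {"id": "T1", "type": "small UAV", "bearing_deg": 42, "range_km": 3.1, "speed_kmh": 80},
    {"id": "T2", "type": "unknown", "bearing_deg": 310, "range_km": 12.5, "speed_kmh": 15},
]

def ask(question: str) -> str:
    """Turn structured track data plus a soldier's question into a plain answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarise sensor data in plain language for a non-specialist. "
                    f"Current sensor data: {radar_tracks}"
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What's coming towards us?"))
```

The point is the pattern rather than the specifics: the model only ever sees a text rendering of the data, and how it chooses to phrase that summary is exactly where the framing risks discussed below come in.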

This could prove highly beneficial: it saves time, makes information easier to understand, and gives soldiers more room to make informed decisions suited to the situation. While it may seem like a small addition, I believe it could save many lives and make a real difference.

However, it also introduces new risks, as the way information is framed can influence the decisions of the person receiving it. If an AI system describes a threat as urgent or high-risk, it may push the operator towards a faster or more aggressive response. If it misses context or misinterprets data, the consequences could be serious.

There is also the question of reliability: AI systems are not perfect, and even the companies that build them admit their models make mistakes. An AI might misinterpret an image, identify something incorrectly, or fail to grasp an unusual situation.

Mistakes can be corrected in a controlled environment, but in a real war there may be no time to verify the information, and a person who acts on incorrect information may not be able to undo the action later.

Moreover, the stakes are far higher in a real-life scenario than in a simulated environment: drone strikes cause actual casualties, and shortcomings in a detection system have real-world consequences. If conversational AI is built into the system, its strengths and weaknesses will be reflected directly in the outcomes.

There is also a broader concern about how much responsibility is placed on the system. Even if the AI is only assisting, the line between assistance and dependency can blur. As the system improves, humans may start depending on it more, reducing their own engagement with raw data. These are not vague worries: studies have shown that people rely on AI more and more as they work with it daily.


3. ChatGPT running the military’s back office, quietly influencing war

While the points above show how AI tools could serve on the front line, a more practical perspective in a US-Iran conflict is that AI like ChatGPT may never appear on the battlefield at all. Instead, it works in the background, handling administrative and logistical tasks.

The Pentagon has already started using AI tools like ChatGPT to draft documents, manage contracts, and support logistics. These tasks may seem far removed from combat, but they are essential to how military operations function. AI affects how supplies reach the front lines, shapes the equipment procured through contracts, and helps produce planning documents. Even indirectly, AI in these areas can have a significant effect on the outcome of a war.
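To make the back-office pattern concrete, here is a minimal, hypothetical sketch of the kind of drafting task described above, written against OpenAI's public Python client. The model name, the logistics notes, and the memo format are illustrative assumptions, not a description of the Pentagon's actual tooling.

```python
# Hypothetical sketch: automating a routine drafting task with an LLM.
# The prompt, inputs, and model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Made-up logistics notes a planner might paste in.
shipment_notes = (
    "Depot A: 40 pallets ready, truck capacity 12 pallets/day. "
    "Depot B: 15 pallets ready, truck capacity 20 pallets/day. "
    "Priority: medical supplies from Depot B first."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You draft concise logistics planning memos from rough notes."},
        {"role": "user",
         "content": f"Draft a one-paragraph shipment plan from these notes:\n{shipment_notes}"},
    ],
)
print(response.choices[0].message.content)
```

Each such memo is trivial on its own, which is precisely the point of this section: the influence accumulates across thousands of routine documents, and so does the unexamined trust in how they were produced.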

For example, ChatGPT can help optimise supply routes, speed up contracting, and improve how resources are distributed. Each improvement is small on its own, but together they can give a significant edge.

There is also a risk of over-reliance. If a plan is generated by AI and people use it without understanding how it was produced, they may no longer grasp the assumptions behind it. The MIT study on human-AI interaction applies here as well: as people get used to working with AI, they become less likely to question its results. This is less concerning in administrative roles than in combat, but it is still important to consider.

Additionally, integrating AI into daily tasks is driving a cultural shift within the military. As artificial intelligence is normalised at every level of its operations, it effectively becomes standard procedure, and once that happens, it is easier to push AI into more sensitive and complex functions in the future.


Questions still remain

Artificial intelligence is not new, yet many of us may wonder at the pace at which the US military is integrating it across its processes, from administration to targeting systems.

How does relying on a technology that hasn’t been fully tested affect performance, responsibility, and oversight? 

Beyond that, a bigger question arises: as AI takes on more tasks, what happens to the role of humans in decision-making?

Feel free to share your thoughts on these questions with me at bhaskar.sharma@timesgroup.com.

Bhaskar Sharma

Bhaskar is a senior copy editor at Digit India, where he simplifies complex tech topics across iOS, Android, macOS, Windows, and emerging consumer tech. His work has appeared in iGeeksBlog, GuidingTech, and other publications, and he previously served as an assistant editor at TechBloat and TechReloaded. A B.Tech graduate and full-time tech writer, he is known for clear, practical guides and explainers.
