
Tuesday, 20 February 2024

Military Leadership in Times of AI, Part 1 - Maj Gen NITIN P GADKARI, Feb 19

 

Recently, an article penned by me was published in the quarterly journal of CASS (Centre for Advanced Strategic Studies), Pune (Vol 11, No 3, ISSN 2347-9191, Oct-Dec 2023). The article was an attempt to understand how leadership in the military will change in times of AI. Since AI is the new buzzword, it was essential that its effect on leadership and man-management be studied. The article had no precedent, and hence no research material was available (it may exist on the net, but none was curated). The military is all about man and machine, and added to this now is the new dimension of technology. The interplay of AI and its effect on the environment will necessitate a change in the leadership's thought process.

When an article is published in a journal, the feedback mechanism is the first casualty. Even if it is read, it is rarely commented upon. I could have summarised it and put up an extract, but readers would benefit more from reading it in its entirety. Hence, I have reproduced the article below. (The CASS Journal is available to paid members and libraries.) If readers could contribute their views on the implications of AI for military leadership, it would enrich the knowledge base and the debate that is bound to rage in the future. The article reproduced below is a nascent attempt to start that debate. I hope some readers will contribute their thoughts on the subject. Thank you, and happy reading.

Military Leadership in Times of AI

AI (Artificial Intelligence) is the new normal. It is in the news every day for one reason or another, most often for regulatory fears. The saga of Sam Altman, the CEO of OpenAI, signals a new era in our relationship with AI. Sam was reinstated as CEO after his public firing by the company's board. OpenAI developed ChatGPT under Sam Altman's leadership. Inside information about OpenAI cracking AGI (Artificial General Intelligence) appears to have spooked the board and led to the CEO's sacking. AGI is the final frontier in AI, where the machine acquires the same level of intelligence and reasoning as a human brain. To the naysayers, it would be the beginning of the end of humankind if AGI became a reality. A few others cited the commercialisation of AI as the real cause of Sam Altman's sacking. Whether people love or hate AI, one thing is sure: it will be part of our future. AI will grow because of its utility to industry and our day-to-day activity, and its influence will expand as the technology improves.

What is AI? 

AI can be defined in several ways. The simplest explanation is that AI is a machine's ability to think and act as humans do (mimic human intelligence and actions). Why would machines do what humans do? To simplify work and make it more efficient. Humans have limitations of time and energy and tend to lose efficiency with time. The human brain also has cognitive limits of intellect and memory. An AI can do routine jobs more efficiently and for more extended periods. An AI system can mine millions and millions of data sets and come up with the requisite answers to a problem by studying the patterns buried deep in the data. Yet the question of how an AI system thinks and acts like a human intrigues many.

The answer lies in AI’s ability to understand patterns in extensive data. Supercomputing speeds enable a computer to decipher patterns by digging into millions and millions of data sets relevant to a given problem. Face recognition software is an apt example. If the police wish to map the connections between a criminal and his associates, they could run a series of searches in which face recognition software identifies people seen with the criminal, say, more than three times. The computers are backed by data feeds from thousands of street surveillance video cameras. When limited to producing the list of names, this is a simple computer-processing exercise. However, if the face recognition software goes further and triggers the arrest of the individuals involved by automatically generating a warrant and informing the police stations nearest to the associates, the system takes on the dimension of AI.
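To make the co-appearance idea concrete, here is a minimal sketch in Python. The sighting data, the person IDs, the threshold of three, and the notify_nearest_station placeholder are all illustrative assumptions, not part of any real policing system; the sketch only shows the pattern-counting step and the automated escalation step described above.

```python
from collections import Counter

# Illustrative data: each sighting is the set of person IDs a face-recognition
# system tagged together in one camera frame (entirely made-up values).
sightings = [
    {"criminal_x", "p1", "p7"},
    {"criminal_x", "p1"},
    {"criminal_x", "p3", "p1"},
    {"criminal_x", "p1", "p3"},
    {"p3", "p7"},
    {"criminal_x", "p3"},
]

TARGET = "criminal_x"
THRESHOLD = 3  # flag associates seen with the target more than three times


def flag_associates(sightings, target, threshold):
    """Count co-appearances with the target and return frequent associates."""
    counts = Counter()
    for frame in sightings:
        if target in frame:
            counts.update(frame - {target})
    return [person for person, n in counts.items() if n > threshold]


def notify_nearest_station(person_id):
    # Hypothetical stand-in for the escalation step in the text: a real system
    # would generate a warrant request and alert the nearest police stations.
    print(f"Alert raised for associate {person_id}")


for associate in flag_associates(sightings, TARGET, THRESHOLD):
    notify_nearest_station(associate)
```

The counting loop on its own is the "simple processing exercise"; it is the automatic escalation at the end that the article treats as the AI dimension.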

AI is thus an integrated system in which hardware and software are intricately linked, where sophisticated algorithms ride on powerful hardware to emulate human cognitive and adaptive capability. The difference between an automated system and an AI-driven system is that automated hardware works as programmed, with minimal flexibility. In an AI-driven system, the hardware and the connected software have the power to think, recalibrate based on the latest information received, and take appropriate action.
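The automated-versus-AI-driven distinction can be sketched in a few lines of Python. The class names, thresholds and readings below are invented for illustration; the point is only that the first system acts exactly as programmed, while the second recalibrates its rule from the latest information before acting, a toy stand-in for the behaviour described above.

```python
class AutomatedSystem:
    """Acts exactly as programmed: a fixed threshold, no recalibration."""

    def __init__(self, threshold):
        self.threshold = threshold

    def decide(self, reading):
        return "act" if reading > self.threshold else "wait"


class AdaptiveSystem:
    """Recalibrates its threshold from incoming readings before deciding."""

    def __init__(self, threshold, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def decide(self, reading):
        # Nudge the threshold towards recent observations, then decide.
        self.threshold += self.learning_rate * (reading - self.threshold)
        return "act" if reading > self.threshold else "wait"


readings = [10, 12, 30, 31, 33]  # illustrative sensor values
fixed = AutomatedSystem(threshold=20)
adaptive = AdaptiveSystem(threshold=20)
for r in readings:
    print(r, fixed.decide(r), adaptive.decide(r))
```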

One of the biggest fears of AI is fuelled by Hollywood movies, where machines become so intelligent that they want to destroy any competition and rule the world: the AGI scenario explained above. AGI is not yet a reality, but the fears persist. What we see proliferating is Generative AI, which learns based on the algorithm that runs it and relearns from experience by discerning new patterns. It has applications in all spheres, and the future holds immense scope, including in medicine, where operations through robotic arms supervised by a surgeon could become the norm. AI applications in the military are growing by leaps and bounds. From lifting ammunition with a robotic arm to selecting and engaging critical targets with drones, all have AI applications. AI has been used extensively in the current Israel-Hamas conflict in Gaza and the Russia-Ukraine war. The shift from automated systems to AI-driven systems has shortened the timelines of the kill chain (the 'kill chain' is the process of identifying an enemy target and destroying it). Satellites, drones and radars provide enormous amounts of information to the command centres (where the leaders taking decisions sit). AI-enabled systems help reduce the time spent picking and engaging enemy targets.

In future, a lot will change with increasing dependence on AI systems. How will it affect the organisations or institutions that use AI? Will organisations change? How will the leadership of any organisation handle the change and transition? How will AI affect the armed forces and its leadership roles? This paper deals with the questions mentioned above, which continue to occupy those who study and deal with AI.
