Future of AI: Open or bust?

When we look back at 2023 from the future, it will no doubt be regarded as one of the biggest inflection points for AI, the year the proverbial genie of many wonders well and truly escaped the bottle. It was the year AI finally went mainstream, thanks to ChatGPT and a thousand other LLM-inspired apps and services. At last, AI wasn’t something only computer scientists, uber-nerds or us lowly technology hacks were obsessed with; finally, everyone got in on the act.

From doom-and-gloom predictions of how AI would render swathes of the human workforce across various industries jobless in the blink of an eye, to exaggerated existential threats straight out of a science-fiction novel, it’s safe to say that our collective attitude towards this big technological breakthrough has been nothing short of hyperbolic. Writers like yours truly, and others around the world, may have to take some of the blame for this, and with good reason. But then, how do you not get excited and alarmed by this rapid infusion of AI into our lives, society, and civilisation?

With nothing to lose, startups like OpenAI are bringing AI innovations to market faster than the traditional big tech giants: Amazon, Apple, Google and Microsoft don’t have the same luxury, of course, being answerable to legal eagles, customers and shareholders. The race to cash in on the AI boom can’t come at the cost of breaking the law, which is why big tech CEOs have been harping on the AI safety and regulation tune for some time now. When you hear everyone from Sundar Pichai to Satya Nadella urging lawmakers to exercise their regulatory powers and enforce thoughtful AI restraint, it can be inferred that big tech companies are seeking governmental guidance to ensure their AI product roadmaps eventually fall on the right side of the law.

In 2023, governments around the world took significant steps towards regulating AI safety, most visibly at the Global AI Safety Summit in the UK. Its key outcome was a declaration endorsed by 28 countries (including the US and China) to better understand and regulate the risks associated with “frontier AI”, the advanced, general-purpose AI models at the cutting edge of the technology, which raise issues ranging from AI’s environmental impact to algorithmic bias and the misuse of generative AI. A common strategy seems to be emerging, based on international cooperation, where these risks will be jointly studied scientifically in order to develop risk-based policies for AI safety. But will there be enough emphasis on a balanced approach, one that mitigates risks without impeding innovation?

Of course, don’t get me wrong: safety should be paramount for any AI product released into the market, given its potential to disrupt not just the mundane but the critically essential as well. However, one can’t help but wonder whether regulation is a tool that big tech players can use to slow down the emergence of disruptive rivals, buying themselves more time to stack the deck in their favour. If regulation can be hijacked by big tech, then what could be the alternative?

Some of the best AI minds in the world, a vocal minority at this point, seem to think open source is the way to go. Over 70 signatories, including renowned figures like Meta’s Yann LeCun and Google Brain founder Andrew Ng, backed an open letter published by Mozilla in November 2023, which emphasises that openness and transparency in AI development are crucial for mitigating future harms.

The letter argues against the notion that tight, proprietary control over AI is the sole way to prevent societal harm; on the contrary, it holds that public access and scrutiny make the technology safer. It outlines three benefits of openness in AI: fostering independent research, enhancing public scrutiny and accountability, and lowering entry barriers for new players in the field. Hasty regulation that concentrates power and stifles competition and innovation should be avoided, it argues, because open models are vital for informed debate and effective policymaking.

Needless to say, the debate around the future of AI is intensifying: should it be open source or proprietary? This dichotomy has long existed in software development and will no doubt remain a key question for AI in 2024 and beyond.

This column was originally published in the December 2023 issue of Digit magazine. Subscribe now.

Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.
