AI Like ChatGPT Won’t Face Regulation in India

India staked out its position in the global artificial intelligence race on Tuesday, as a key government ministry said the country has no plans to regulate the technology.

The bold proclamation comes just one week after more than 500 AI experts signed an open letter urging AI labs to pause development of new GPT-4-style large language models amid increased scrutiny of the tech by lawmakers in the US and Europe.

In a statement on Tuesday, India’s Ministry of Electronics and Information Technology acknowledged numerous ethical concerns around bias and transparency that could arise with AI’s rapid expansion but explicitly said the Indian government “is not considering bringing a law or regulating the growth of artificial intelligence in the country.” The ministry instead referred to AI as a “kinetic enabler of the digital economy,” which it believes will strengthen entrepreneurship and business and play an important strategic role for the country moving forward.

The ministry did say that officials were working to standardize responsible AI guidelines to steer AI’s development and promote healthy growth in the industry but notably shied away from expressing the same level of alarm raised by a growing number of policymakers in the US and Europe.

AI fears reach new heights

The sudden explosion of generative AI tools into the mainstream thanks to OpenAI’s ChatGPT and Google’s Bard models has led powerful voices to call on AI makers to hit the brakes. Last week, hundreds of AI experts including Elon Musk and Apple cofounder Steve Wozniak signed an open letter calling on AI labs to pause development of powerful new large language models. The letter says systems like GPT-4 could pose “profound risks to society and humanity” if allowed to advance without sufficient safeguards. If companies refuse to agree to a voluntary pause, the letter’s signatories urged lawmakers to move forward with a forced moratorium.

“Let’s be clear: the risk they are referring to here is the loss of human control over the world and our own future, much as gorillas have lost control over their own future because of humans,” Stuart Russell, a professor of computer science at the University of California, Berkeley, and a signatory of the letter, told Gizmodo in an interview.

Top AI researchers, however, are divided on the scale of AI’s threat. Many of the open letter’s signatories are genuinely concerned AI systems could outsmart humans and pose a fundamental existential risk on par with nuclear weapons and climate change. Other experts agree AI could make misinformation and bias much worse, but scoffed at the idea of attributing human-level intelligence to what are essentially glorified chatbots. Either way, both camps generally agree stricter regulations are desperately needed.

President Joe Biden even weighed in on the issue this week in remarks weighing the potential benefits and pitfalls associated with the tech. Biden said AI could help address tough issues like climate change and disease discovery but cautioned it also poses “potential risks to our society, to our economy, to our national security.” The president went on to say tech companies have a “responsibility” to ensure their AI systems are safe before releasing them to the public. When asked if he believed AI was dangerous, Biden responded, “It remains to be seen. It could be.”

Forget “What about China?” “What about India?” could be next.

For years, technologists, friendly policymakers, and other AI advocates have tried to convince lawmakers to opt for a light touch when approaching AI regulation. When confronted with concerns over AI’s potential to spread misinformation or amplify deep-rooted biases, many have turned to a simple but enduring pitch: if the US doesn’t push forward, China will. Proponents of this “What about China?” argument, which notably includes former Google CEO Eric Schmidt, argue the US is engaged in an AI arms race with China whether it wants to or not.

Schmidt, who co-headed the National Security Commission on AI during the Trump administration, believes the US must do “whatever it takes” to win against China. Failure, according to Schmidt, could result in trillions of dollars’ worth of lost revenue flowing to the US rival and a rejiggering of geopolitics in which China could use its AI dominance to swing US allies into its orbit. Other proponents of the argument are less subtle.

“If the democratic side is not in the lead on the technology, and authoritarians get ahead, we put the whole of democracy and human rights at risk,” former U.S. ambassador to the U.N. Human Rights Council Eileen Donahoe said in a recent interview with NBC News.

Those same fears usually aren’t extended to India, even though the country’s recent declaration actually appears to make it more committed to a hands-off AI approach than China. In recent years, China’s top regulators have cracked down on some of its largest tech companies and their billionaire founders and even moved to pass European GDPR-esque data privacy protections. That increasingly cautious regulatory environment has led to a relative slowdown in AI advancement in China compared to the US. Last year, according to Stanford’s recently released AI Index, US companies reportedly invested $47.4 billion in AI projects. That’s 3.5 times more than China’s total. The chart below similarly shows a dip in Chinese AI investment in 2021 right as US investment soared.

Screenshot: “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA

India’s new all-in attitude towards AI may partly stem from the fact that it’s noticeably late to the party. In terms of overall AI investment, India spent just $3.24 billion on AI last year, making it the fifth-largest spender on the world stage. That figure is just a fraction of the amount spent by the US and China. And it shows. So far, India has failed to generate anywhere near the same level of attention-grabbing startups or AI labs as its counterparts. The new, explicitly pro-AI approach could help change that, especially if ambitious new technologists feel hamstrung by a more cautious regulatory environment in the US and other countries.

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators and Everything We Know About OpenAI’s ChatGPT.
