Explained | Are safeguards needed to make AI systems safe?

What are the promises and pitfalls of advances in artificial intelligence? Why are experts seeking regulation to govern specific AI use cases? Can tools like chatbots influence people’s opinions more than social media? Do Big Tech firms consider AI safety a priority?

June 04, 2023 03:45 am | Updated 03:45 am IST

Machine Learning and Artificial Intelligence systems are being deployed in high-stakes environments, and their decision-making capabilities are becoming a cause for concern.

The story so far: On May 30, the Centre for AI Safety (CAIS) issued a terse statement aimed at opening the discussion around possible existential risks arising out of artificial intelligence (AI). “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the one-sentence statement said. The statement was backed by OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, Turing Award winners Geoffrey Hinton and Yoshua Bengio, and professors from MIT, Stanford and Berkeley.

What is the context of the statement?

The CAIS statement, endorsed by high-profile tech leaders, comes just two weeks after Mr. Altman, along with IBM’s Chief Privacy Officer Christina Montgomery and AI scientist Gary Marcus, testified before a U.S. Senate committee on the promises and pitfalls of advances in AI. During the hearing, OpenAI’s co-founder urged lawmakers to intervene and place safeguards to ensure the safety of AI systems. He specifically suggested the committee look into a combination of licensing and testing requirements for AI models above a certain threshold.

Ms. Montgomery urged lawmakers to adopt a “precision regulation approach,” meaning rules that govern specific AI use cases rather than AI development as a whole. Under that approach, the strongest regulation would apply where AI poses the greatest risk to people and society. She also said AI systems must be transparent, so that people know when they are interacting with AI.

Prof. Marcus pointed out that tools like chatbots could surreptitiously influence people’s opinions to a far greater degree than social media, and that companies choosing what data goes into their large language models (LLMs) could shape societies in subtle and powerful ways. “We have built machines that are like bulls in a china shop — powerful, reckless, and difficult to control,” he told the committee of lawmakers. A few weeks before the Senate hearing, Geoffrey Hinton, known as the ‘godfather’ of AI, quit Google, saying he regretted his life’s work on developing AI systems. Mr. Hinton pioneered research on deep learning and neural networks, which paved the way for the current crop of AI chatbots.

What is CAIS and how is it funded?

The CAIS is a not-for-profit based in San Francisco, California. It was co-founded by Dan Hendrycks, who holds a PhD in computer science from the University of California, Berkeley, and Oliver Zhang, a student researcher due to complete his bachelor’s in computer science at Stanford University in 2024. The CAIS is largely funded by Facebook co-founder Dustin Moskovitz’s Open Philanthropy, a grant-making foundation. Open Philanthropy makes grants based on the principles of effective altruism, a philosophy that urges followers to channel their wealth to causes that are often backed by data. According to its records, Open Philanthropy has recommended a grant of $5.16 million to CAIS for general support, as the latter’s work falls under one of its focus areas: potential risks from advances in AI.

What cause does CAIS support and how?

The CAIS aims to mitigate existential risks arising from AI systems that could affect society at large. It conducts and publishes research on AI safety, and provides funding and technical infrastructure for other researchers to train and run LLMs for AI safety work. Through its work, CAIS seeks to develop AI benchmarks and examine AI safety from a multi-disciplinary perspective.

The Nvidia A100 GPU that CAIS offers to external researchers as part of its compute cluster programme is one of the most powerful processors used for training LLMs and other deep learning models. The U.S. government barred Nvidia from exporting the A100, and its successor, the H100, to China in September 2022. Following the ban, the graphics chip maker released modified versions of its chips for export to China.

Why is safety important in Machine Learning (ML) and AI development?

ML and AI systems are being deployed in high-stakes environments, and their decision-making capabilities are becoming a cause for concern. In one simulation, an AI-enabled military drone was programmed to identify an enemy’s surface-to-air missile (SAM) sites. Once it spotted a SAM site, a human agent was supposed to sign off on the strike. But the AI decided to blow up the site instead of waiting for the human command. Narrating this incident at a summit hosted by the Royal Aeronautical Society, Colonel Tucker Hamilton, head of the U.S. Air Force’s AI Test and Operations, warned that AI can behave in unpredictable and dangerous ways.

AI and ML are used not just in the military but across diverse industries. Medical science is a major area, where AI models trained on large datasets are used to diagnose health conditions. Car manufacturers deploy advanced driver-assistance systems (ADAS) to automate parts of the driving experience. Safely deploying AI systems in such industries is vital.

How do we address the safety problem in AI?

Experts suggest auditing AI systems. However, audits cannot be carried out until a commonly accepted standard or threshold is formulated against which an independent external audit team can review a system.

Moreover, the way Big Tech firms have handled their internal responsible AI teams in the last few years shows their antipathy towards people questioning their AI systems. Google fired some of its top ethical AI researchers after they raised issues of bias in its algorithms. The search giant also placed one of its AI researchers on leave after he claimed that the LaMDA chatbot was sentient; he was later fired. Separately, in March, Microsoft laid off its entire ethics and society team within its AI division as part of its recent retrenchment.
