Open Letter to Pause Giant AI Experiments
An open letter argues that the current race dynamic in AI is dangerous and calls for the creation of independent regulators to ensure future systems are safe to deploy.
Hi everybody 🙋🏻‍♂️,
Welcome to Rise & Shine ☀ – Sunday Edition. Every Sunday, you'll receive an email with helpful information to help you better understand a particular topic. For more explained articles, check riseshine.in.
In a joint open letter, nearly 1,500 technology leaders, including Elon Musk, are calling for a six-month pause in the development of artificial intelligence. The letter, “Pause Giant AI Experiments: An Open Letter,” is published on the website of the Future of Life Institute.
The letter calls for developers to work instead on making today’s AI systems “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
The idea is that AI development should be “planned for and managed with commensurate care and resources.” However, the authors of the letter say that this level of planning is not happening.
That level of care, the authors argue, is absent: “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
If this sounds familiar, it is because these issues were raised some 75 years ago by the legendary science fiction author Isaac Asimov, who postulated his Three Laws of Robotics: instructions intended to be hard-coded into any AI, which at the time was assumed to take the form of robots. Asimov later added a Zeroth Law, extending the laws' protection to humanity as a whole.
Speaking at the World Science Fiction Convention in Boston in 1980, Asimov said the laws were intended to counter the then-popular view that evil robots would one day take over the world. At the time, the eventual growth of AI could not be foreseen, although it was suspected to be possible.
The rapid growth of AI has already begun to bear out some of Asimov’s concerns.
“We are witnessing rapid advances in generative AI and AGI (artificial general intelligence) development at a breakneck speed with little thought for the implications on society,” said Wasim Khaled, CEO of Blackbird.AI.
“Social media platforms have been around for over a decade, but have failed to moderate human-speed discourse and the related harms. Now, with readily available technology that can generate unlimited narratives and media, we risk warping reality in previously unimaginable ways.”
Khaled said that AI carries the risk of great harm to humanity. “Generative AI is posing unprecedented societal and national security risks from a cybersecurity perspective,” he said. “Threat actors now have an incredibly powerful tool in their toolkit, enabling everything from disinformation generation to malicious code creation. Furthermore, the popularity of these tools across all modes of work is resulting in massive exposure to enterprise strategy, intellectual property, and other forms of confidential data being fed into large language models with unknown or ever-changing privacy policies.” (Generative AI is the type of artificial intelligence in which the AI generates new material, and sometimes new instructions.)
AI And Threats To Privacy And Security
Along with the risk to privacy, AI also poses a risk to cybersecurity, one that a pause in AI development could exacerbate, especially given predictions of strong growth in the field. A recent report, “Aerospace, Defense and Government M&A Review” by market intelligence company HigherGov, predicted strong demand for services, and thus personnel, in several government technology areas, notably space and government cybersecurity.
This is made worse because these tools can alter our view of reality as a result of choices made during their development. According to Khaled, this requires vigilance.
“If we continue to overlook the influence of AI programming on decision-making processes or the risk of centralized technologies compromising personal data privacy,” Khaled said, “the unchecked use of generative AI tools has the potential to dramatically alter our perception of reality. We must evaluate multiple scenarios and weigh the cost versus benefits of AI disruption to avoid a distorted reality.”
Source: The Verge, Forbes, Fortune.
Thank you for reading our newsletter!🤗
If you enjoyed it, please consider liking and sharing it with your friends and followers on social media.
Every bit of engagement helps us to grow and improve, and we appreciate your support. Thank you again, and we hope to see you in our next edition!