Do you remember this?
https://aistatement.com/
This statement, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” was issued by AI experts in 2023, including the CEOs of AI behemoths such as OpenAI and Anthropic, as well as Geoffrey Hinton, widely considered the Godfather of AI.
While some form of AI has existed for decades – Siri arrived back in 2011 – AI in the form of a generative large language model (LLM) was released into the world as ChatGPT in 2022. Rival LLMs from companies such as Anthropic and Google soon followed.
So all of us were told, by the AI scientists who developed the technology at companies like OpenAI and Anthropic, as well as by their CEOs, that AI could evolve to the point where it could kill all Humans. The clearly stated risk is from AI itself, not from the use of AI. They must have known of this risk when they approved the release. Of course, the end of Humanity hasn’t happened yet, and it will never happen courtesy of AI in and of itself, for reasons outlined below. Before dealing with AI’s significant limitations, we need to ask this question…
How was it that something its own developers, and the executives of the private companies leading its development, considered a risk to the existence of Humanity was released into the world without massive and sustained testing to ensure, to the degree possible, that this risk could never manifest in the actual elimination of all Humans, everywhere?
These companies didn’t have to release generative LLM AI into the world. They rushed to do so in a race to beat each other to the massive profits they believed would ensue should the world adopt AI, much as we adopted the internet itself. The answer to this question seems clear…
Generative LLM AI was released into the world by the CEOs of companies that had a clear understanding that they could be risking the existence of Humanity, solely in pursuit of profit.
The term you are looking for is “monster”, although that doesn’t quite do it. This wasn’t just criminal negligence. In fact, we don’t have a crime to fit what these people did. To make money, as far as they knew, they knowingly risked killing all of us.
Incidentally, the board here at Mewetree.blogspot.com asked ChatGPT whether there are any examples from history where a single decision made only in pursuit of profit resulted in a risk to the lives of all Humans. ChatGPT could come up with only one example – the release of generative LLM AI. Fair is fair – we here at Mewetree.blogspot.com do hereby give AI a nod for honesty.
It will never happen, and here’s why…
The Fraud:
Artificial Intelligence isn’t intelligent.
Applying the word “intelligence” to AI – and, with it, words like “reason” and “know” to describe what it does – implies that Artificial Intelligence actually is intelligent, that it reasons and knows things. That implication is completely fraudulent.
Words mean things. Here is what “intelligence” means – “the ability to learn, understand, and make judgements or have opinions that are based on reason.”
Let’s look at each aspect of that definition in turn and compare it to what AI actually does.
“Learn” means, “to get new knowledge or skill in a subject or activity.” AI doesn’t do this. It draws on the vast mass of human knowledge it was trained on and spits out data in the form of arranged words that respond to whatever question it has been asked. Learning requires volition (note the “to get”). AI has no volition. Without a question being asked, it sits entirely at stasis – a point the short code sketch at the end of this section makes concrete.
“Understand” means, “to know the meaning of something that someone says.” The key term here is “to know”, which means “to have information in your mind.” AI doesn’t have information in its mind because AI doesn’t have a mind (confirmed by ChatGPT), so it does not, and cannot, “know”. Ergo, it cannot “understand”.
“Make Judgements” means, “to make a decision or form an opinion about someone or something after thinking carefully.” The key term here is “thinking”, which means “the activity of using your mind to consider something.” As we have seen, AI doesn’t have a mind. Ergo, it cannot think, so it cannot make judgements.
“Have opinions” means to have “a thought or belief about something or someone”. The key terms here are “thought” and “belief”. “Thought” means thinking, and we have seen that thinking requires a mind. “Belief” means “the feeling of being certain that something exists or is true.” AI does not have a mind, so it cannot think; it also cannot feel, so it cannot have beliefs. Therefore it cannot have opinions.
“Reason” means, “the ability of a healthy mind to think and make judgments, especially based on practical facts”. As we have seen, AI does not have a mind. It doesn’t do anything based on reason.
AI does literally nothing that constitutes the core elements of the definition of “intelligence”. It is not intelligent. Again, it does not reason and it doesn't know anything. Any suggestion to the contrary is entirely fraudulent.
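To make the statelessness point concrete, here is a minimal toy sketch in Python. This is our own illustration, not any vendor’s actual code: the tiny lookup table stands in for the billions of trained weights a real model uses. But the shape of the process is the same – a function that, given a prompt, emits likely next words, and does nothing at all between calls.

# A toy sketch (illustrative only, not any vendor's real model) of the
# point above: a generative LLM is, functionally, a stateless
# text-in, text-out routine. It does nothing until a prompt arrives,
# and it carries nothing over after the answer is produced.

def generate(prompt: str, max_tokens: int = 10) -> str:
    """Toy stand-in for an LLM: repeatedly emit the 'most likely' next word."""
    # A real model scores every possible next token using billions of
    # trained weights; this fake "model" is a trivial lookup table.
    next_word = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    words = prompt.lower().split()
    for _ in range(max_tokens):
        continuation = next_word.get(words[-1])
        if continuation is None:  # no known continuation: fall silent
            break
        words.append(continuation)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat sat on the cat sat"
# Between calls there is no activity, no memory, no volition:
# just inert data and a function waiting to be invoked.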
The Conspiracy:
AI is tremendously helpful if kept within this realm…
"The best uses of AI involve automating mundane tasks, boosting creativity, enhancing data analysis, and improving decision-making across daily life and work, from summarizing emails and generating code to diagnosing diseases, creating art, and tackling global issues like climate change and hunger. AI excels at personalizing experiences (recommendations, chatbots), optimizing complex systems (manufacturing, cybersecurity), and providing powerful tools for education, healthcare (cancer screening, drug discovery), and accessibility for people with disabilities." (Google AI)
When it comes to this stuff, AI has no equal. This is massive progress. AI may not be intelligent, but the people who developed this tool are geniuses. Any intelligence here is Human, not Artificial.
Without an active intelligence, AI on its own will never smite all Humans. The AI developers and CEOs of AI companies must know this. So they took the supposed risk to all Humanity in releasing AI because they knew there was no such risk – AI isn’t actually intelligent. There was no crime committed here, but maybe there was something else at work, something entirely normal.
Query - Why did they issue the dire warning noted at the start of this blog, and many other similar warnings?
AI was released into a world with no regulations whatsoever. A declaration by AI developers and CEOs of AI companies that their product could kill everyone on the planet gets them a very important seat at the table when regulations are drafted to deal with the supposed threat. Having portrayed themselves both as the experts and as the harbingers of doom, were they setting themselves up for a profitable round of regulatory capture?
“Regulatory capture” is this: “Regulatory capture is when a government agency, meant to serve the public interest, starts acting in favor of the industry it's supposed to regulate, often due to close relationships, lobbying, and the "revolving door" of personnel between industry and government, resulting in rules that benefit the industry over consumers or the public. This leads to policies that protect established firms, stifle competition, or ignore public welfare for private gain, as seen with examples like financial or pharmaceutical industries.” (Google AI)
The key point is this: companies that seek regulatory capture try to co-opt government into killing their competition, enabling them to maximize profits.
Is this what the AI companies have been doing? We don’t know yet as AI regulation is in its infancy. If yes, this would be a completely normal activity for dominant firms in a new area of commercial endeavor.
We shall see.
Conclusion:
AI on its own won't kill us, but will someone try to kill us using AI? That's fodder for a different blog. Until then, this...
What would Tom Cruise, on a donkey, wearing a fedora, in the desert, look like?
See here...