Could human extinction caused by artificial intelligence soon become a realistic possibility?
The erasure of the human race is an almost incomprehensible idea. For centuries, scientists have hypothesized how our species might meet its demise, from nuclear war to global pandemics. Now, as technology advances and digital human innovation moves to the forefront of public discussion, AI has been thrown into the ring as a serious candidate for our downfall.
British theoretical physicist Stephen Hawking was one of the first prominent figures to sound the alarm on the potential for artificial intelligence to cause human extinction. The world-renowned thinker explained in a 2014 interview, “The development of full artificial intelligence could spell the end of the human race.” While admitting that the future is unknown and that these systems could prove valuable for humanity, Hawking warned that advancements in the field could be “the worst events in the history of our civilization.” Although his viewpoints were blunt and shocking to many, the famous scientist was only the first in a line of experts to speak out on the hazards of this rapidly growing technology.
It may sound contradictory, but even the proclaimed “godfather of AI,” Geoffrey Hinton, regularly warns the public about the future consequences of AI. Having won a Turing Award for his extensive work on neural networks, the computer scientist is widely revered as a pioneer of the AI world. He now uses his platform to educate others on the severity of rapid AI evolution. In a recent interview with the BBC, Hinton explained, “GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way…given the rate of progress, we expect things to get better quite fast. So we need to worry about that.” In fact, just over a month ago, Hinton quit his research position at Google – where he had worked for over a decade – in order to speak more freely about the growing dangers of current AI developments.
Within the past two months, tech juggernauts Elon Musk and Apple co-founder Steve Wozniak signed an open letter calling for a pause on AI innovation as a whole. The letter states, “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” warning of serious consequences otherwise. GPT-4 is the newest iteration of AI powerhouse OpenAI’s multimodal language model – the company’s most advanced version yet, able to process both text and images and produce responses that closely mimic those of a human.
The magnitude of OpenAI’s invention can be quite difficult to wrap one’s head around. The company released its public chatbot, ChatGPT, on November 30, 2022. Just five days after launch, the platform crossed one million users – a milestone that took other notable media powerhouses months or even years to reach.
ChatGPT’s momentum continued to steamroll the industry: the model soon broke the record for fastest-growing consumer platform, surpassing 100 million users within two months. The public, it seems, has decided that AI is here to stay, and with it comes ease in everyday tasks. Whether it be creative or formal writing, data gathering, art, code, or music built from generative voice mockups of popular artists, the possibilities for eliminating tedious work from the average individual’s day are endless.
Although this may sound enticing – it can alleviate stress and free up time for other passions – major technologists and advocacy groups are calling for serious regulation because the boundary between the natural and the artificial is disappearing. As Hinton noted, chatbots and AI models already store far more knowledge than an average human. As these systems develop, it will become increasingly difficult to distinguish the authentic from the fake in most fields. From original creative work to AI-generated voices, an individual’s entire identity can be compromised by an indistinguishable replication online. This is why ethical and professional regulations must be enforced to keep this rapidly accelerating technology from reaching a point of no return.
So, will artificial intelligence really cause human extinction? That is truly up to us to decide. The good news is that by most professional accounts, the hypothesis that AI will develop its own consciousness and quickly overtake our species is highly unlikely – or, at least, we are very far removed from that becoming a legitimate threat. At its core, human motivation will always seek to outperform its current status; there will always be a collective drive to push beyond what we are currently capable of and venture into our untapped potential. That artificial intelligence is now tied to such a catastrophic idea should serve as a wake-up call for regulation, and for constant moral scrutiny of the ever-evolving world of AI.
ABOUT IDENTITY REVIEW
Identity Review is a digital think tank dedicated to working with governments, financial institutions and technology leaders on advancing digital transformation, with a focus on privacy, identity, and security. Want to learn more about Identity Review’s work in digital transformation? Please message us at firstname.lastname@example.org. Find us on Twitter.