Zoe Kleinman, Technology editor
Mark Zuckerberg is alleged to have begun work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014.
It is set to include a shelter, complete with its own energy and food supplies, though the carpenters and electricians working on the site were banned from talking about it by non-disclosure agreements, according to a report by Wired magazine.
A six-foot wall blocked the project from view of a nearby road.
Asked last year if he was creating a doomsday bunker, the Facebook founder gave a flat "no". The underground space spanning some 5,000 square feet is, he explained, "just like a little shelter, it's like a basement".
That hasn't stopped the speculation – likewise about his decision to buy 11 properties in the Crescent Park neighbourhood of Palo Alto in California, apparently adding a 7,000 square foot underground space beneath.
Although his building permits refer to basements, according to the New York Times, some of his neighbours call it a bunker. Or a billionaire's bat cave.
Then there is the speculation around other tech leaders, some of whom appear to have been busy buying up chunks of land with underground spaces, ripe for conversion into multi-million pound luxury bunkers.
Reid Hoffman, the co-founder of LinkedIn, has talked about "apocalypse insurance". That is something about half of the super-wealthy have, he has previously claimed, with New Zealand a popular destination for homes.
So, could they really be preparing for war, the effects of climate change, or some other catastrophic event the rest of us have yet to know about?
In the past few years, the advancement of artificial intelligence (AI) has only added to that list of potential existential woes. Many are deeply worried at the sheer speed of the progress.
Ilya Sutskever, chief scientist and a co-founder of OpenAI, is reported to be one of them.
By mid-2023, the San Francisco-based firm had released ChatGPT – the chatbot now used by hundreds of millions of people across the world – and they were working fast on updates.
But by that summer, Mr Sutskever was becoming increasingly convinced that computer scientists were on the brink of developing artificial general intelligence (AGI) – the point at which machines match human intelligence – according to a book by journalist Karen Hao.
In a meeting, Mr Sutskever suggested to colleagues that they should dig an underground shelter for the company's top scientists before such a powerful technology was released on the world, Ms Hao reports.
"We're definitely going to build a bunker before we release AGI," he is widely reported to have said, though it is unclear who he meant by "we".
It sheds light on a strange fact: many leading computer scientists and tech leaders, some of whom are working hard to develop a hugely intelligent form of AI, also seem deeply afraid of what it could one day do.
So when exactly – if ever – will AGI arrive? And could it really prove transformational enough to make ordinary people afraid?
An arrival 'sooner than we think'
Tech leaders have claimed that AGI is imminent. OpenAI boss Sam Altman said in December 2024 that it will come "sooner than most people in the world think".
Sir Demis Hassabis, the co-founder of DeepMind, has predicted it within the next five to 10 years, while Anthropic founder Dario Amodei wrote last year that his preferred term – "powerful AI" – could be with us as early as 2026.
Others are doubtful. "They move the goalposts all the time," says Dame Wendy Hall, professor of computer science at Southampton University. "It depends who you talk to." We are on the phone but I can almost hear the eye-roll.
"The scientific community says AI technology is amazing," she adds, "but it's nowhere near human intelligence."
There would need to be a number of "fundamental breakthroughs" first, agrees Babak Hodjat, chief technology officer of the tech firm Cognizant.
What's more, it is unlikely to arrive as a single moment. Rather, AI is a rapidly advancing technology, it's on a journey and there are many companies around the world racing to develop their own versions of it.
But one reason the idea excites some in Silicon Valley is that it is thought to be a precursor to something even more advanced: ASI, or artificial super intelligence – tech that surpasses human intelligence.
It was back in 1958 that the concept of "the singularity" was attributed posthumously to Hungarian-born mathematician John von Neumann. It refers to the moment computer intelligence advances beyond human understanding.
More recently, the 2024 book Genesis, written by Eric Schmidt, Craig Mundie and the late Henry Kissinger, explores the idea of a super-powerful technology that becomes so efficient at decision-making and leadership that we end up handing control to it completely.
It is a matter of when, not if, they argue.
Money for all, without needing a job?
Those in favour of AGI and ASI are almost evangelical about its benefits. It will find new cures for deadly diseases, solve climate change and invent an inexhaustible supply of clean energy, they argue.
Elon Musk has even claimed that super-intelligent AI could usher in an era of "universal high income".
He recently endorsed the idea that AI will become so cheap and widespread that almost anyone will want their "own personal R2-D2 and C-3PO" (referencing the droids from Star Wars).
"Everyone will have the best medical care, food, home transport and everything else. Sustainable abundance," he enthused.
There is a scary side, of course. Could the tech be hijacked by terrorists and used as an enormous weapon, or what if it decides for itself that humanity is the cause of the world's problems and destroys us?
"If it's smarter than you, then we have to keep it contained," warned Tim Berners-Lee, creator of the World Wide Web, speaking to the BBC earlier this month.
"We have to be able to switch it off."
Governments are taking some protective steps. In the US, where many leading AI companies are based, President Biden passed an executive order in 2023 that required some firms to share safety test results with the federal government – though President Trump has since revoked parts of the order, calling it a "barrier" to innovation.
Meanwhile in the UK, the AI Safety Institute – a government-funded research body – was set up two years ago to better understand the risks posed by advanced AI.
And then there are those super-rich with their own apocalypse insurance plans.
"Saying you're 'buying a house in New Zealand' is kind of a wink, wink, say no more," Reid Hoffman has previously said. The same presumably goes for bunkers.
But there is a distinctly human flaw.
I once met a former bodyguard of one billionaire with his own "bunker", who told me his security team's first priority, if disaster really did strike, would be to get rid of said boss and get in the bunker themselves. And he did not seem to be joking.
Is it all alarmist nonsense?
Neil Lawrence is a professor of machine learning at Cambridge University. To him, this whole debate is itself nonsense.
"The notion of Artificial General Intelligence is as absurd as the notion of an 'Artificial General Vehicle'," he argues.
"The right vehicle depends on the context. I used an Airbus A350 to fly to Kenya, I use a car to get to the university each day, I walk to the cafeteria… There is no vehicle that could ever do all of this."
For him, talk of AGI is a distraction.
"The technology we have [already] built allows, for the first time, normal people to directly talk to a machine and potentially have it do what they intend. That is absolutely extraordinary… and utterly transformational.
"The big worry is that we're so drawn in to big tech's narratives about AGI that we're missing the ways in which we need to make things better for people."
Current AI tools are trained on mountains of data and are good at spotting patterns: whether signs of a tumour in scans or the word most likely to come after another in a particular sequence. But they do not "feel", however convincing their responses may seem.
"There are some 'cheaty' ways of making a Large Language Model (the foundation of AI chatbots) act as if it has memory and learns, but these are unsatisfying and quite inferior to humans," says Mr Hodjat.
Vince Lynch, CEO of the California-based IV.AI, is also wary of overblown declarations about AGI.
"It's great marketing," he says. "If you are the company that's building the smartest thing that's ever existed, people are going to want to give you money."
He adds: "It's not a two-years-away thing. It requires so much compute, so much human creativity, so much trial and error."
Asked whether he believes AGI will ever materialise, there is a long pause.
"I really don't know."
Intelligence without consciousness
In some ways, AI has already gained the edge over human brains. A generative AI tool can be an expert in medieval history one minute and solve complex mathematical equations the next.
Some tech companies say they do not always know why their products respond the way they do. Meta says there are some signs of its AI systems improving themselves.
Ultimately, though, no matter how intelligent machines become, biologically the human brain still wins. It has about 86 billion neurons and 600 trillion synapses, many more than the artificial equivalents.

The brain doesn't need to pause between interactions either, and it is constantly adapting to new information.
"If you tell a human that life has been found on an exoplanet, they will immediately learn that, and it will affect their world view going forward. For an LLM [Large Language Model], they will only know that as long as you keep repeating this to them as a fact," says Mr Hodjat.
"LLMs also don't have meta-cognition, which means they don't quite know what they know. Humans seem to have an introspective capability, sometimes known as consciousness, that allows them to know what they know."
It is a fundamental part of human intelligence – and one that is yet to be replicated in a lab.
Top image credit: The Washington Post via Getty Images / Getty Images. Lead image shows Mark Zuckerberg and a stock image of a bunker in an unknown location
BBC InDepth is the home on the website and app for the best analysis, with fresh perspectives that challenge assumptions and deep reporting on the biggest issues of the day. And we showcase thought-provoking content from across BBC Sounds and iPlayer too. You can sign up for notifications that will alert you when a BBC InDepth story is published.

