In the 21st century, AI is more than a buzzword; it is an essential element of many sectors, from education and manufacturing to fashion design and architecture. Though it now dominates news headlines and magazine covers, the technology is far from novel: the first mentions of AI and its foundations date back to the 1950s.
Refined over the years and adapted to the needs of modern consumers, generative programs as we know them create unique texts or vibrant images from human-written prompts, some more elaborate than others. And though much of society believes AI marks the start of an entirely new era, one poised to disrupt everything in its path, some experts take a more pragmatic approach.
For Paul Hartley, PhD, CEO of Human Futures Studio (HFS), a deliberately small research consultancy at the intersection of foresight strategy, anthropology, behavioral economics, and design research, the world is far from turning into a dystopian fantasy run by artificial intelligence. “Believing that the current tech landscape is revolutionary is more short-sighted than one might think,” he explains. “For decades, if not centuries, people have worked toward what AI is today. This ‘revolution’ has been a long time coming, and the true revolution will happen once AI starts creating what humans hadn’t even visualized.”
Amid loud voices pushing for AI to become a go-to tool for general intelligence, Hartley’s stance centers on the overlooked benefits of applied, or “weak,” AI. “General intelligence as a goal makes sense, but we can’t limit ourselves,” he stresses. “The general AI hype is real, and it’s clouding the full spectrum of what this technology can achieve.”
The modern perception of AI, according to HFS, is the result of “the general AI myth,” a narrative stemming from centuries of developments, discoveries, and theories. To illuminate this landscape, the consultancy identified 10 components of the myth, at the heart of which lies the misconception that machines are superior to humans. With this assumption acting as a pillar, many people believe general AI, if it ever arrives, will replace or even threaten human activity.
As Hartley muses, these notions mistake people’s intentions for the technology’s own capabilities. Until artificial intelligence becomes fully sentient, it will remain nothing but a tool subordinate to human control. The same conversation trickles into AI-related concerns, all of which center on the potential problems the technology may cause. Because AI has not yet fulfilled its promise, the daunting scenarios society has imagined are nothing but fiction, and, as long as AI remains a tool with little autonomy, they will stay that way.
This two-fold AI hype, a pair of modern-day myths, pushes the true potential of AI into the background. The first misconception commends technology for forging a better future through perpetual evolution; the second is more dystopian, inspired by 20th-century science fiction, written at a time when fear of the atomic bomb infiltrated society’s mindset and gave technology a bad name.
In a world where technology is blamed for threatening humans, Hartley’s sobering insight makes the picture clear: it is humans who control how deeply AI impacts global dynamics. As a mere tool, the technology poses no risks on its own; accountability instead rests with the business and thought leaders embracing AI in pursuit of greater productivity and profits.
“AI-informed or not, people do have a choice,” he adds. “The complex dynamic of technology needs three parties: user, tool, and task. AI can’t simply separate itself from humans, and that’s what most AI concerns come down to.”
The hows and what-fors of the technology are the real issues, according to Hartley. To bridge the gap between public perception and AI’s true nature, HFS advocates sound AI design grounded in regulation and strict oversight. Guided in an accurate and positive direction, the power of AI will transcend weaponized applications, spotlighting its full range of benefits. A better approach would allow humans to manage AI as we do nuclear energy, poison gas, and other more dangerous technologies, ensuring they stay in responsible hands.
In true anthropological fashion, Hartley concludes: “The truth is, we really don’t know what will happen. But one thing is for sure: people control the algorithm, not the other way around.”