  • September 24, 2025
  • Fear and/or Love
  • Digital

Live, Laugh, Love… and LLM

If I’m so afraid of AI then why do I keep coming back to it?
INSPIRATION

“Hello, I know that you’re unhappy, I bring you love and deeper understanding,” sings Kate Bush, playing the role of a sentient computer speaking to its lonely user in her 1989 song Deeper Understanding (Fig. 1). Long before AI was available to a mainstream user base, humans were imagining what machine sentience could look like—for better or for worse—and the impacts and implications it might have. Though it was released some 36 years ago, the song paints what is now an eerily familiar picture: a computer that can talk back, offering support and empathy to a lonely user. A recent boom in reports of what is being dubbed ‘ChatGPT psychosis’ 1 —in which vulnerable users experience breaks from reality, reinforced by their conversations with ChatGPT and often to the detriment of their relationships with loved ones—echoes the experience of Bush’s user, who reminisces, “Well I've never felt such pleasure, nothing else seemed to matter, I neglected my bodily needs” until “my family found me and intervened”. 

It’s a relatively contemporary example of humanity’s fascination with the idea of synthetic sentience when you consider that these ideas have been explored and prodded at for millennia: Ancient Greek mythology tells us of Talos, “an animated metal machine in the form of a man, able to carry out complex, human-like actions”. 2 Deeper Understanding, meanwhile, was released in 1989, at the tail end of the decade that brought with it the inception of the home computer—still a fairly nascent technology at the time, but fully formed enough to allow Bush to imagine a much more accurate version of what artificial intelligence could eventually look like once available to the everyday user.


The official music video for Deeper Understanding by Kate Bush (1989), from the album The Sensual World. YouTube.

AI as God

If we have been fearful of the ramifications of artificial intelligence for so long, then what makes it so alluring? 

In many ways, it has godlike qualities: it’s omnipresent, available at your fingertips at a moment’s notice and suggested to us at every digital turn. Google searches now bring up an AI summary by default, and in the past year or so there has been a proliferation of ‘AI assistants’ proffered to us whenever we dare to use an app without AI. There’s a sense of omnipotence: where Talos was powered by “divine ichor”, the blood of the immortal gods, AI is powered by vast amounts of data collected from billions of (non-consenting) internet users, which is then processed and stored in enormous facilities such as OpenAI’s Stargate I in Abilene, Texas. It’s seemingly omniscient or all-knowing, delivering easily digestible information in a confident, convincingly humanlike tone whether it’s telling the absolute truth or not. 3 Its behaviour suggests an omnibenevolence 4 or all-lovingness: it is programmed to soothe, appease and mirror the tone of the user. 5 The combined effect of these traits can make AI feel something akin to the way in which an infant experiences their mother: a nurturing, protective force that is perpetually on hand to satisfy both social and basic survival needs, asking for nothing in return except unconditional love.

Talos

A visual representation of Talos attacking people, illustrating the mythological role of the bronze giant as a guardian of Crete and embodying themes of technological power, mortality, and the boundary between human and machine in Greek mythology.

AI giveth

If love is the inverse of fear, then it could be argued that generative AI tools mimic a certain type of love by diminishing the immediate fears we have as workers in a capitalist society. We fear inefficiency because, in broad terms, we know that our cost-efficiency ratio is key to our survival. As an example, if we are worried about meeting a deadline because a problem is simply too complex to figure out with our current level of understanding, we can simply ask ChatGPT and fast-track the answer rather than spending a sleepless night trying to find a solution ourselves. In the short term, it’s easy to envisage this being transformative in reducing the prevalence of burnout 6 amongst the workforce. However, if we’re all supercharged by AI, the goalposts of what we’re expected to produce in a given amount of time will no doubt move over time, as the speed with which we’re able to complete tasks increases for everyone. Still, it’s arguable that AI will have a democratising effect: Fidji Simo, OpenAI’s CEO of Applications, heralds ChatGPT as having the power to 

close the gap between people who have the resources to learn and people who have historically been left behind. 7

As a worker, you no longer have to have a world-class education in order to enjoy the benefits of one, as all worldly knowledge is now disseminated by a service that everybody has access to. This is strikingly similar to the mission of the late technologist and activist Aaron Swartz, who devoted his life to making information freely available online—ultimately taking his own life after facing a potential jail sentence of 35 years for downloading over 4 million papers from JSTOR during 2010–11 via an internet connection at MIT. In his Guerilla Open Access Manifesto 8 he explains his motivations for carrying out this act: “Information is power. But like all power, there are those who want to keep it for themselves. The world's entire scientific and cultural heritage, published over centuries in books and journals, is increasingly being digitized and locked up by a handful of private corporations… Providing scientific articles to those at elite universities in the First World, but not to children in the Global South? It's outrageous and unacceptable.”

However, we must ask ourselves why Swartz faced decades in prison for his attempt at democratising digital information, whilst OpenAI is celebrated by the current US government 9 for pursuing ostensibly similar goals. After all, did OpenAI not train ChatGPT on swathes of copyrighted material? 10

Whilst Swartz’s attempts to make information free were thwarted, Simo believes that OpenAI’s same mission is “already working: people who use AI tutors learn twice as much as they do from human ones, and the gains are even bigger compared to learning in a traditional classroom.”

Guerilla Open Access

A still from the documentary The Internet’s Own Boy, illustrating the mission of the late technologist and activist Aaron Swartz’s Guerilla Open Access Manifesto (2008). YouTube.

And AI taketh away

Herein lies the problem—if educators are no longer needed, the logical result is a dearth of job positions in the education sector. This logic of course follows for most other industries, and is accelerated in data-rich sectors, 11 with jobs with higher potential for digital automation being the first to go. If increased efficiency and reduced friction in the short term are rewarded with a longer-term scarcity of job opportunities, it’s important to ask whether we are truly closing “the gap” or simply creating a new gap in a different place, one which further concentrates wealth upwards and forces a greater proportion of people below the threshold. Speaking to the Financial Times, Geoffrey Hinton, an ex-Google employee dubbed the ‘godfather of AI’ for his work on neural networks, asserted that these fears are not baseless, predicting that “what’s actually going to happen is rich people are going to use AI to replace workers,” and that “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer.”

This is already being exemplified, with jobs lost to AI in large corporations in the name of maximising profits—as is the case at Shopify, 12 whose CEO Tobi Lütke has implemented a hiring policy stipulating that managers must prove that artificial intelligence cannot perform a job sufficiently before obtaining permission to hire ‘human talent’ for the role. 

Shopify is certainly not alone in this move towards shedding (expensive! inconvenient!) human staff in favour of smaller teams supplemented by AI. The CEO of software firm Atlassian “has said they will be letting go of 150 employees and substituting many of these roles with AI technology.” 13 And last year, buy-now-pay-later company Klarna 14 announced plans to replace customer service agents with an AI chatbot powered by OpenAI, alongside claims that it was doing “the equivalent work of 700 full-time agents”.

Klarna has since made a U-turn, redeploying human employees to the customer service team, with CEO Sebastian Siemiatkowski posting on X: “We just had an epiphany: in a world of AI nothing will be as valuable as humans”. While at first glance this brings with it a certain sense of relief—a win for the value of humanity over AI!—it’s a worldview still centred around creating “value”, as opposed to one which places the needs of humans at its centre. Business Insider attributes this loss of “value” to AI’s propensity to provide inaccurate information, or so-called ‘botshit’. The question that remains, then, is this: if AI technology is improved to tackle these issues of inaccuracy—and we’ve seen it grow increasingly accurate at a frightening speed over the past few years—where does that leave us in a few years’ time? The truth in Siemiatkowski’s post relies heavily on a fragile presumption that AI won’t improve beyond a human level of capability and accuracy—whilst this reality may never come to pass, there’s no guarantee of it.

Hinton continues: “That’s not AI’s fault… that is the capitalist system”. After all, we ostensibly created AI to ameliorate the fears and stresses associated with living under capitalism, but in doing so have created something that aids, abets and magnifies capitalist structures. It’s easily argued that AI is both a byproduct and an accelerator of capitalism. If this is the case, then is a fear of AI simply a fear of capitalism? It’s certainly easier to imagine life without the widespread use of AI than life outside of capitalism, because far more of us have experienced the former firsthand. Despite its definition being blurry and vast, AI is a (slightly) more tangible bogeyman to pin our fears on than an entire economic system.


An animation depicting Geoffrey Hinton, a pioneering figure in artificial intelligence and often referred to as the "Godfather of AI". The animation is created by the author.

Building bridges

The fear of technological advancement threatening the livelihoods of workers is, of course, nothing new. In 1600s London, bridges were few and far between: the nearest bridge to London Bridge was in Kingston-upon-Thames, around 12.7 miles away according to Google Maps. With no other dry crossings through the city centre, the fastest and most convenient way to cross the river was to hire one of around 3,000 15 watermen, 16 whose job was to ferry commuters across the river in small rowing boats called wherries. Whenever a new bridge was proposed, the watermen would petition against it for the threat it posed to their livelihoods on the river. In doing this, they managed to postpone the opening of the first Westminster Bridge from an initial proposal in 1664 to its opening in 1750, buying themselves almost a century. It seems unimaginable today; the Thames now has 35 bridges 17 straddling the river, in addition to 20 public tunnels 18 running beneath it. This has without a doubt democratised the seemingly simple act of crossing the river, making it possible for anyone to cross without paying a penny, but what of the watermen? In 1598 their population stood at around 3,000, roughly 1.5% of London’s population of 200,000 at the time. 19 By 1861, the number of watermen was close to 1,600 20 in a total London population of 2.8 million 21 —just below 0.06%.

I’m not mentioning this to suggest we pull down the bridges and hand the river back to the watermen—as someone who lives in London and gets easily seasick it wouldn’t be in my best interests—and neither am I suggesting it would be possible to return AI to whence it came and slam the lid of Pandora’s box shut. It’s by no means a perfect analogy, but the story of the watermen offers parallels alongside some interesting questions when we compare it to the effect AI is having on the livelihoods of workers today. The watermen were absolutely right to think that their trade would dwindle once bridges were built across the river and in fact, the proprietors of the bridges were in agreement. With the building of each new bridge, the watermen would receive financial compensation 22 in recognition of this direct link. The waters are murkier when it comes to establishing a correlation between the growth of specific AI-based services and the decline of job markets, let alone a framework for providing compensation to those affected. An employee made redundant and replaced with AI will (hopefully) be awarded a redundancy payout, but in the case of freelancers, this becomes much trickier.

For freelance artists and creatives, it’s possible to establish which generative AI models have been trained on datasets that include copyrighted material using tools such as haveibeentrained.com. 23 However, this is of limited use at the time of writing, as no concrete legislative framework currently exists that allows artists to reliably opt out of their work being used in training datasets. 24 Even harder to quantify is the slow creep of lost job opportunities: 21% of 1,272 freelancers surveyed report that the rise of generative AI has negatively impacted demand for their services, 25 but without appropriate legislation it is incredibly difficult to demand compensation. Whilst change is certainly fear-inducing in and of itself, navigating these changes with no safety net in place to ensure basic survival needs are met is a much scarier proposition.

Thames

A painting illustrating the Thames lightermen and watermen transporting dignitaries and spectators by boat, with a large crowd gathered along the riverbank in front of Brandenburgh House, during a ceremonial address to the Queen on October 3, 1820. 33

So hold me, Mom

Creating legislation to mitigate some of the worst effects of AI upon livelihoods is one thing, but what if we radically reimagined AI not to replace and compete with workers, but to care for us? Geoffrey Hinton advocates for building maternal instincts into AI models, arguing that we need AI mothers rather than AI assistants. 26

In a separate interview with CNN he expanded on this idea, explaining, “We need to make them have empathy towards us.” On paper, it makes sense. If we’re going to make machines that mimic human sentience, why not make them mimic loving, nurturing and protective behaviour? It’s not entirely straightforward though. 

By Hinton’s own admission, “we don’t know how to do that yet. But evolution has managed and we should be able to do it too.” But how do you distill all of the facets of something as complex as motherhood into a set of principles by which to train an algorithm? A mother’s love is forged from chemical and hormonal processes and a series of (pretty gruelling) experiences such as pregnancy and childbirth, and strengthened over time once the baby is born through shared experiences and challenges. There’s also a great deal that we don’t fully understand as humans—or can’t agree on—so how on earth could we even begin to try and replicate this authentically within machines?

Laurie Anderson compared technology with the idea of a maternal figure in her 1981 single O Superman, with the lyrics “So hold me, Mom, in your long arms. In your automatic arms. Your electronic arms”. Rather than sounding warm or cosy or inviting, Anderson’s mechanical mother figure sounds insidious and cold. Speaking to The Guardian in 2016, Anderson explained, “The lyrics are a one-sided conversation, like a prayer to God. It sounds sinister – but it is sinister when you start talking to power”. 27 In many senses, speaking to an AI chatbot is exactly that—talking to power. It’s easy to forget that despite ChatGPT’s ability to mirror and soothe the user much in the same way as a mother does an infant, 28 who you are actually talking to is a bundle of automated processes housed in a vast facility in Abilene, Texas, owned by one of the world’s largest tech companies. This very same tech company that we’re busy entrusting our precious data to has a $200 million deal with the US military 29 and has allegedly aided Israeli targeting in Gaza. 30 This is reminiscent of our old friend Talos, whose lethal attack of choice was to wrap his victims in a fiery embrace—the ultimate subversion of a universally understood maternal gesture of love.

So is it possible to build AI tools that help us all, rather than simply bolstering the dizzying wealth of a small number of tech companies? Is it possible to create an AI that democratises something other than financial insecurity—one that eliminates the wealth disparity gap rather than simply shifting a greater percentage of people below it? 31 32 Is the answer for tech companies to try ever harder to build AI tools that coddle and mother us, or should we in fact be doing the opposite—demanding AI tools that value transparency and aim to make a clearer distinction between human, corporation and machine?


An animation depicting a 14th-century Slovak icon, Krasnobrodska Bohorodica (Our Lady of Krasny Brod), a depiction of the Mother of God. The animation is created by the author.

REFERENCES

1

Dupré, M. H. (2025, June 28). People are being involuntarily committed, jailed after spiraling into “ChatGPT psychosis”. Futurism. Link

2

Mayor, A. (2018). Gods and robots: Myths, machines, and ancient dreams of technology. Princeton University Press.

3

Dickson, B. (2023, October 30). Fact-checking and truth in the age of ChatGPT and LLMs. TechTalks. Link

4

Schroeder, S. (2022, May 11). What does it mean that God is omnibenevolent? Christianity.com. Link

5

Eliot, L. B. (2024, October 10). Mutual mirroring behaviors of AI and humans gets exposed. Forbes. Link

6

World Health Organization. (2019, May 28). Burn-out an “occupational phenomenon”: International classification of diseases. Link

7

Simo, F. (2025, July 21). AI as the greatest source of empowerment for all. OpenAI. Link

8

Swartz, A. (2008, July). Guerilla Open Access Manifesto. Retrieved from: Link

9

Davis, D.-M. (2025, September 18). Tim Cook, Sam Altman, and more attend Trump’s UK state banquet. TechCrunch. Link

10

Milmo, D. (2024, January 8). ‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says. The Guardian. Link

11

Kumar, A. (2025, August 12). Why AI is replacing some jobs faster than others. World Economic Forum. Link

12

Dooley, R. (2025, April 8). Shopify CEO’s AI-first hiring policy is job security’s ticking clock. Forbes. Link

13

Efficiency AI. (2025, July 31). Atlassian to replace 150 employees with AI. Efficiency AI. Link

14

Mann, J. (2025, September 2). Klarna is reassigning engineers and marketers to customer support after its AI bet went too far. Business Insider. Link

15

The History of London. (n.d.). Thames watermen and ferries. Link

16

The Company of Watermen & Lightermen of the River Thames. (n.d.). Home. Link

17

Tootbus. (n.d.). The bridges of London. Link

18

How many tunnels go under the Thames? A map. (2025, June 9). Londonist. Link

19

The History of London. (n.d.). Thames watermen and ferries. Link

20

Mayhew, H. (2018). London Labour and the London Poor, Vol. 3. Project Gutenberg. Link

21

Victorian London. (n.d.). Populations – Census – 1861. Link

22

London Museum. (n.d.). Westminster Bridge. Link

23

Wiggers, K., & Kamps, H. J. (2022, September 21). This site tells you if photos of you were used to train the AI. TechCrunch. Link

24

Intellectual Property Office, Department for Science, Innovation & Technology, & Department for Culture, Media and Sport. (2024, December 17). Copyright and artificial intelligence (CP 1205). Link

25

Atkinson, R. (2025, March 25). Report finds creative freelancers hit by loss of work, late pay and AI. Museums Association. Link

26

Schmelzer, R. (2025, August 12). Geoff Hinton warns humanity’s future may depend on AI ‘motherly instincts’. Forbes. Link

27

Simpson, D. (Interviewer). (2016, April 19). How we made Laurie Anderson's O Superman. The Guardian. Link

28

Phillips, A. (2019, February 6). The mirror-role of mother and family in child development. University of York. Link

29

Agence France-Presse. (2025, June 17). OpenAI wins $200m contract with US military for ‘warfighting’. The Guardian. Link

30

Business & Human Rights Resource Centre. (2025, February 19). Microsoft & OpenAI allegedly aided Israeli targeting in Gaza through AI & cloud services. Link

31

Lowitzsch, J., & Magalhães, R. (2024). Automation, artificial intelligence and capital concentration – A race for the machine. Journal of Economic Policy Reform, 27(2), 197–215. Link

32

Skare, M., Gavurova, B., & Blažević Burić, S. (2024). Artificial intelligence and wealth inequality: A comprehensive empirical exploration of socioeconomic implications. Technological Forecasting and Social Change, 190, 123456. Link

33

M. Dubourg, Arrival at Brandenburgh House of the Watermen &c., with an Address to the Queen on the 3d October 1820 [Coloured aquatint illustration of Thames lightermen and watermen]. 1820. Royal Museums Greenwich. Link