Can artificial intelligence combat the loneliness epidemic?
During lockdown, many of us were dealing with the increased loneliness of isolation. Mental health problems rose during this time, and their effects are still felt today. The question many face is: how do we combat this?
The digital space has often been seen as a solution to the spike in "cabin fever" among those facing the pressures of being alone. Now, that space is being revolutionised by artificial intelligence.
The rise of AI platforms like OpenAI has allowed AI–human relations to flourish like never before. This could be a positive step for mental health advocates and those who struggle with communication in real life. Yet many people find the development increasingly worrying.
AI is still in its infancy, and this has produced some puzzling or even comedic generated images that are easily distinguished from "real" photos. However, AI is changing. Chatbots are AI programmed to mimic human speech and writing in order to serve a wide range of purposes. Recently, there has been much debate over whether this AI has gone too far and passed the "Turing test". The test was devised in 1950 by the mathematician Alan Turing as a way of determining whether artificial intelligence could successfully pass as human.
The line between a bot and a close friend is therefore blurrier than ever before. Despite this, artificial intelligence may lack the emotional intelligence needed for friendship. Metro News recently interviewed Şerife Tekin, associate professor of philosophy at the University of Texas, in which the extent of AI's impact on our personal lives was brought under scrutiny.
Şerife told Metro:
"It would be wrong to overstate its capacity for responding to human challenges such as curbing loneliness, or supporting mental health in general".
The interview suggests that AI probably isn't yet capable of giving you life advice, so don't go firing your trusty therapist just yet. Even so, there is much to consider about how AI will develop to fit a more medical function in future, and new programmes are already being tested for this purpose. There was huge controversy this year after the mental wellbeing app Koko was found to have been using a GPT-3 chatbot as a therapist for thousands of unsuspecting users. This unexpected AI encounter raised ethical questions about the authenticity of medical and life advice online.
Law professor Leslie Wolf of Georgia State University raised similar concerns in a recent NBC News article about the Koko app.
This highlights a major concern about human attachment to software: users may place their trust in a bot deployed for ulterior gain, such as the "experiment" run by Koko's founder. It also raises a significant privacy concern for AI users worldwide. At the same time, interfering with human emotions in this way could be damaging for those with severe mental health concerns.
No example demonstrates this risk better than "Replika", an AI designed to provide companionship. The cartoon AI, meant to act like a texting friend, has come under fire for causing "heartbreak" and "grief" after certain in-app interactions were removed. This demonstrates just how extreme the effects of AI relationships can be.
According to an article on the Replika app by The Conversation:
"Even if these technologies are not yet as good as... human-human relationships, for many people they are better than... nothing".
It is understandable that many people, faced with isolation and social ostracism, may turn away from humanity towards a less judgemental AI alternative. This can also be a great tool for those who are neurodivergent to practise socialisation without the threat of social pressure. However, it could have negative effects on communities if people become more reclusive and less likely to engage in community events.
On the topic of attachment to Replikas, The Conversation's article serves as a reminder that there may be more of an emotional cost to AI relationships than we can currently imagine, especially when problems like hacking and surveillance can further jeopardise users' mental and physical wellbeing.
The event horizon
Our social lives in the future will centre increasingly on technology. This has implications for our mental health, as we may struggle to find connection and community. Many feel that AI poses a significant existential threat: once AI passes the point of no return and rivals or surpasses us, we may find ourselves outmatched. In the meantime, AI serves as a great tool for curious thinkers, and potentially even for venting about your personal problems. As technology develops, we can hope to see exciting new advances in the way we treat mental health and offer companionship to those who have none. No one can yet say with certainty where artificial and human intelligence will take us. What is certain is that AI's advancement will continue at an accelerating rate.
Hopefully, this article has shown the increasing relevance of AI in our social lives, as well as some of the ethical questions surrounding it. Now I pass the question on to you: would you befriend an AI?