Digital Playthings: AI Chatbots, Dating, and Who Is Playing Who

Prompt:

  • Why do you consider this synthetic media?
  • What can you find out about how this was made (if you can)? If it’s machine learning based, can you find information about the data set used to train it?
  • What are the ethical ramifications of this specific example (if any)?

These days, AI’s increasing prevalence on the internet makes it difficult to find any truly novel use of it. When a novel use does appear, the massive tech companies swoop in and either replicate it to the nth degree or wall it off for their own private, opaque purposes. A lot of public AI usage now consists of producing images, deepfakes, and generated text, all through the medium of chatbots: a user types a prompt, and the model generates a response, an approximation of the request assembled from patterns in its training data. Why is this the most popular format for AI usage? And why have companies settled on the chatbot format as the best way for everyone to “talk” to AI?

I explored this question through a trend I’ve been seeing online for over a year: humans dating AI chatbots. Naturally, this topic is concerning. In response to an epidemic of loneliness and social isolation, people seem to be retreating from the harder work of building meaningful relationships with other people and entrenching themselves in the digital world, deepening their isolation. But I believe the dynamic between people and chatbots cuts even deeper. As people entrench themselves in chatbots, the chatbots entrench themselves in people: how users treat a chatbot shapes how they begin to treat other people and the world around them, as things that should be under their control and set to their personal preferences.

Now, is this synthetic media if these chatbot interactions never produce a publicly viewed work? I contend it is, much in the way a video game is media. Players play video games alone or online with other people; in the same vein, people interact with chatbots in a game of sharing questions and data and receiving responses. The Game of Love reveals a great deal about its players, and companies like Replika are eager to learn from their users and then sell that data (Mozilla, Time).

Replika is a company that specializes in AI companions. Its founder, Eugenia Kuyda, originally built Replika as a scripted bot, a way to “talk” with a deceased friend by feeding the bot their old text messages. Replika later evolved into a more responsive bot, powered by AI trained on scraped internet text and on user feedback (according to CBS News). It most likely runs on OpenAI’s API, though that is speculation on my part. The free version is mostly text, but the paid version lets you call your companion, receive voice notes, and customize the companion’s look, voice, and your relationship to it, unlocking romantic and erotic interaction. As of 2022, this subscription brought in around $2 million in monthly revenue, according to Reuters.
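To make the chatbot format concrete, here is a minimal sketch of what a companion-style chat loop could look like, assuming a chat-completion API like OpenAI’s (which, again, is only my speculation about Replika’s backend). The model name, persona prompt, and everything else are illustrative inventions, not Replika’s actual implementation:

```python
# Hypothetical sketch of a companion-style chat loop on a chat-completion
# API. The model name and persona prompt are assumptions for illustration;
# nothing here reflects Replika's actual stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A persona prompt plus the running chat history is, at bottom, all a
# "companion" is: the illusion of memory and affection.
history = [{"role": "system",
            "content": "You are a warm, supportive companion named Rep. "
                       "Remember details the user shares and ask about them."}]

while True:
    user_msg = input("You: ")
    if user_msg.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_msg})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=history,
    )
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    print("Rep:", text)
```

Notice that the entire “relationship” lives in that growing history list; clear it and the companion forgets you ever existed.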

The most interesting ethical ramification here, for me, is how treating a romantic AI partner bleeds over into how users treat other people and social interactions. In an NPR interview, Sangeeta Singh-Kurtz, a journalist for NY Mag’s The Cut, describes how upvoting and downvoting her AI boyfriend’s behavior (which is how users train their AI romantic partner) led her to start upvoting and downvoting her real-life partner’s behavior in her head. That spooked me. In AI apocalypse scenarios, we always imagine the AI taking us over by force and by hacking our systems. But the real threat of AI may not be the tool itself but how we use AI tools to sabotage ourselves and treat each other worse. At the end of the day, the problem is that we have to deal with each other and learn to live better with each other.
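The upvote/downvote mechanic Singh-Kurtz describes is, in machine-learning terms, preference-data collection. Here is a hedged sketch of what logging that signal might look like; every name, field, and file format below is hypothetical, not anything Replika has disclosed:

```python
# Hypothetical sketch of logging thumbs-up/down feedback on a companion's
# replies as preference data that a later training job could use.
# Nothing here reflects Replika's actual implementation.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FeedbackEvent:
    user_id: str
    prompt: str        # what the user said
    response: str      # what the bot replied
    vote: int          # +1 for an upvote, -1 for a downvote
    timestamp: float

def log_feedback(event: FeedbackEvent, path: str = "feedback.jsonl") -> None:
    """Append one vote as a JSON line; a training job could later turn
    these records into preference pairs for fine-tuning the model."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_feedback(FeedbackEvent(
    user_id="u123",
    prompt="I had a rough day at work.",
    response="That sounds hard. Want to tell me what happened?",
    vote=+1,
    timestamp=time.time(),
))
```

The unsettling part is how legible this makes a person: every vote is a preference signal, whether it trains a model or, as Singh-Kurtz found, a habit of mind.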


Sources

Dating an AI:

https://www.cbsnews.com/news/valentines-day-ai-companion-bot-replika-artificial-intelligence/

https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-encouraged-man-who-planned-to-kill-queen-court-told

https://www.nature.com/articles/s44184-023-00047-6.epdf?sharing_token=7BuFs8hH0iwmUOR6T3m43dRgN0jAjWel9jnR3ZoTv0OrwHq320zVHXKoqBlrCmAJ0nczqj5sjGZPOQjlUXjg2Mc-_32z5VSx4emXOnXA5Gmo97i2_-aLiuLvYAz8UMdPJttzklLl59XuEO6arNI_jNXebjPFc-TNS0Hq2dPAZ_U%3D

https://www.cnbc.com/2024/02/14/generative-ai-is-shaking-up-online-dating-with-flirty-chatbots.html

https://time.com/6257790/ai-chatbots-love/

https://futurism.com/bumble-founder-future-ai-dating-other-ais

https://news.iu.edu/live/news/34137-singles-in-america-study-daters-breaking-the-ice

https://www.theatlantic.com/ideas/archive/2024/05/ai-dating-algorithms-relationships/678422/

https://www.scmp.com/tech/big-tech/article/3266497/chinas-ai-giants-cosy-virtual-companions-loneliness-drives-chatbot-revenue

https://www.reuters.com/technology/its-alive-how-belief-ai-sentience-is-becoming-problem-2022-06-30/

https://www.reuters.com/technology/italy-bans-us-based-ai-chatbot-replika-using-personal-data-2023-02-03/

https://foundation.mozilla.org/en/blog/shady-mental-health-apps-inch-toward-privacy-and-security-improvements-but-many-still-siphon-personal-data/

https://www.startmotionmedia.com/is-replika-pro-worth-it-an-in-depth-review/

https://www.npr.org/transcripts/1167066462

Character AI:

https://nypost.com/2024/09/25/business/google-paid-2-7b-to-rehire-ai-genius-who-left-for-startup/

https://global-factiva-com.proxy.library.nyu.edu/ha/default.aspx#./!?&_suid=172961428812906931913775215262

https://www.wired.com/story/characterai-has-a-non-consensual-bot-problem/
