From medical records to taste preferences to unforgettable past experiences, AI is quietly building a digital profile of your personality. But are you really prepared to let AI remember everything you say, forever? Behind AI algorithms lies not only gentleness, but also social death and cruelty.
In April this year, OpenAI released ChatGPT's "memory" feature.
Since then, ChatGPT's memory has been comprehensively upgraded: it is smarter and more natural, and even free users can enjoy it. It can remember what you've said, build a personalized profile of you, and continuously refine the conversation experience. But problems have emerged too:
Are you really prepared to let an AI remember you forever?
This matters: not everyone is ready for a chatbot that never forgets.
ChatGPT's memory feature draws on context from previous conversations to deliver more personalized responses.
For example, reporter Megan Morrone once asked ChatGPT for a vegetarian menu with no lentils. Ever since, the chatbot has remembered that she dislikes lentils.
The initial memory feature was like a personal memo that you had to write into actively.
Now it has become more "understanding": it can automatically record your behaviors and preferences across different conversations.
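OpenAI has not published how memory works under the hood, but the behavior described above can be pictured with a simple pattern: extract short facts from each conversation, store them, and quietly inject them into the context of future chats. Here is a minimal, purely hypothetical sketch in Python (every name is illustrative, not OpenAI's actual code):

```python
# A minimal sketch of how an "automatic memory" layer might work.
# OpenAI's real implementation is not public; this is hypothetical.

from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Holds short facts extracted from past conversations."""
    facts: list[str] = field(default_factory=list)

    def add(self, fact: str) -> None:
        if fact not in self.facts:  # avoid duplicate entries
            self.facts.append(fact)

    def as_system_prompt(self) -> str:
        """Render stored facts as hidden context for the next chat."""
        if not self.facts:
            return ""
        return "Known about this user:\n" + "\n".join(f"- {f}" for f in self.facts)


memory = MemoryStore()

# After the lentil conversation, an extractor might have stored:
memory.add("Is vegetarian")
memory.add("Dislikes lentils")

# Every later conversation silently starts with this context:
print(memory.as_system_prompt())
# Known about this user:
# - Is vegetarian
# - Dislikes lentils
```

Once facts flow into that hidden context automatically, the model "just knows" things about you, which is exactly what makes the feature feel both magical and unnerving.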
Christina Wadsworth Kaplan, OpenAI's personalization lead, told the press that the point of this year's major update is to make memory "more natural and automatic".
She also shared a personal experience:
Once, when she was preparing to travel abroad, ChatGPT proactively added an extra vaccine to its recommendation list based on health records she had uploaded earlier.
A nurse reviewed the list and nodded in approval.
This is what "AI that understands you" really looks like.
A New Kind of AI Social Death, and Digital Trauma
However, reality is not as rosy as OpenAI promises. Behind the "memory" feature lurk real problems.
For instance, it might suddenly remind you: "Didn't you say you don't eat lentils?"
Or casually mention something sad you said months ago.
Sometimes this long AI memory can be spine-chilling.
In February 2024, when OpenAI first announced the feature, it promised that sensitive content such as health information would not be remembered unless explicitly requested.
But do you trust that? To be fair, you can now tell it directly: "Remember this." Or the opposite: "Don't remember this." The AI will follow your instructions.
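Continuing the hypothetical sketch above, "Remember this" and "Don't remember this" amount to user-controlled writes and deletions against the memory store (again, an illustration of the idea, not ChatGPT's actual code):

```python
# Hypothetical sketch: explicit memory commands become direct,
# user-controlled writes and deletions against the store.

memories: set[str] = set()

def remember(fact: str) -> None:
    """User said 'remember this': store the fact verbatim."""
    memories.add(fact)

def forget(fact: str) -> None:
    """User said 'don't remember this': revoke the fact entirely."""
    memories.discard(fact)  # no error if it was never stored

remember("Dislikes lentils")
forget("Dislikes lentils")  # the user always has the last word
assert not memories
```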
Today's ChatGPT "memory" feature automatically records previous chat content to understand user preferences and background.
This personalized system involves not just privacy issues, but also many awkward situations.
Megan Morrone asked ChatGPT to generate an image of herself based on its memory.
The AI portrait included a wedding ring, but she had long since grown disillusioned with marriage. ❤️🩹
Memory isn't better the longer it lasts, especially when it belongs to a machine you can't control.
Persistent memory can also make the chatbot feel "omniscient", reducing users' control over large language models (LLMs).
Developer Simon Willison uploaded a photo of his dog and asked ChatGPT to dress it in a pelican costume, but the generated image also added a "Half Moon Bay" sign.
The AI explained: "Because you mentioned this place before."
He laughed in frustration: "I don't want my love for dressing dogs in weird costumes to interfere with my future serious work prompts!" 🥲
AI has gained permanent memory, yet forgotten that life itself depends on selective forgetting.
You might think it's just a technical bug, but it actually conceals two spine-chilling problems 👇:
(1) Inadvertent Algorithmic Cruelty;
(2) Context Collapse.
Inadvertent Algorithmic Cruelty
The phrase comes from web designer Eric Meyer. At the end of 2014, Facebook's "Year in Review" feature surrounded a photo of his daughter, who had died earlier that year, with cheerful party graphics. If such a thing came from a human hand, it would be wrong; coming from code, it can only be called unfortunate. And these problems are genuinely hard to solve.
This is no simple task: an algorithm has a hard time telling whether a photo racked up likes because it is hilarious, stunning, or heartbreaking.
Essentially, algorithms have no "insight" and are quite "brainless": they run according to preset procedures, and once started, they stop thinking.
Calling a person "brainless" is usually a form of contempt or insult. Yet we humans have let plenty of genuinely brainless algorithmic processes intrude arbitrarily into users' lives, sometimes even turning on the very people they serve.
True intelligence is not just "remembering every sentence you've said", but "understanding what truly breaks your heart".
Context Collapse
The problem Willison encountered is another common phenomenon in algorithmic systems, called "context collapse".
This refers to user data from different domains (work, family, hobbies, etc.) being mixed together, blurring the boundaries between them.
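One conceivable mitigation, offered here purely as an illustration and not as anything the article says ChatGPT provides, is to namespace memories by context, so that facts from one domain never leak into another:

```python
# Hypothetical mitigation for context collapse: scope memories by
# domain so "hobby" facts never leak into "work" prompts.

from collections import defaultdict

scoped: dict[str, set[str]] = defaultdict(set)

def remember(context: str, fact: str) -> None:
    """Store a fact under an explicit context namespace."""
    scoped[context].add(fact)

def recall(context: str) -> set[str]:
    """Only facts stored under the same context are visible."""
    return scoped[context]

remember("hobby", "Likes dressing dogs in weird costumes")
remember("work", "Writes serious engineering prompts")

# A work conversation sees only work facts; Willison's dog
# costumes stay out of his serious prompts.
assert "Likes dressing dogs in weird costumes" not in recall("work")
```

Unscoped memory collapses all of these namespaces into one, which is precisely the failure mode Willison ran into.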
Like many academic concepts, "context collapse" was not the product of a single flash of inspiration; it emerged gradually through ongoing exchange and collision of ideas.
Over the years, many academics have written to danah boyd asking whether she coined the term "context collapse", so she went back through her own records to set the story straight.
danah boyd: Partner Researcher at Microsoft Research and founder and president of the Data & Society Research Institute. Her research keywords include privacy, census, context, algorithms, fairness, and justice.
In 2001, she began her master's studies at MIT.
In 2002, she wrote a master's thesis titled 'Faceted Id/entity', deeply influenced by the ideas of Erving Goffman and Joshua Meyrowitz.
In that thesis, she spent an entire chapter repeatedly discussing "collapsed contexts", although she did not systematically define the term at the time.
The entire thesis was actually exploring how to construct and manage identity in different contexts.
Thesis link: https://www.danah.org/papers/Thesis.FacetedIdentity.pdf
She was particularly fond of Meyrowitz's book 'No Sense of Place', which analyzes how media reshape interpersonal interaction and reveals the dilemmas people face when navigating multiple audiences, for example, the mismatched readings that arise when a single vacation photo is seen by very different people.
The book's Chinese translation is titled 'The Vanished Territory'; it focuses on how new patterns of information flow affect social behavior. Meyrowitz drew on both Goffman's situational analysis and medium theory, proposing an entry point that connects face-to-face interaction research with media research: the structure of the social "situation".
During 2003-2004, she gave several lectures whose slides applied "collapsed contexts" to the Friendster social platform, describing the accidental collisions between different niche cultures.
In some lecture notes, she occasionally simplified "collapsed contexts" to "context collapse", but most of the time, she still used the original term.
From 2005 to 2008, she continued to use "collapsed contexts" in her writing; her doctoral dissertation, for example, made it a core concept.
Thesis link: https://www.danah.org/papers/TakenOutOfContext.pdf
In 2009, she began collaborating with Alice Marwick.
Alice E. Marwick: Associate Professor of Communication at the University of North Carolina at Chapel Hill, and co-founder and principal researcher of the Center for Information, Technology, and Public Life (CITAP). She studies the sociocultural impact of social media technologies; her major academic contributions cover networked media manipulation and misinformation, the micro-celebrity phenomenon, online privacy, and context collapse. Her latest book is 'The Private Is Political: Networked Privacy and Social Media' (Yale University Press, May 2023).
Alice was deeply interested in "collapsed contexts" and "imagined audiences", and she challenged these ideas from the angle of "micro-celebrity".
Alice collected extensive data on how Twitter users manage their audiences.
Using this data, they later published a paper, "I Tweet Honestly, I Tweet Passionately" (2011).
This was the first time "context collapse" was used in a formal publication.
Exactly when "collapsed contexts" turned into "context collapse", danah boyd can no longer recall.
Meanwhile, in 2009, Michael Wesch also published an article with the term "Context Collapse" in its title.
Although both were active in media research circles, they probably did not directly cite each other's work; rather, both grew out of the same theoretical soil.
Now, when discussing "context collapse", danah boyd often mentions Meyrowitz.
Although he did not propose this term, his theory made danah boyd realize the importance of this phenomenon.
Summary
ChatGPT's memory makes AI more like a personalized assistant: it can remember your preferences, experiences, health conditions, and even your sense of humor.
But this also means:
It might bring up past events you'd rather not be reminded of;
It might misinterpret a momentary mood as a permanent preference;
It might even make people feel: "It knows too much."
So the real challenge is not getting AI to remember you; it is giving you the right to decide what it remembers, how it remembers, and for how long.
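"For how long" could be made concrete by attaching an expiry to every memory, so a passing remark ages out instead of following you forever. One last hypothetical sketch, in the same illustrative spirit as the ones above:

```python
# Hypothetical sketch of "for how long": every memory carries an
# expiry, so a passing remark ages out instead of lasting forever.

import time
from typing import Optional

# fact -> absolute expiry timestamp (None = keep until revoked)
memories: dict[str, Optional[float]] = {}

def remember(fact: str, ttl_seconds: Optional[float] = None) -> None:
    """Store a fact, optionally with a time-to-live."""
    memories[fact] = time.time() + ttl_seconds if ttl_seconds else None

def recall() -> list[str]:
    """Return live facts, silently dropping anything expired."""
    now = time.time()
    return [f for f, exp in memories.items() if exp is None or exp > now]

remember("Planning a trip abroad", ttl_seconds=30 * 24 * 3600)  # fades in a month
remember("Is vegetarian")                                       # kept until revoked
print(recall())
```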
Welcome to the era of AI's "permanent memory".
But don't forget, you are the true master of memory.
Remember, you can always tell AI "don't remember".
References:
https://www.axios.com/newsletters/axios-ai-plus-cc128fe8-9e1b-42ca-8c75-b681425dca55.html
https://meyerweb.com/eric/thoughts/2014/12/24/inadvertent-algorithmic-cruelty/
https://www.zephoria.org/thoughts/archives/2013/12/08/coining-context-collapse.html
This article is from the WeChat public account "New Intelligence", author: New Intelligence, published by 36kr with authorization.