This is a very difficult topic, but it belongs in the “New Technologies” section, because for the first time technology has moved beyond smart homes and finance into human grief itself.
The story from China sounds like science fiction, but it is already happening: a family managed to create a full AI replica of their deceased son, the South China Morning Post reports. The son died in a traffic accident. He was an only child, and his 80-year-old mother, who suffers from heart problems, has not been told the truth. To avoid an immediate tragedy, the relatives chose a highly controversial path: a digital simulation of his life.
To build the “avatar”, the family used a standard set of data: photos, videos, voice recordings, speech patterns, and behavioral habits. The result is not just a chatbot but a visually and behaviorally convincing copy of a person. During video calls, the AI even reproduces small gestures, such as a characteristic forward lean while speaking. To the elderly woman, it looks like the normal remote life of a son who “works in another city”.
The family built an entire supporting reality: the son calls every day, asks about his mother’s health, reminds her to eat properly and dress warmly, and promises to return when he “finishes his work and earns enough money”. From the mother’s perspective, life continues in its usual logic, without any sudden rupture.
And this is where the most complex part of the story begins — not technological, but ethical.
On one hand, the technology solves a concrete problem: it reduces the risk of immediate psychological shock, which at such an age can be life-threatening. In medicine, this is known as “protective information filtering”. People are sometimes not told severe diagnoses immediately if it could worsen their condition dramatically. In this sense, AI becomes a tool for delaying trauma.
On the other hand, this is no longer just “softening the truth”, but creating a prolonged illusion of life. And the longer it continues, the harder the eventual confrontation with reality becomes. This creates a classic conflict: compassion in the present versus potential trauma in the future.
There is also a deeper layer: what does it actually mean for a person to “be present”? If AI reproduces speech, behavior, reactions, and even emotional patterns, then to an observer it becomes almost indistinguishable from the original. But this is not a continuation of the personality; it is a simulation. And here a boundary appears that society has not yet learned to define, legally or morally.
Interestingly, public reaction to such stories is always divided into two camps. Some see it as a technological “miracle of comfort” — a way to maintain connection. Others see it as a prolonged lie that will eventually cause even greater pain, because reality always catches up.
There is also a practical question often overlooked: who controls such a “digital person”? Today it is the family, tomorrow it could be a commercial platform, and the day after a service operating under its own logic of monetized memory. Then the discussion shifts from comfort to an entire industry of digital replicas of the deceased.
We are effectively at a point where technology has caught up with human emotions but not with ethics. We can simulate presence, but we still do not understand how long such simulation is acceptable.
And perhaps the key question is not “can we do this”, but “what happens to society if this becomes normal”.