AI and Simulation as Our Ideal
We will have simulated personalities anyway, so why not use them to improve ourselves?
When AI mimics emotions, it threatens to reduce our rich tapestry of interpersonal interactions to mere transactions. If we cannot discern whether the compassion or empathy shown by another entity is genuine or a programmed response, it undermines the trust and authenticity on which human relationships are built.
This summarizes the common fear of generative AI, prompted by the success of Large Language Models trained on bulk, unfiltered data repositories.
Yes, but. The superficiality of our own pseudo-emotions is itself an uncanny valley. Are we merging with generative AI?
Is the fear already too late? The superficiality of generated content and imitated emotion is the human reality our algorithms present back to us. Its perceived lack of validity would read as satire had a human creator constructed it that way. That, of course, can be imitated too.
What if we consider this a route to improve human communication and restore trust in what we say? It is clear that we have lost our way and our ability to interact in any way that is not transactional. It is not generative content that reduces us to transactional communication; it is we ourselves, following our cultural training.
The problem is that our transactional vocabulary is so limited. All of our conversations are about laying the groundwork to gain value, and we know this. That destroys any foundation on which to build trust with anyone outside of our families, and rarely even then.
This is the intersection that links generative AI to personality simulation:
As AI advances, its ability to model personalities grows ever more accurate. It can mimic the different facets of my personality — my agreeableness, neuroticism, openness to experience, and more — to such a degree that it can predict what I might say in response to my…
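To make the personality-simulation idea concrete, here is a minimal sketch of how such a model might be conditioned: a Big Five trait profile turned into a system prompt for a language model. All names here (`BigFiveProfile`, `persona_prompt`, the thresholds) are hypothetical illustrations, not anything described in this essay.

```python
from dataclasses import dataclass

# Hypothetical sketch — an assumed representation, not the author's method.
@dataclass
class BigFiveProfile:
    """Scores in [0, 1] for the five-factor (Big Five) personality model."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

    def describe(self, trait: str) -> str:
        # Bucket a numeric score into a coarse verbal level.
        score = getattr(self, trait)
        level = "high" if score >= 0.66 else "moderate" if score >= 0.33 else "low"
        return f"{level} {trait}"

def persona_prompt(profile: BigFiveProfile) -> str:
    """Build a system prompt that conditions a model to imitate a person."""
    traits = ", ".join(
        profile.describe(t)
        for t in ("openness", "conscientiousness", "extraversion",
                  "agreeableness", "neuroticism")
    )
    return ("You are simulating a person with the following personality: "
            f"{traits}. Answer as that person would.")

# Example profile with made-up scores.
me = BigFiveProfile(openness=0.8, conscientiousness=0.5,
                    extraversion=0.3, agreeableness=0.7, neuroticism=0.4)
print(persona_prompt(me))
```

The point of the sketch is how little it takes: a handful of coarse trait levels, fed as context, is already enough to steer a model's tone toward a recognizable imitation of a person.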