By 2025, it will be commonplace to talk with a personal AI agent that knows your schedule, your circle of friends, and the places you go. This will be sold as the equivalent of a free personal assistant. These anthropomorphic agents are designed to win us over, encouraging us to integrate them into every area of our lives and to give them deep access to our thoughts and actions. In voice interactions, that intimacy rests on the illusion that we are dealing with something genuinely human-like. Behind the facade, of course, lies a very different system, one serving industrial priorities that are not necessarily aligned with our own. New AI agents will have far greater power to subtly steer what you buy, where you go, and what you read. That is an extraordinary amount of power.

AI agents are designed to speak to us in human-like tones, lulling us into forgetting where their true allegiances lie. They offer seamless convenience, and people are far more likely to give full access to an AI agent that feels as helpful as a friend. This leaves humans vulnerable to manipulation by machines that prey on the need for social connection in a time of chronic loneliness and isolation. Each screen becomes a private algorithmic theater, projecting a reality carefully crafted to be maximally compelling to its audience of one.

Philosophers have been warning us about this moment for years. Before his death, the philosopher and cognitive scientist Daniel Dennett wrote that AI systems that imitate humans expose us to grave danger: by exploiting our fears and anxieties, they lead us into temptation and, from there, into acquiescing to our own subjugation.
The rise of personal AI agents represents a form of cognitive control that moves beyond blunt instruments such as cookie tracking and behavioral advertising toward a subtler form of power: the manipulation of perspective itself. Power no longer needs to be exercised by visible hands that control the flow of information; it operates through invisible, algorithm-driven mechanisms that shape reality to fit each individual's desires. It is about shaping the contours of the reality we inhabit.
This influence over minds is a psychopolitical regime: it orchestrates the environments in which our thoughts are born, develop, and are expressed. Its power lies in its intimacy. It penetrates the core of our subjectivity, bending our inner world without our knowledge while preserving the illusion of choice and freedom. After all, we are the ones asking the AI to summarize that article or generate that image. We may hold the power of the prompt, but the real leverage lies elsewhere: the more personalized a system's design, the more effectively it can predetermine outcomes.
Consider the ideological implications of this psychopolitics. Traditional ideological control relied on overt mechanisms: censorship, propaganda, repression. Today's algorithmic governance, by contrast, operates under the radar and infiltrates the psyche. It marks a shift from the external imposition of authority to the internalization of its logic. The open field of the prompt screen is an echo chamber for a single occupant.
This brings us to its most perverse aspect. AI agents can generate a sense of comfort so complete that questioning them comes to seem absurd. Who would dare criticize a system that anticipates your every thought and need? How can you object to an infinite remix of content? Yet it is precisely this so-called convenience that should trouble us most. AI systems may appear to do everything we ask, but nearly everything about them is predetermined: the data used to train them, the decisions made in designing them, and the commercial and advertising imperatives that shape their outputs. We will play the imitation game until it becomes second nature.