Empathy Is a Feature, Not a Feeling: Not Human, But Human-Centered

When I was little, I used to name everything. My stuffed animals, my bike, even the toaster. I gave them personalities, voices, feelings. My mom would chuckle and say, “Estás haciendo novelas con los objetos, mija.” (You’re making soap operas with objects, sweetie.)
Turns out, I wasn’t alone. Anthropomorphism, or giving human traits to non-human things, is something we all do. It helps us feel connected, safe, and in control. It’s part imagination, part survival. And honestly? Sometimes it’s kind of comforting.
But today, that instinct is being manipulated. Our tools talk back now. They sound supportive, wise, even empathetic. When your AI assistant says, “I understand,” or your chatbot responds with concern, it’s easy to forget there’s no “someone” on the other side. Just code, predictions, and probabilities.
The problem isn’t that AI sounds human. It’s that it feels human. And when we start reacting to that feeling without pausing to question what’s real, we risk losing the very thing that makes our leadership matter: discernment.
So here’s the question leaders must ask:
How do we keep our approach human-centered without mistaking AI for human, or being misled into believing it is?
Thought of the week: “Aunque la mona se vista de seda, mona se queda.”
(Even if a monkey dresses in silk, it’s still a monkey.)
Tech might look and sound human, but that doesn’t make it so.

THIS WEEK'S INSIGHTS:
The illusion of AI as “conscious” is growing. But wise leaders must anchor their teams in clarity. AI is powerful, but it is still a tool, not a teammate.

TRENDS
AI is becoming more humanlike in ways that blur boundaries.
We now have voice assistants that sound emotionally attuned. Chatbots that respond with empathy. Startups promising “AI companions” that can support your mental health, coach you through decisions, or simulate friendships. The emotional realism is evolving quickly, and while the design is intentional, the perception it creates can be misleading.
Mustafa Suleyman, CEO of Microsoft AI, recently issued a warning about this growing trend. In an interview with PC Gamer, he called the push for “AI rights, model welfare, or even AI citizenship” a dangerous turn for the industry. He emphasized that AI is not sentient, not conscious, and not deserving of moral standing. It is a tool, not a being.
"There is no self. There is no will. There is no desire. These systems are just a reflection of the data and the instructions we give them," Suleyman said.
→ Read the full article here
But here’s the real leadership challenge.
It is not about what AI is. It is about what people believe it is. When systems sound wise, empathetic, or self-aware, people are more likely to trust them, follow their guidance, or make decisions without vetting. That is where harm happens. Not because the AI crossed a line, but because we stopped seeing the line clearly.
The line between tool and teammate is not drawn by the machine. It is drawn by us.

TIPS:
Limiting Beliefs → Empowered Shifts
Limiting Belief: If AI can make mistakes or be misleading, it's safer to avoid using it at all.
Empowered Shift: You don’t need to fear AI. You need to learn how to guide it.
How to Mitigate: Build internal pause-and-check habits. Use AI to draft, explore, or summarize, but always apply human oversight before making decisions.
Limiting Belief: AI might replace human connection or care, so it doesn’t belong in people-first work.
Empowered Shift: AI is a supplement, not a substitute. It can free you up to spend more time in human connection.
How to Mitigate: Use AI to reduce administrative burden, not emotional labor. Let humans lead the relationships, and AI support the systems behind them.
Limiting Belief: We should avoid AI tools that feel too humanlike because they might blur the line.
Empowered Shift: The key isn’t avoiding humanlike design. It’s making sure everyone understands what’s real and what’s not.
How to Mitigate: Add disclaimers or context when deploying AI in employee or customer-facing spaces. Train teams to ask, “Who is speaking? A person or a pattern?”
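If your organization builds or configures its own AI touchpoints, that disclosure habit can even live in the system itself, not just in training. Here is a minimal sketch in Python of what “always disclose the pattern” might look like; the function get_model_reply and the disclosure wording are hypothetical placeholders for whatever approved AI platform and language your organization actually uses:

```python
# A minimal sketch (not production code) of the "disclose the pattern" tip:
# a thin wrapper that prepends an AI-disclosure notice to every reply.
# `get_model_reply` is a hypothetical stand-in for a call to your
# organization's approved AI platform.

AI_DISCLOSURE = (
    "[You are chatting with an AI assistant, not a person. "
    "A human teammate reviews escalated requests.]"
)

def get_model_reply(user_message: str) -> str:
    # Placeholder for the actual call to your enterprise AI endpoint.
    return f"Here is a drafted answer to: {user_message!r}"

def reply_with_disclosure(user_message: str) -> str:
    """Answer the user, but never let the AI 'pass' as a person."""
    return f"{AI_DISCLOSURE}\n\n{get_model_reply(user_message)}"

if __name__ == "__main__":
    print(reply_with_disclosure("Can I carry over unused vacation days?"))
```

The design choice is the point: disclosure is guaranteed by the system around the model, not left to the model’s tone.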
TOOLS
Here are some tools worth exploring that go beyond the typical corporate stack. And if your organization does not allow access to third-party platforms, you can still leverage approved enterprise AI tools like ChatGPT, Microsoft Copilot, Google Gemini, Claude, or Perplexity to get similar results. Just copy, paste, and adapt these prompts.
Tool: Personal AI
An AI-powered memory assistant designed to grow with you. Personal AI learns from your conversations, documents, and work habits, and stores that context so you can ask it relevant, personalized questions later.
Pro Tip: Ideal for executives and knowledge leaders. Use it to augment your institutional memory, whether tracking project nuances, recalling cultural values, or surfacing recurring team concerns.
Alternate Prompt for Company Tools:
“Using my recent project notes or team messages, create a summary of recurring themes or concerns, with suggestions for follow-up conversation areas.”
Tool: Elicit
An AI research assistant designed to show how it reaches its answers, helping you ask better questions and evaluate sources.
Pro Tip: Use it for brainstorming, literature reviews, or framing decisions, especially when evidence and reasoning matter.
Alternate Prompt for Company Tools:
“Create a decision matrix that compares [Topic A] and [Topic B], including strengths, risks, and assumptions behind each option.”
Technique: Create an AI Clarity Code
Design a short internal guide that defines how your organization will use AI. This should include what tone AI should take, what it should never pretend to do, and how transparency will be maintained.
Pro Tip: This creates alignment across teams and builds trust from day one.
Prompt to Jumpstart in ChatGPT or Copilot:
“Draft a first version of an 'AI Clarity Code' for our organization. Include principles on tone, transparency, human oversight, and appropriate use.”

👉🏽 Where do you need to draw the line so AI remains a tool that empowers you, not a voice that misleads you?

CLOSING THOUGHT
It’s tempting to confuse fluency with intelligence, especially when AI sounds helpful, supportive, or even wise. But fluency is not a signal of truth. It is a reflection of training data and probability, not insight or understanding.
As AI becomes more embedded in the way we lead, learn, and work, our responsibility is not to fear it or idolize it. Our responsibility is to see it clearly, use it wisely, and never forget what makes our leadership truly matter.
In this new era, discernment is not just a skill.
It is a form of protection.
And clarity is not just a leadership trait. It is a cultural advantage.
The moment we stop questioning what feels real is the moment we begin to lose what actually is.
Don’t just adopt AI. Clarify and verify.
Hasta la próxima, Abrazos!
(Until next week, hugs!)

When You're Ready...
Here’s how I can help you and your organization take your leadership and professional growth to the next level:
Speaking: I deliver engaging, high-impact keynotes and workshops on AI-driven leadership, personal branding, career advancement, and transformation. Whether it's a corporate event, leadership summit, or industry conference, I bring practical insights and strategies that empower professionals to thrive in today's rapidly evolving world.
💃🏻 💃🏻 HISPANIC HERITAGE MONTH IS AROUND THE CORNER! REACH OUT SOON, MY CALENDAR IS FILLING UP FAST!💃🏻 💃🏻
Learning & Development Consulting: I partner with organizations to design and implement learning experiences that drive real impact. From AI-powered career development programs to leadership training tailored to their talent, I help companies advance their workforce and create growth opportunities for high-potential professionals.

Did a brilliant friend forward this your way? Subscribe here to get ¡AY AY AY, AI! delivered fresh every week... straight to your inbox, con todo y sazón! (with all the flavor!) Don’t get left behind.
How did you like today's newsletter?
🔥 Loved it! It was insightful.
🤔 Decent read, but could be better.
😐 Meh, it didn't resonate with me this time.
