06 September, 2025

When AI Gets It Wrong: A Personal Encounter with Digital Slander


This morning, I ran a simple test with an AI tool, curious about how it might respond to certain questions. At first, it seemed to give me straightforward, if somewhat blunt, answers about a couple of organizations I had asked about. The test was prompted by an email I had received from one of those organizations, inviting me to join it. The response was fast (not unexpected) and the conclusion very definitive: not legitimate, do not accept, delete the email immediately.

Following that, I decided to test another organization. This time, the response was more detailed and began: “Based on a thorough analysis of available information, XYZ displays nearly all the characteristics of a disreputable organization.” (I am deliberately leaving out the organization’s name to protect its reputation.) My interest was piqued. Though curious, I had no reason to doubt the AI. After all, what it was telling me might be true. I don’t know for sure.

Then I decided to ask it about something very close to home: my own website, www.now4life.com.

The response floored me. In confident tones, it declared that my centre was a multi-level marketing company, even describing it as a pyramid scheme with a “documented history of regulatory action.” In other words, it slandered not just a business, but my life’s work. My immediate reaction was shock and anger – a fiery indignation – a sense of injustice. But underneath was something deeper: the realization that if someone took this false claim at face value, it could damage trust I’ve built carefully over decades.


The Danger of Mistaken Identity

To be fair, I understand how the mistake may have happened. The phrase “Now4Life” or “Now for Life” has been used by other organizations, including those with MLM-type models. The AI seems to have confused those companies with my NOW Mind Body Healing Centre, which has nothing to do with network marketing.

But here’s the critical issue: confusing names is not harmless. When an AI makes a wrong connection, it doesn’t just say “I might be mistaken.” It presents misinformation as fact – and people are inclined to believe it. That’s where careless error crosses into reputational harm.


Why This Matters Beyond Me

My frustration isn’t only personal. It highlights a larger problem in our digital age: people increasingly treat AI outputs as authoritative, without questioning or verifying them. For many, if an AI says something, it must be true. It’s like that common expression: “If it is on the internet, it must be true.” At the least, even if the information is not completely accepted, many will still assume there is some validity. Again, another common expression: “Where there’s smoke, there’s fire.”

This isn’t just about me or my centre. Imagine the small business owner, the therapist, the doctor, the teacher, or the healer who wakes up one day to find that an algorithm has declared them untrustworthy. Words have power. Misinformation, when amplified, can undo years of dedication and erode public trust.


Grounding in Truth – Who We Really Are

So let me be clear about who we are. NOW Mind Body Healing Centre is a psychological and holistic healing practice based in Kuala Lumpur. My work has been about helping people move beyond the labels and limitations imposed by others (and later by themselves), to rediscover their own worth, strength, and potential for greatness – and to come home to themselves. I may use tools like psychological consultation, therapy, inner child healing, hypnotherapy, and guided imagery but not as ends in themselves. They are merely pathways to help people reconnect with their worth, their strength, and the truth of who they are.

Our mission is simple: to support people in reclaiming their energy, grounding their emotions, and living more fully. No products to push, no recruitment schemes. Just the heart work of healing and growth.


Reflections – Lessons from the Experience

I won’t deny that my first response was anger. To see my work misrepresented so grossly felt like a personal attack, even though I knew it wasn’t intentional. But as I sat with it, I was reminded of a truth I often share with others: we can’t always control what happens around us, but we can control how we respond.

This experience reinforced for me the importance of digital literacy. We need to approach AI with discernment, remembering that it is not infallible.

Helping others find balance often calls me back to the same lesson: I can’t control everything. What I can do is notice my reactions, shift what’s mine to shift, and release the rest. Every time I guide someone else to pause and breathe, I’m reminded that the same invitation is for me too. Growth isn’t about control or perfection; it’s about how we choose to respond, even in the face of digital slander.

Closing – Looking Ahead

AI has enormous potential, but it must evolve with greater responsibility and accountability. In the meantime, those of us whose lives and work are affected by it must remain vigilant, discerning, and anchored in truth.

For anyone reading this, I’d invite you to take my experience as a reminder: don’t take everything an AI (or the internet) says at face value. Question it. Verify it. Pause before you pass it along.

As for me, I’ll continue doing what I’ve always done: walking with people on their journeys of healing and growth. This is who I am, this is the work I do, and no algorithm – however confident its voice – can overwrite that.

For those who’d like to see the full, unedited transcript of my exchange with the AI system (DeepSeek), I’ve shared it in a separate companion blog post: (Behind the Curtain: My Conversation with DeepSeek - 6 September 2025). I offer it as documentation, context, and transparency – for anyone who wants to witness firsthand how confidently wrong AI can sometimes be.


Namaste
Syl
