AI Just Snatched Your Voice, Face, and Cat Pics and Might Be Using Them Better Than You
Written by Maranda Sloan, Advocate for Holistic Health
Through personal experience, customer feedback, and extensive study, Maranda has crafted innovative health solutions. As the creator of Haloblujuices, she empowers others to transform their wellness with sustainable, life-changing strategies, helping people breathe new life into their health journeys.

AI isn't just a nosy roommate anymore; it's more like a con artist wearing your hoodie, your face, and maybe even your LinkedIn profile. From apps quietly stockpiling your selfies to bots absorbing every rant you've ever posted at 2 a.m., your digital DNA is being cloned without so much as a "thanks."

The fallout? Deepfakes that can nuke reputations, stolen data feeding shady algorithms, and a professional identity crisis where your AI twin outperforms you in job interviews. Health-wise, constant digital exposure and the anxiety of not knowing where your data goes can chip away at mental well-being.
The fix: audit your apps like you're catching a bad ex in a lie, call out sketchy terms of service, watermark your work, and maybe stop yelling into the internet (just kidding, kind of). Protect your digital self before AI makes a deepfake version that sings karaoke.
The free-for-all of your digital DNA
Remember when you posted that one picture of your cat wearing sunglasses? Cute. Except in 2024, The Guardian reported that photos of actual Australian children were swept into massive AI training datasets without their parents' knowledge or permission.[1] If kids' faces aren't safe, what makes you think your tabby's Instagram career isn't being hijacked for machine learning glory?
Then there's Reddit! You thought your 2 a.m. existential rant about cereal mascots went unnoticed? Nope. According to AP News, Reddit is suing AI company Anthropic for allegedly scraping user comments to train its chatbot Claude.[2] Which means, congratulations, your most unhinged post might already be living rent-free inside a chatbot's brain.
As one Guardian report warns: "No one knows how AI is going to evolve tomorrow. Personal data are not legally protected, and therefore not protected from misuse by any actor or any type of technology."[1]
The serious fallout (Yes, even beyond cat pics)
Jokes aside, the consequences aren't just embarrassing. We're talking about:
Professional harm: Imagine a deepfake version of you nailing a job interview better than the real you.
Identity theft: Your voice and face could be cloned into scams or fake endorsements you'd never sign off on.
Mental health stress: Constant uncertainty about where your data ends up can leave you spiraling harder than a YouTube rabbit hole at 3 a.m.
Top 10 AI and internet self-defense moves (2025 edition)
According to the Stanford Institute for Human-Centered Artificial Intelligence (HAI), protecting your data in an AI-driven world demands active defense.[3] Think of it as digital karate, except your opponent is invisible, tireless, and really into scraping memes.
Here are ten steps to fight back:
1. Audit app permissions like you're catching a bad ex in a lie. Revoke unnecessary access to your camera, mic, or contacts.
2. Scrutinize Terms of Service. If it's longer than "War and Peace" and full of vague "may use your data" clauses, beware.
3. Watermark or cloak your images. Subtle tech like Fawkes can scramble facial recognition systems without visibly altering the photo (Shan et al., 2020).
4. Use privacy tools. VPNs, tracker-blocking browser extensions, and encrypted messaging apps are your new digital sunscreen.
5. Set up Google Alerts for your name, so you'll know if your "digital twin" shows up somewhere it shouldn't.
6. Use strong authentication. Two-factor, passkeys, and unique passwords for every account. Think of it as adding multiple locks to your digital front door.
7. Keep everything updated. Outdated apps and systems are hacker candy. Enable automatic updates.
8. Be mindful about oversharing. Sure, post your latte art, but maybe skip the geotagged vacation pics until you're back home (see the sketch after this list for one way to strip location data before you post).
9. Support stronger AI regulation. As Stanford HAI stresses, systemic change matters. Laws should make your data yours, not free fuel for algorithms.[3]
10. Practice digital minimalism. Less data shared = less data to exploit. Ask yourself before posting: "Does future-me want this living online forever?"
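If you want something concrete for step 8, here's a minimal sketch of one way to scrub a photo's hidden metadata, the EXIF block that can include GPS coordinates, before you post it. It assumes Python with the Pillow imaging library installed and uses a hypothetical file named vacation.jpg; treat it as an illustration, not the only tool for the job.

```python
# Minimal sketch: strip hidden metadata (including GPS geotags) from a photo
# before sharing it. Assumes the Pillow library (pip install Pillow) and a
# hypothetical file named "vacation.jpg".
from PIL import Image

# Open the original and normalize to RGB pixels.
original = Image.open("vacation.jpg").convert("RGB")

# Copy only the pixel data into a brand-new image object; the EXIF metadata
# (camera model, timestamps, GPS coordinates) stays behind with the original.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))

# Save the scrubbed copy and share this file instead of the original.
clean.save("vacation_clean.jpg")
```

No code handy? Most phones and photo apps also offer a "remove location" toggle when you share a picture, which accomplishes the same thing.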
The bottom line
Protecting your digital self doesn't mean going off the grid and raising goats in the mountains (though tempting). It means fighting smarter: locking down permissions, watermarking your creations, and supporting policies that force companies to treat your data as your property.
Because here's the deal: if you don't protect your digital self, AI will, and unlike your roommate, it won't even leave a passive-aggressive Post-it on the fridge.
Follow me on Instagram for more info!
Read more from Maranda Sloan
Maranda Sloan, Advocate for Holistic Health
Maranda is a passionate advocate for holistic health, with a personal journey that led to the creation of Haloblujuices, where life breeds life. Through research and hands-on experience, she’s uncovered innovative, sustainable solutions that go beyond conventional wellness. Maranda empowers others to make meaningful improvements to their health, offering a refreshing approach to living well.
References:
[1] The Guardian. (2024, July 3). Photos of Australian children used in dataset to train AI, human rights group says.
[2] AP News. (2025, September 18). Reddit sues AI company Anthropic for allegedly ‘scraping’ user comments to train chatbot Claude.
[3] Stanford HAI. (2024, January 16). Privacy in an AI era: How do we protect our personal information?
[4] Georgiades, E., Birt, J. R., & Pedram, M. (2025). Privacy concerns about third-party images hosted by deepfake technologies. Abstract from Artificial Intelligence, Law and Society conference, Sydney, Australia.









