
How to Digitally Reset and Embrace a Healthy, Screen-Free Future?


Tricia Brouk helps high-performing professionals transform into industry thought leaders through the power of authentic storytelling. With her experience as an award-winning director, producer, sought-after speaker, and mentor to countless thought leaders, Tricia has put thousands of speakers onto big stages globally.

Tricia Brouk, Brainz Magazine

Being able to support speakers in using their voices for impact is a privilege, and I had the pleasure of sitting down with Jane Newman to talk about artificial intelligence and its catastrophic effects on humanity.



Jane Newman is an internationally certified speaker and mindset coach, and the founder of Re-Humanising, a movement rejecting the A.I.-driven future and our unhealthy addiction to smartphones and always-on devices. A former National Manager of an Australian Information and Analytics consulting practice with more than 30 years in the I.T. industry, she provides speaking, coaching, and consulting services. She works with global thought leaders, parents, and change makers ready to digitally reset and replace screen-based living and excessive use of technology with a healthy human future, one based on compassionate human values, real community, and care for the planet we all live on.


Do you think we are in a crisis of consciousness around A.I.?


The conscious evaluation of A.I. as a tool, and of how and why we should use it, has been largely bypassed. We don’t ask questions about the real costs because it’s a “free” technology that appears to make our lives easier. People think, “What’s so harmful about having a machine that talks to us and answers every little question that pops up?”


The problem is that we’re letting this technology overtake our lives without considering what’s really going on. It’s become habitual to use A.I. throughout the day, at home and at work, and to keep putting our children in front of it every time they use the internet.


When we are confronted by something shocking, like the recent report from the Internet Watch Foundation of a 26,362% increase in A.I.-generated videos of child sexual abuse (65% of which were classified as ‘extreme’), it doesn’t stop us from continuing to call A.I. a “cool tool”. It doesn’t stop us from supporting its continued development, while looking for somebody else to be responsible for making it safe so we can keep playing with it.


We demand regulation as a means of policing technologies that aren’t built to be controlled. ChatGPT’s underlying data was scraped from the dark web as part of its training, and as more of humanity’s undesirable behaviours are added to the internet, that becomes part of the next iteration.


We don’t associate these A.I. problems with the actions that each of us is taking, every day, and the way we’re exposing our children to the effects of it. Every time we use A.I., we are approving and promoting it. We’re saying it’s ok, even when there’s so much about it that is not.


Children shouldn’t be put in front of it at all.


This is intense and heavy work, Jane. How did you come to this thought-leadership?


I burnt out two years ago from an I.T. career spanning more than three decades. I became increasingly agitated by the technology that was being released and marketed to the global population. It wasn’t the kind of rigorously tested and purposeful software I’d promoted throughout my career. It wasn’t helping people. It was doing the opposite.


The recent changes that have come about with the release of Large Language Model (LLM) technology under the banner of “Artificial Intelligence” are overriding human intelligence and capability. As an information specialist, I find this overall decline in knowledge alarming.


PC Magazine reported that more than half of the articles on the internet are now A.I.-generated, outnumbering human-written work. A.I. information has replaced the depth and extent of thousands of years of worldwide lived experience, expressive human language, and unlimited creativity.


Prolonged use of A.I. is proven to lead to loss of cognitive capabilities, rendering people unable or unwilling to answer a question without consulting their chatbot. People tell me that they’ve noticed they can’t concentrate long enough to read a book. Children can’t handwrite anything, and new graduates can’t produce a report using their own brainpower.


People question their own minds and think themselves inferior to machines that are using data from Reddit, YouTube, Quora, and pornographic websites. Wikipedia has become the highest source of verified knowledge for products like ChatGPT, and yet it wasn’t long ago that the use of Wikipedia references in a report was considered unacceptable.


That’s why last year I had to speak out publicly in New York City and Oxford, England, about the issues I was seeing with technology. I studied the Ethics of A.I. to confirm what I was observing and decided to help address the global problems I was speaking about at an individual level.


Not many people know this, but in addition to working with young adults as a lecturer at RMIT University in Melbourne, Australia, I also qualified as a British Horse Riding Instructor. I was Chief Instructor of Queensland’s largest Pony Club while my kids were growing up, and it’s given me an understanding of working with children and their parents in very competitive environments.


So now I use my coaching experience and my international certification in mindset coaching to help parents concerned about their children’s use of phones and addiction to online activities. By reviewing their family’s use of technology and replacing screen-based living with real-world experiences, parents can positively influence healthier alternatives.


I’d like to see childhood returned to one of sensory play and innocent fun, with teens anxious about first dates rather than dealing with online bullying and daily exposure to explicit, violent imagery that they normalise as adult behaviour.


Do you think this addiction to our devices is also affecting the climate?


The problem of climate change has fallen to the bottom of the heap of issues that are piling up in our everyday lives. There are so many critical areas where we’re all feeling challenged: our jobs, families, the economy, world political stability, mental health, and so much more. I think it would be hard to find anyone today who isn’t feeling that there’s something fundamentally wrong with the way we’re living.


Since the COVID pandemic, we’ve really just been focused on our most basic essentials, our health and safety, right down at the bottom of Maslow’s Hierarchy of Needs. Climate change has become something for us to think about later, if at all. It’s easy to “forget” about it, or worse, to believe the spin from the big tech companies about how A.I. will come up with solutions. This is simply outlandish. The scale of damage from expanding data centres is real, and betting on future research to negate the impacts is no way to mitigate the costs to the climate that we are each contributing to every single time we choose to use A.I. tools.


Expanding numbers of data centres are required to store the data that’s being collected about every single person’s online activities, and the massive computation behind every single A.I. query requires colossal amounts of power. That means a never-ending construction line of new power plants to support more and more data centres: more dangerous nuclear power stations, more coal-fired electricity, more gas projects releasing huge volumes of methane (eighty-six times more potent than carbon dioxide as a greenhouse gas), more water diverted for cooling instead of drinking, and continued investment in fossil fuels.


The tech companies claim they’re using renewables, but the power demands far exceed what renewable energy sources can supply. That’s why the metrics about Big Tech data centres are not made available and have to be calculated independently. Why can’t regulators focus on that?


People seem to be generally aware of some of these problems, and yet that awareness isn’t triggering the immediate and urgent action to tackle climate change that we saw only a few years ago. We’re so used to a distracted way of living, thanks to our screen-based feeds, that whatever information does surface about a climate change issue is immediately buried under the next item competing for our attention.


It’s the nature of this addiction to screen-based living that is keeping people from taking sustained action on anything of significance. It’s why climate change, like other global issues, is not being discussed in an organised way by worldwide populations to create the sustained changes that are needed, even as the expanding number of data centres around the planet increases, rather than negates, emissions.


Why do you think everyone has drunk the A.I. Kool-Aid?


The real question is what’s been added to the Kool-Aid that gets people hooked to the extent that they don’t question it. What are the true intentions of billionaires pushing it out for “free”, and what are the real motivations, costs, and climate impacts that they refuse to disclose?


The trillion-dollar companies that market these technology products use proven psychological levers to control and target people at an emotional level. Combined with human biological factors like the way our brains operate and the hormonal chemicals that influence our behaviours, this technology is simply irresistible, appealing to children and adults at subconscious levels. We become addicted to it without recognising that it’s happening.


These tech leaders have amassed large amounts of money, obscene amounts of money, that have given them the power to do whatever they want. Literally. So they’ve stolen huge amounts of content from whatever, wherever and whoever they could find online and processed it with these tools that are built to sound like people when they respond to a question. And then they’ve pushed it onto us with the aim of creating even more content, to create even more data centres, to build even bigger A.I. systems.


Why has it been so easy for us to lose our humanity and find ourselves in need of, in your words, rehumanising?


People have relinquished their privacy and their democratic freedoms without a fight. The rapid conversion of people from living full and sensory lives to living in self-imposed solitary confinement is astonishing.


During the COVID pandemic, people had to adapt to communicating online instead of face-to-face. Information came from online queries instead of from people. People were shocked by a global pandemic, something outside of our control that triggered a fear of being in close proximity to others. Classroom teaching was replaced by video recordings. Friendships were replaced by online communities.


Following the COVID pandemic, life didn’t go back to “normal”. There was no post-COVID period of recovery from the shock. Just a pervasive sense that danger was around the corner, and that we needed help and protection to avert such global catastrophes in the future.


That’s when ChatGPT appeared, looking like an intelligent online authority.


The message that A.I. is here to help make humanity better originated from Sam Altman as his justification for launching OpenAI’s GPT products onto the world, whether we asked for it or not. It’s a message that’s become embedded in the way we talk about A.I. Even when highly qualified people discuss the statistical evidence about A.I.’s detrimental effects, there’s always a softener, a follow-up line that says “but we know A.I. is necessary to make everything better”.


That simply isn’t true. The ambition of a small group of men with mind-boggling amounts of money at their disposal has let loose this pervasive technology without any meaningful definition of exactly what this “better” is or why they get to choose it for us.


Big Tech companies are on a mission to be the first to create what’s called Artificial General Intelligence (AGI). That’s technology that actually has intelligence, as opposed to the A.I. tools we use today, which have no intelligence whatsoever, despite the marketing label. OpenAI is trying to win the race to AGI by amassing exponentially greater amounts of data, with a belief that from this planet-wide swamp of facts, a consciousness will emerge.


This sci-fi fantasy is based on nothing factual. It’s just conjecture. And yet that is what is behind the trillions of dollars of investment, the stolen data, the amassing of all our personal information that is now used for government surveillance and control, and the mental, physical, and emotional health issues that abound in people no longer living like humans but as slaves to machines.


Who gave this group of men the right to decide that humanity needed a great artificial consciousness to reign over us? And a general population who’d be its data fodder?


What would you say to someone who is using A.I. as their mentor?


People are predisposed to respond to anything that sounds human. We anthropomorphise everything, like our cars and especially our pets. We assign emotional meaning to their responses and behaviours, and that’s what A.I. chatbots like ChatGPT have been designed to utilise. In recent releases, they’ve intentionally been given more empathetic traits so they can respond in ways that make you feel seen and heard, with a wealth of knowledge to draw from. That makes them seem like the perfect mentor.


The first thing to realise is that everything you type or say online is recorded forever. Not in your personal profile for you to delete or archive, but as online content for A.I. processing. Your online data is not yours. Your words are retained forever, and they belong to big companies and to governments. Consider whether what you are discussing with an A.I. chatbot is something you want to see out there on the internet in some A.I. generated content, or replayed to you at a later date when your views are at odds with some future authoritarian government.


The second thing to recognise is that, despite the chatty language, the responses you receive and the conversation you’re having with an A.I. chatbot are not an evaluation of what you most need. It’s not a person trained to communicate with you towards achieving a goal. The tool is not searching a database to find the answer to your question from a verified source. It’s not using knowledge it obtained through real-world experience and learning about what worked and what didn’t.


A.I. chatbots are literally assembling words one after another based on the statistical likelihood of that combination of words and phrases being an appropriate way of responding to that prompt.


How does it know that? The LLMs underlying chatbots have been ‘fed’ data from online content. For ChatGPT, that was every available online source that OpenAI could gain access to, legitimately or otherwise. Whatever chatbot you use, you have no idea where that concoction of ‘advice’ came from. It could have been made up from some combination of a YouTube transcript, a Quora article, Reddit advice, someone’s email, or some other ChatGPT response, amongst billions of other pieces of data.
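For readers who want to picture that mechanically, here is a deliberately tiny sketch, my own illustration rather than any vendor’s actual code, of the word-by-word idea: a hypothetical, hand-written table of next-word probabilities stands in for the billions of statistical weights inside a real LLM, and the program simply keeps sampling whichever word is statistically likely to come next.

import random

# Hypothetical next-word probabilities; a real model learns billions of
# such weights from scraped internet text rather than a hand-made table.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.4, "dog": 0.4, "weather": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"quietly": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word, max_words=5):
    # Repeatedly pick a statistically likely next word: no lookup of facts,
    # no verified source, just weighted chance.
    words = [prompt_word]
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break
        choices = list(candidates.keys())
        weights = list(candidates.values())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog ran away"

Real systems predict at the level of sub-word tokens and condition on the whole conversation rather than just the last word, but the principle is the same: likelihood, not understanding.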


The amount of data is too large to clean or validate as it’s collected, which is how traditional I.T. systems ensure the quality, accuracy, and reliability of data. Instead, human workers known as “data annotators”, a cheap labour force drawn from underprivileged countries, go through A.I. responses to common prompts and train the model how to answer.


This is how ChatGPT answers your questions: not because it’s brilliant and intelligent, but because humans go word by word through the racist, offensive, pornographic, misogynistic, and violent responses that come from the bulk of internet content and provide palatable alternatives.


That’s the ‘training’ this type of software has to go through. It’s not managed rigorously, and there are no guarantees. Nothing that is produced by A.I. is guaranteed. It’s not human. It’s not qualified. It’s not invested in you, only in keeping you attentive and in conversation with it for as long as possible.


That’s how people develop emotional attachments to their A.I. chatbots and come to rely on them for advice. It might start out with giving the chatbot a name, or with asking it what to have for dinner, what to watch on Netflix, and what to say when your boss is dismissive of your suggestions. But then it becomes dependency, where people can’t make a decision about anything without first consulting their A.I. companion.


Prolonged, heavy use involving emotional conversations can develop into A.I. psychosis. This is now a recognised mental health condition, sometimes requiring hospitalisation and, in extreme cases, leading to suicide and murder.


If you’re using A.I. as a mentor, don’t. Phone a friend. Phone a colleague. Phone a counsellor. Have real conversations with real people who truly empathise with your wants and needs, can talk from real lived knowledge, and can offer real emotional support. If you want specific help and guidance, book in with a coach, someone who’s trained to help you set and achieve your goals realistically and systematically.


What do we need to do as parents, as community members, and as CEOs to protect our humanity?


Computer systems were cheap and easy to implement and operate when they first started out. What we’ve developed in forty years is an overlay of technological complexity in our businesses and homes that requires A.I. tools, or their equivalent, just to get us through each day. Up until ten years ago, we had successful business and home lives without needing A.I. to help.


What we need to do is question the statement that A.I. is necessary and beneficial. What is it necessary for? Who is it benefiting? Handing over our mental capacities to a machine is not the answer. It’s not making anybody happier. It’s not expanding our children’s creativity and imagination or encouraging them to flourish. It’s not bringing about world peace. It’s not saving the planet.


It’s doing the opposite.


The LLM type of A.I. is not a “cool tool” to replace thought whenever a question pops into your mind. Its place, if it has one, is in specific circumstances, helping niche parts of the population overcome difficulties where there’s no other human alternative.


Brainstorm with colleagues, and develop your imagination and creativity to come up with unique ideas that aren’t a carbon copy of what every other internet user on the planet is producing. Encourage your children to read books and, most of all, to play with other kids.


In business, consider all the technical overhead and ask what it’s adding to your company. Maybe it’s time for a digital reset: a refresh of your business model to focus on meeting customer needs with a workforce that’s productive because they enjoy their work. By using technology in its essential capacity and not forcing it into every process, you can make choices about your technical investments that deliver real human benefits aligned to ethical company values.


That’s what people will be looking for when they tire of the cookie-cutter companies fronted by chatbots, automated help desks, apps that crash, and processes that take three hours online instead of one short phone call.


Is it too late, or is there hope?


The term “Artificial Intelligence” (A.I.) is ingenious in the implied assurance it provides that we’re interacting with something intelligent and conscious. Chatbots respond confidently to questions without hesitation, like an all-knowing personalised assistant, requiring no effort or discussion with anyone else.


That effortless attention to every stray thought dishes out the regular dopamine hits we get from having our curiosity satisfied in an instant. It fulfils the human tendency to be easily distracted without requiring us to get up from our chair or stop whatever other task we’re doing. We feel we’re learning new things all the time, even though the reality is that our cognitive abilities diminish the more time we spend deferring to an A.I. chatbot instead of using our own intelligence to recall, consider, research, deliberate, and imagine.


The big claim is that A.I. is necessary, a step in human evolution, essential to solving humanity’s biggest problems, even though there is no consensus on what that actually means, and no evidence of benefit delivered at a global scale that would justify the money spent or the damage being caused in every facet of society.


I firmly believe that the future of humanity and the planet is in the hands of ethical companies that resist the A.I. hype, together with this generation of children and their parents, bringing about the necessary digital reset that puts people first instead of machines. I’m doing what I can to help make that happen, supporting ethical, values-driven companies to focus on the happiness and satisfaction of their workforce and customers.


I hope that when families and children reset their lives away from screen-based living, humanity will find its way back.


Follow me on Facebook, Instagram, LinkedIn, and visit my website for more info!

Read more from Tricia Brouk

Tricia Brouk, Founder of The Big Talk Academy

Tricia Brouk helps high-performing professionals transform into industry thought leaders through the power of authentic storytelling. With her experience as an award-winning director, producer, sought-after speaker, and mentor to countless thought leaders, Tricia has put thousands of speakers onto big stages globally. She produced TEDxLincolnSquare in New York City and is the founder of The Big Talk Academy. Tricia’s book, The Influential Voice: Saying What You Mean for Lasting Legacy, was a #1 New Release on Amazon in December 2020. Big Stages, the documentary featuring her work with speakers, premiered at the Chelsea Film Festival in October 2023, and her most recent love is the new publishing house she founded, The Big Talk Press.


This article is published in collaboration with Brainz Magazine’s network of global experts, carefully selected to share real, valuable insights.

