Dannielle K Pearson

Digital Dysfunction: Why the rise of AI concerns me

I am a self-professed health nut. I have my daily greens, I train five to six times per week with a mix of high-intensity work and some form of strength training, I have a regular meditation practice, and I am religiously in bed by 9pm. My latest health addition is the infrared sauna. I love my sauna time. I find it relaxing. The soothing music emitted into the fifty-five-degree-centigrade wood-panelled cocoon beckons relaxation. I can feel my body unwind and the tension release. Every breath I take is with purpose, mindfully aware of my most basic vital sign. The peace cultivates an awareness of both my mental and physical state. A rare but necessary opportunity to connect with myself. A luxury, I find, only obtainable when life's distractions are suspended.

Despite my love for this savoured moment, the sanctity rarely lasts. Invariably the door swings open, an awkward greeting is exchanged, and my Zen time is shared with a sauna mate. Typically, he or she picks the furthest spot from my own, in a conscious nod to personal space. I appreciate this. I close my eyes, keen to recapture my blissful state, when I am interrupted by a familiar glow against the dimly lit backdrop; one only a mobile phone can create. The neon light is paired with the sound of the newest TikTok dance routine, or Mr Beast's latest YouTube clip. My sauna mate is in the grips of a digital trance.

I was born in the early 80s, which means I am part of a new generational cohort coined the "geriatric millennials" (ouch, by the way). In an attempt to lean into my geriatric labelling, I clear my throat in the hope of capturing my sauna mate's attention. As suspected, no such luck. I politely yet unavoidably wave, grabbing their attention, and the phone falls silent. By this point the moment has passed; my sauna session has concluded.

The term "digital disruption" is nearly 30 years old. Harvard Professor Clayton Christensen coined it in 1995, when the internet was in its infancy. It is a term reserved almost exclusively for organisations, from corporates to councils, and it comes with an "adapt or face irrelevancy" warning amid an endless barrage of technological innovations, leaving tech teams ill-equipped to contend with the perpetual threat. This is all too familiar. What is less familiar is digital "human" disruption: the impact continued tech innovation is having on humans' ability to be human. Unlike our organisational counterparts, adapting to avoid irrelevancy is not a winning strategy to live life by.

We often lose sight of the fact that my younger millennial counterparts, and their successors, the "zoomers," grew up in a world where the internet was commonplace. They are the first generation(s) in human history for whom this is the case. While I am not an evolutionary biologist, it seems unrealistic that our biological wiring could adapt in a span of thirty years. Most neuroscientists and mental health professionals agree with this sentiment. Dartmouth College conducted a study on the impact technology, specifically tablets and mobile phones, has on people's cognitive ability. While a few benefits were identified, such as the ability to enhance learning and improve decision making, the negative impacts far outnumbered the positive. Some of the key findings on routine mobile and tablet exposure include:

• Higher propensity towards distraction

• Memory impairment

• Increased impulsivity

• Altered information intake and processing

• Impaired ability to empathise

Numerous studies have concluded that greater dependence on a mobile phone correlates directly with decreased learning ability and cognitive reasoning, which, ironically, outweighs the purported benefit of improved decision making. Studies have also found a relationship between compassion levels and digital media engagement: those who spent less time on social media displayed higher levels of compassion, and the inverse was also found, with those showing lower levels of compassion being more frequent users of social media.

What does this have to do with AI?

Putting it bluntly, I don't think we have a handle on how technology, universally, has disrupted human beings. This ranges from how we develop cognitive capabilities, to how we form vocabularies, which have declined by as much as 30% from previous generations, to how we develop social intelligence, a fundamental life skill. What will be the impact of introducing a technology with shorter innovation cycles and an increased probability of becoming further entrenched in our everyday lives?

Importantly, I am not anti-Artificial Intelligence (AI). I feel the same way about it as I do my iPhone. It has utility, I like that it's there, but I am not camping outside an Apple Store to score the next model. Despite this, I can't help but ask the question: what's the rush? Maybe I am just as my generational label insinuates: geriatric, digging my elder-millennial heels in and resisting change. It's true that change is a constant, but not all change is good change.

There is no doubt AI has its benefits, but my concern is that it's not being created purposefully or with thought. To paraphrase a recent meme I read: "I want AI to do my house chores, so I have time to write and create art. I don't want AI to write and create art, so I can do my house chores." It is as humorous as it is astute.

I speak and write on the topic of critical thinking in the hope that more people will opt for independent thought over blind acceptance, a tendency innate to human nature and one technology has exacerbated. My AI concerns are grounded in the belief that, as with my sauna mate, technology has become a dependency rather than a useful addition. There is a need to take a step back and ask why, and for what purpose. If we fail to do this, I believe digital disruption will only result in more digital dysfunction.





