
Studies Claim That Using AI Has a Negative Effect on Human Cognitive Skills

But how horrible is it?

Image credit: metamorworks/Shutterstock

People have very different opinions on AI nowadays. Creatives hate it for trying to take their jobs, but if you talk about it with friends who are removed from this fight, you've likely heard them praise AI and explain how it helps them with their tasks.

Whatever your opinion of the technology, its growing use appears to be producing an unsurprising result: several studies referenced by Forbes claim that relying on AI is detrimental to our cognitive skills.

In the Generative AI Can Harm Learning report, researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while practicing math problems did worse on a subsequent test than those who didn't use it.

The authors think the problem is that students used the chatbot as a "crutch," simply asking it for answers: "Students were not building the skills that come from solving the problems themselves."

Furthermore, according to Forbes, educational experts believe that students "are increasingly being taught to accept AI-generated answers without fully understanding the underlying processes or concepts." They are concerned that future generations "may lack the capacity to engage in deeper intellectual exercises, relying on algorithms instead of their own analytical skills."

Image credit: Primakov/Shutterstock

The National Institutes of Health also warns against "AI-induced skill decay," the result of excessive use of the tech, which can end up stifling human innovation. When workers turn to AI for everyday tasks, they might miss opportunities to practice and refine their cognitive abilities, Forbes notes.

Moreover, reliance on AI is raising concerns about the "erosion of human judgment."

"In sectors like finance and healthcare, AI systems are increasingly being used to recommend investment strategies or medical diagnoses. The risk of incorrect outputs or dangerous guidance remains a concern, as glitches can show up in even the most sophisticated LLMs. The more decisions we delegate to AI, the less practice we get in honing our own judgment."

Forbes concludes that our collective goal should be to "create spaces where human intelligence remains at the center." Researchers at Stanford, in turn, highlight the importance of explanations, so AI shares insights, not just outputs.

It can be argued that math problems are not a perfect study subject. I'm sure the invention and widespread adoption of calculators caused a fair share of fearmongering back in the day: of course, counting on a piece of paper or even in your head wakes up more brain cells than doing so with technology, but is it as horrible as it's painted?

"Tools like calculators and spreadsheets were designed to assist in specific tasks – such as arithmetic and data analysis – without fundamentally altering the way our brains process information," says Forbes.

From my point of view, math is a means to an end: all those calculations serve a bigger goal than simply putting numbers together. Asking ChatGPT instead of thinking or doing research is definitely not as beneficial for the mind, but is it truly the end of our intelligence? The internet opened a whole new world of possibilities, letting us learn about all kinds of things quickly instead of spending hours at the library, and it, too, was once a source of concern and contempt.

I believe it's our methods that evolve, not the end goal, and this is not as scary as it looks to some. While worries about generative AI are valid, I doubt that solving math problems with artificial intelligence instead of the real kind will lead to humanity's decline, though that, too, can be argued.


Comments (1)

  • Anonymous user

    Yes, focusing on maths questions may not be the best method for conducting this study, since we already had technology that could spit out the answers for us. That said, I don't think that's reason enough to dismiss the broader concerns around skill erosion and the atrophy of reasoning abilities.

    In the worlds of business and industry, end results are valued over processes. Processes are literally just considered the means to the end. If something gets the job done faster, cheaper or more efficiently, then it’s obviously better, right?

    But a weakness of looking at everything through the lens of financial performance is that you tend to start only valuing things that can be easily quantified numerically and graphed alongside profits. That positions certain important things in your blind spot.

    Important things like the degree to which your staff intuitively understand the underlying processes involved in what they're doing. Or how much knowledge of key systems, techniques and potential pitfalls is being passed down from senior staff to junior staff. Or how personally invested your staff actually are in the quality of your product. Not to mention whether they can even distinguish a quality product from an inferior one.

    Company culture is vital to fostering innovation and ensuring your business can weather nasty surprises or solve novel problems. A toxic culture is one where employees are hesitant to point out potential issues for fear of being blamed, shamed and fired for them. If an employee doesn’t feel valued or respected, they’ll do the bare minimum, never stick their neck out and apply Band-Aid solutions.

    So consider a company with fewer employees, paid less, because their job is no longer about expertise, craftsmanship or problem-solving skills; it is simply to tell the AI what to do. Maybe they get to be creative insofar as they tell the AI whether to generate a blue dragon or a red dragon.

    But they're not going to be trained in anatomy, because the AI is supposed to handle that for them, so they can't do interesting stuff with the design. They won't know what subtle movements would lend an intimidating weight to its walk. The way the dragon's muscles attach to its skeleton is a mystery to them, so if the legs look wonky, they won't know what the AI has done wrong or where to begin fixing it. They settle for whatever it gives them, because that's all they're paid to do.

    Now imagine this not just in creative industries, but all industries. Industries that control the vast supply chains involved in making the planes we fly in and the medication we take. Industries which our safety depends on. Imagine the staff of our governments and courts relying on ChatGPT to interpret legislative documents because they’re not used to parsing the intricacies for themselves. Or worse, imagine them deferring to it for actual decision making.

    Not all automation is automatically a good idea. Automate too much and we stop perceiving what’s being automated. We get complacent. We forget how it works and what it’s even for. Decisions get made for us that we would never have made ourselves, had we known what was being decided on.


