Possible effects of AI use on cognitive function.

hapl808

Senior Member
Messages
2,446
I get comments like "stop posting all this AI slop" and the like. I try to explain that I did not use AI, and that I spent many hours or even days researching and writing my article, but people just say, "but the format of your writing looks exactly like AI".

I can understand people's responses, because I don't like seeing people post AI articles myself. It comes across as lazy, and the information in AI articles is often shallow and may contain factual errors.

Furthermore, when I am online, I want to engage with real people, not with machine output. So I understand why people do not want to read posts created by AI. Indeed, some forums have now banned AI articles, which I think is a good move.

This is a real problem. Obviously AI was trained on people's writing, so of course there's a subset of people who write in a very similar way.

If you're interacting with a person directly, it's easy to see whether their regular writing is formatted in a similar way. I've seen people claim they wrote something themselves when it contains embedded formatting they're not even aware of. But for many like @Hip, there's an immediate witch hunt because people don't believe a human can write intelligibly with embedded formatting and lists.

Ah well, I feel like society might entirely collapse before that becomes too much of an issue, so…yay?
 

Hip

Senior Member
Messages
18,301
But for many like @Hip, there's an immediate witch hunt because people don't believe a human can write intelligibly with embedded formatting and lists.

That's right. If you write an article which is well-organised and formatted with section titles, bullet lists, quotations, and reference links to studies, people are immediately suspicious that it is AI. Generally, posts written by humans are dashed off quickly, and so are not well formatted.

So the irony is, the more effort you put into making your article's layout readable and intelligible, the more people will think it is AI, and the more it will be ignored.
 
Last edited:

Hip

Senior Member
Messages
18,301
AI can be great, but the fact that it often lies or hallucinates makes me not want to use it at all. I would have to fact-check everything it says, and that would only add more work.

Because AI is super-fast at providing an answer to your question, people now often go to AI first, rather than doing a web search. But as we know, AI can often be wrong, or may not provide a deep enough answer. So you get the answer to your question quickly, but you know the answer might not be correct.

It's only when the answer is important to you that you will spend more time verifying it, either with a web search or by reading the reference links the AI provides.

This means that for less important questions, to save time, people are now accepting answers from AI which they know could be wrong.



What I do to reduce the chance of getting an incorrect answer from AI is to pose the same question to three AI bots, and compare their answers. I usually use ChatGPT, Perplexity and Gemini. Sometimes I use Grok as well.

I actually wrote a simple AppleScript app for my Mac which, when I click it, opens ChatGPT, Perplexity, Gemini and Grok as four tabs in one browser window. That makes it easy to copy and paste the same question into all of these bots.
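If you don't use AppleScript, here is a rough sketch of the same idea in Python, using the standard-library webbrowser module (a cross-platform stand-in, not my actual script; the URLs are assumed to be the public chat pages):

Code:
# Open the four chatbot sites as browser tabs -- a rough stand-in
# for the AppleScript described above, not the original script.
# URLs are assumed to be the public chat pages; adjust to taste.
import webbrowser

CHATBOTS = [
    "https://chatgpt.com",
    "https://www.perplexity.ai",
    "https://gemini.google.com",
    "https://grok.com",
]

for url in CHATBOTS:
    # Most browsers group these as tabs of the current window.
    webbrowser.open_new_tab(url)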

If all the bots provide similar answers to my question, then I have more confidence that their answers may be correct. But if the bots contradict each other, then I know that some of their answers are suspect.
 

Viala

Senior Member
Messages
804
I actually wrote a simple AppleScript app for my Mac which, when I click it, opens ChatGPT, Perplexity, Gemini and Grok as four tabs in one browser window. That makes it easy to copy and paste the same question into all of these bots.

It definitely saves some time. For the moment I do not even ask questions on forums much, because it is quicker for me to find the answers myself. Another reason I am not so fond of AI is that it often gives long replies, and I feel like I am wasting my time reading all of it, when I could get an answer quickly with a web search and then decide whether to change the question or do a proper deep dive. I would use AI if it could do thorough statistical analysis, because that actually takes a lot of time.

If all the bots provide similar answers to my question, then I have more confidence that their answers may be correct. But if the bots contradict each other, then I know that some of their answers are suspect.

I wonder what would happen if you gave all these AIs their replies to analyze and find what is correct. It could be automated: you ask one question, the AIs reply, then they fact-check each other and come up with one true answer. Would that work, or would they still hallucinate? Something like a master AI.
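Something like this, roughly sketched in Python (ask_bot here is just a hypothetical placeholder, since each vendor has its own real API and authentication):

Code:
# Sketch of the "master AI" idea: fan one question out to several bots,
# then hand all their answers to one bot acting as adjudicator.
# ask_bot() is a hypothetical placeholder -- wiring it to each vendor's
# actual API is left out here.

def ask_bot(bot_name: str, prompt: str) -> str:
    """Placeholder: send `prompt` to the named bot and return its reply."""
    raise NotImplementedError("connect to each vendor's actual API")

def cross_check(question: str, bots: list[str], judge: str) -> str:
    answers = {bot: ask_bot(bot, question) for bot in bots}
    transcript = "\n\n".join(
        f"{bot} answered:\n{text}" for bot, text in answers.items()
    )
    verdict_prompt = (
        f"Question: {question}\n\n"
        f"Answers from several assistants:\n{transcript}\n\n"
        "Note where they agree, where they contradict each other, "
        "and which claims look unsupported."
    )
    # The judge is itself a language model, so it can also hallucinate;
    # this narrows the error, it does not eliminate it.
    return ask_bot(judge, verdict_prompt)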
 

Hip

Senior Member
Messages
18,301
I wonder what would happen if you gave all these AIs their replies to analyze and find what is correct. It could be automated: you ask one question, the AIs reply, then they fact-check each other and come up with one true answer. Would that work, or would they still hallucinate? Something like a master AI.

Yeah, it would be good to get the bots to automatically fact-check each other.

Though I find it valuable to get different answers to the same question from different bots, as it helps me understand the answer. I might read the answer from the first bot and not quite get it, but when I then read the other bots' answers, saying the same thing in different words, it finally penetrates my brain.
 

pamojja

Senior Member
Messages
2,756
Location
Austria
Please also consider the environmental impacts of Artificial Intelligence.

This is something most folks either aren't aware of, or don't want to be.
What I do to reduce the chance of getting an incorrect answer from AI is to pose the same question to three AI bots, and compare their answers. I usually use ChatGPT, Perplexity and Gemini. Sometimes I use Grok as well.

Triple the environmental impact!?!

I think the only way to use AI with the least environmental, global and individual impact (making us all dumber and lonelier) is to use it rarely.

On the one hand, one really has to acquaint oneself with it to put it to good use - after all, it will soon govern all of us - and on the other hand, one has to train in critical thinking more than ever, for exactly that reason. Only that way will it not make us dumber and lonelier.
 

southwestforests

Senior Member
Messages
1,389
Location
Missouri
There are a number of posters here who seem to have something against AI, for whatever reason.
I will admit to being one of those.
As it happens, although I'm no tech geek, I do know something about what it is and how it works.
And I still have something against it, a something which has a defined foundation.
I will not go into the details, for two reasons: I do not desire to & I do not currently have the mental energy.
I just wanted to say, yes, your assessment is correct for at least one of us & is not a figment of your imagination.
 

southwestforests

Senior Member
Messages
1,389
Location
Missouri
There are AI text detectors like this one, but I am not sure how accurate they are.
There is content about that; here are a couple of things I can remember right now and/or have bookmarked. Some are fairly recent and some are a couple of years old.

There are a number of teachers in the circle of people I know or am related to; and there are a few years' worth of ongoing conversation and news about AI on a couple of space and aviation forums I'm still on.

https://www.trails.umd.edu/news/detecting-ai-may-be-impossible-thats-a-big-problem-for-teachers
Turns out, we can’t reliably detect writing from artificial intelligence programs like ChatGPT. That’s a big problem, especially for teachers. Even worse, scientists increasingly say using software to accurately spot AI might simply be impossible.

The latest evidence: Turnitin, a big educational software company, said that the AI-cheating detector it has been running on more than 38 million student essays since April has more of a reliability problem than it initially suggested. Turnitin — which assigns a “generated by AI” percent score to each student paper — is making some adjustments, including adding new warnings on the types of borderline results most prone to error.

I first wrote about Turnitin’s AI detector this spring when concerns about students using AI to cheat left many educators clamoring for ways to deter it. At that time, the company said its tech had a less than 1 percent rate of the most problematic kind of error: false positives, where real student writing gets incorrectly flagged as cheating. Now, Turnitin says on a sentence-by-sentence level — a more narrow measure — its software incorrectly flags 4 percent of writing.

My investigation also found false detections were a significant risk. Before it launched, I tested Turnitin’s software with real student writing and with essays that student volunteers helped generate with ChatGPT. Turnitin identified over half of our 16 samples at least partly incorrectly, including saying one student’s completely human-written essay was written partly with AI.

The stakes in detecting AI may be especially high for teachers, but they’re not the only ones looking for ways to do it. So are cybersecurity companies, election officials and even journalists who need to identify what’s human and what’s not. You, too, might want to know if that conspicuous email from a boss or politician was written by AI.

There has been a flood of AI-detection programs onto the web in recent months, including ZeroGPT and Writer. Even OpenAI, the company behind ChatGPT, makes one. But there's a growing body of examples of these detectors getting it wrong — including one that claimed the prologue to the Constitution was written by AI. (Not very likely, unless time travel is also now possible?)

The takeaway for you: Be wary of treating any AI detector like fact. In some cases right now, it’s little better than a random guess.

https://cte.ku.edu/careful-use-ai-detectors
We want to emphasize the importance of the instructor's role. AI detection is imperfect, and instructors should use it with caution.
Why you should use caution

Turnitin walks a fine line between reliability and reality. On the one hand, it says its AI detection tool was “verified in a controlled lab environment” and renders scores with 98% confidence. On the other hand, it appears to have a margin of error of plus or minus 15 percentage points. So a score of 50 could actually be anywhere from 35 to 65.

The tool was also trained on older versions of the language model used in ChatGPT, Bing Chat, and many other AI writers. The company warns users that the tool requires “long-form prose text” and doesn’t work with lists, bullet points, or text of less than a few hundred words. It can also be fooled by a mix of original and AI-produced prose.

There are other potential problems.

https://citl.news.niu.edu/2024/12/12/ai-detectors-an-ethical-minefield/
AI detectors: An ethical minefield
December 12, 2024 | Amanda Hirsch

Generative AI use has been on the rise among faculty and students. Tools like ChatGPT, Gemini, Adobe Firefly, and Claude, among others, have transformed how students approach academic work. In response, some faculty have clamored for AI detectors to help them identify content they believe is AI-generated, with the goal of upholding academic integrity. However, these tools are far from perfect, and they can lead to unintended consequences for students. AI detectors’ false positive rates (and the consequent serious ramifications of false accusations) and the equity issues that arise from their use deserve careful scrutiny. Instead of relying on this flawed technology, faculty and institutions should use alternative approaches to navigating the challenges posed by generative AI in education. Ultimately, any approach should prioritize fairness, understanding, and promotion of the responsible use of AI.

https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/
As AI tools like ChatGPT gain popularity on campus, instructors face new questions around academic integrity. Some worry that they could inadvertently give higher grades to students who use AI compared to those who don’t use AI for coursework. Others are concerned that reliance on AI tools could hinder students’ development of critical thinking skills. Whether or not you integrate these technologies into your courses, it’s important to reflect on how you’ll address them with students. How can you foster academic honesty and critical thinking when every student has easy access to generative AI?

In response to these concerns, some companies have developed “AI detection” software. This software aims to flag AI-generated content in student work. However, AI detection software is far from foolproof—in fact, it has high error rates and can lead instructors to falsely accuse students of misconduct (Edwards, 2023; Fowler, 2023). OpenAI, the company behind ChatGPT, even shut down their own AI detection software because of its poor accuracy (Nelson, 2023).

In this guide, we’ll go beyond AI detection software. We’ll discuss how clear guidelines, open dialogue with students, creative assignment design, and other strategies can promote academic honesty and critical thinking in an AI-enabled world.
 

Rufous McKinney

Senior Member
Messages
14,509
There are a number of teachers
Imagine it: why study and work in school, in life, if you can cheat instead? Imagine thinking anything is OK about having the computer write your paper.
I feel for our children. And the poor teachers.

Well, I do not know how any of them Pass the Actual Test. I mean, in school or in class or in college, you sit there at a desk and write an essay, on paper, using a pen. If they have abandoned such basics, I don't know what to say to These Teachers.
 

southwestforests

Senior Member
Messages
1,389
Location
Missouri
Since AI energy use has been mentioned & I'm in the mood to source references from universities:

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
AI’s integration into our lives is the most significant shift in online life in more than a decade. Hundreds of millions of people now regularly turn to chatbots for help with homework, research, coding, or to create images and videos. But what’s powering all of that?

Today, new analysis by MIT Technology Review provides an unprecedented and comprehensive look at how much energy the AI industry uses—down to a single query—to trace where its carbon footprint stands now, and where it’s headed, as AI barrels towards billions of daily users.

This story is a part of MIT Technology Review’s series “Power Hungry: AI and our energy future,” on the energy demands and carbon costs of the artificial-intelligence revolution.

We spoke to two dozen experts measuring AI’s energy demands, evaluated different AI models and prompts, pored over hundreds of pages of projections and reports, and questioned top AI model makers about their plans. Ultimately, we found that the common understanding of AI’s energy consumption is full of holes.

This isn’t simply the norm of a digital world. It’s unique to AI, and a marked departure from Big Tech’s electricity appetite in the recent past. From 2005 to 2017, the amount of electricity going to data centers remained quite flat thanks to increases in efficiency, despite the construction of armies of new data centers to serve the rise of cloud-based online services, from Facebook to Netflix. In 2017, AI began to change everything. Data centers started getting built with energy-intensive hardware designed for AI, which led them to double their electricity consumption by 2023. The latest reports show that 4.4% of all the energy in the US now goes toward data centers.


https://iee.psu.edu/news/blog/why-ai-uses-so-much-energy-and-what-we-can-do-about-it
What are the key environmental consequences of AI development?

The environmental impact of AI extends beyond high electricity usage. AI models consume enormous amounts of fossil-fuel-based electricity, significantly contributing to greenhouse gas emissions. The need for advanced cooling systems in AI data centers also leads to excessive water consumption, which can have serious environmental consequences in regions experiencing water scarcity.

The short lifespan of GPUs and other HPC components results in a growing problem of electronic waste, as obsolete or damaged hardware is frequently discarded. Manufacturing these components requires the extraction of rare earth minerals, a process that depletes natural resources and contributes to environmental degradation.

Additionally, the storage and transfer of massive datasets used in AI training require substantial energy, further increasing AI’s environmental burden. Without proper sustainability measures, the expansion of AI could accelerate ecological harm and worsen climate change.
 

Hip

Senior Member
Messages
18,301
Triple the environmental impact!?!

Yes, I've read that one AI query uses about 60 to 100 times the electricity of a single Google search.

A single Google search takes 0.3 watt-hours of electricity (which costs about $0.002, and creates about 0.1 grams of carbon dioxide), whereas an AI search takes 20 watt-hours (which costs about $0.16, and creates about 7 grams of CO2).

Although when I am searching for a complex piece of information, I may perform hundreds of Google searches before I find it, and open up hundreds of websites to read, so that uses quite a bit of electricity too.

But being housebound, rarely socialising, never travelling anywhere, and never going on holiday, I save about 4 tons of CO2 per year, as that is the average yearly CO2 output per person from all their travelling.

And anyone who travels long haul by air for a holiday will create about 2 tons of CO2 for the return trip.

Those 4 tons of CO2 are equivalent to 40 million Google searches, or about half a million AI searches.

So even if you perform 1000 AI searches per day, every day, you would still create less CO2 than the average person creates via their routine travelling.
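Here is the back-of-envelope arithmetic, using the rough per-query figures quoted above (estimates, not measured values):

Code:
# Back-of-envelope check of the figures above. All inputs are the
# rough estimates quoted in this post, not measured values.
GOOGLE_G = 0.1        # grams of CO2 per Google search
AI_G = 7.0            # grams of CO2 per AI query
TRAVEL_G = 4_000_000  # ~4 tons: average yearly travel CO2 per person

print(TRAVEL_G / GOOGLE_G)            # 40,000,000 Google searches
print(TRAVEL_G / AI_G)                # ~571,000 AI queries ("about half a million")
print(1000 * 365 * AI_G / 1_000_000)  # 1000 AI queries/day ≈ 2.6 tons/year, under 4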
 
Last edited:

Rufous McKinney

Senior Member
Messages
14,509
A single Google search takes 0.3 watt-hours of electricity (which costs about $0.002, and creates about 0.1 grams of carbon dioxide), whereas an AI search takes 20 watt-hours (which costs about $0.16, and creates about 7 grams of CO2).
thank you, I'll be NOT ASKING Questions in the future.

Wonder how Bing is doing?
 

Hip

Senior Member
Messages
18,301
From a saving-the-planet perspective, keep an eye on a company named Quaise Energy, who are using powerful electromagnetic waves to vaporise rock and drill holes in the ground around 4 miles deep, in order to reach hot rock layers where there is abundant geothermal energy.

If they succeed, there is around 20 million years' worth of geothermal energy available in these deep rocks, so this energy is essentially limitless, and totally clean.

What's more, this geothermal energy is available anywhere in the world at a 4-mile depth; all it requires is drilling a 4-mile hole in the ground. Quaise plan to drill these holes at decommissioned coal-fired power stations, which already have the necessary steam turbines and are already connected to the grid.
 

Rufous McKinney

Senior Member
Messages
14,509
drill holes in the ground around 4 miles deep
I'd been wondering what was taking them so long tapping into that.

Surface geothermal = surface impacts on biodiversity, and by tapping geothermal, it goes away (is drained off).

Maybe we can drill a few holes discreetly in the right places. Then we could use the "mined material" to make LEGO building supplies.
 

pamojja

Senior Member
Messages
2,756
Location
Austria
So even if you perform 1000 AI searches per day, every day, you would still create less CO2 than the average person creates via their routine travelling.

I didn't mean to blame anyone, just to suggest keeping an eye on one's own environmental footprint - not on anyone else's, which is rather senseless, since preconditions different from one's own too often determine in which area of life one could save something, if one understands, is allowed to by one's responsibilities, and wishes to.

The excess usually comes from doing everything: daily AI searches and streaming services, living in a flat with Western amenities, driving a private car AND flying for vacations. However, as always, everyone is heir only to their own actions. No reason to envy or despise anyone before walking in their moccasins.

Perplexity.ai:
Activity | Average Annual CO₂ Footprint (kg) | Notes
Driving a car (typical, 1 year) | 4,600 | Based on average fuel economy and distance in Western countries.
Living in a 1-person flat (housing only) | 1,500–2,500 | Heating, electricity, and other home energy use.
One round-trip flight to India | 2,000–3,000 | Economy class, Western Europe–India, round trip.
Daily AI use (20 prompts/day) | 30–40 | Based on average prompt emissions.
Food (typical Western diet) | 1,500–3,000 | Higher for meat-heavy diets, lower for plant-based.
Clothing & goods consumption | 500–1,000 | Varies with shopping habits and product types.
Sports & recreation | 500–1,000 | Higher for energy-intensive or travel-heavy activities.
Streaming video (heavy use) | 100–200 | Several hours daily.
Laundry (washing & drying) | 200–300 | Depends on frequency and efficiency.
Waste generation (household) | 100–300 | Includes food, packaging, and other household waste.
Commuting by public transport | 500–1,000 | For regular, long commutes.

That makes roughly 10 tons a year, without much self-limiting. You can call yourself lucky - in an of course also bitter way - that life puts you in a place where you can't drive or fly, saving at least half of that.

I was lucky to have the wits, at 20 years of age, to promise myself never to own a car again - 38 years ago now. Unlucky due to my diseases at a later age, when I found that escaping each winter to an Indian beach was fundamental to regaining my health and staying independent, now for 10 years already.

Wherever we find ourselves.
 

pamojja

Senior Member
Messages
2,756
Location
Austria
when it comes to inheriting excessive carbon, this may not be the case

Here I have to enter the realm of faith. When I was meditating long-term and uncovering so many disturbing emotions, I just couldn't conclude otherwise than that my short life could never have created such mountains of mess. My parents never really beat me up. So why all that much deeper, existentially threatening trauma, determining whole lives for the better or worse?

But also from experience: nothing comes from nothing, or likewise passes into nothing. Whatever there may be before and after our personal life, I have rational difficulty negating it completely to nought. When I was gardening, it was so vivid: life decays to create compost and nutrients for what grows after. The cycles of becoming, from what came before. Causes and effects only. No coincidence without hidden causes.
 
Last edited:

Viala

Senior Member
Messages
804
A good question about environmental impact is how that will look when AI takes a lot of our jobs, because I do not see any initiative from people in power to stop that; meanwhile we have a lot of talk about how the regular things we do every day impact the environment. It's a huge chokehold in my opinion, and another way to take what we have and make our lives even more miserable: to keep us in our homes and limit everything - our heating, our lighting, the amount of clothes we buy and meat we eat, the cars we use and where we travel. Making us cold, stuck in one location and sick from nutrient-deprived food is not the way to save the planet or ourselves.

I choose people doing the actual jobs and enjoying their lives over AI doing our jobs while people watch over their shoulders in fear of generating CO2 and live very limited lives. It's silly. This is another case of turning our good intentions against us: most of us realize that the environment is important, but the solution we choose to address it is the most important part here. Limiting everything we do is not a solution; limiting AI is what we should do instead, and then work on environment-friendly technologies first. Implementing AI everywhere in our lives will make us jobless and rich people even more rich and powerful. I do not buy this environmental talk; there is too much hypocrisy behind the agenda.
 