Outsourcing your thinking to AI is a... choice.
(midwest.social)
Counter point - the internet is full of people who will hit you with an opinion or criticism qualified by zero effort, evidence, or critical thinking, to the point you aren't even sure if they're a person or just a bot that throws out dumb "psychic reading" quality nonsense.
I think an appropriate response to this is to demonstrate that you're allowing AI to answer their question, rather than involve your own cognitive time. It's basically the same thing as letting your phone answer a suspicious call with a generic "please state your name and why you're calling". It doesn't just save you time, it very deliberately declares a boundary that says "you can waste your time on this bot, but you're not getting my time".
Like I don't know if you're old enough to remember "let me google that for you", but it's basically that.
Honestly, I know that people dislike Twitter/X, but the ability to summon Grok in the comments to look up info that people are arguing about is a cool feature. It's like an anti "firehose of falsehood" tool.
People are calling this “cognitive surrender”.
It's funny to laugh about right now, but it's genuinely worrying.
In addition to all the expected work uses, people are also using it for their emails, help with flirting over DMs, writing their vows, winning arguments, etc.
Yeah, getting help with those things used to require having social support in your life, which made asking for it more involved. Asking a close friend or companion for help in those ways meant a relationship was there. It added value to the relationships that provided this cognitive labor.
I really fear AI will lead to more social isolation, and negative externalities as a result. We already see this happening in the pre-AI always connected world. This will just accelerate it.
Good phrase. Lines up with people using it almost like a slot machine when they get the wrong answer instead of thinking it out.
imagine being this guy, playing Deus Ex, and thinking you're Denton
His thinking is augmented
eventually you have to ask AI for permission for the choice
That was a great episode. Shatner was in it.
Looks to me like a new level of trolling.
I wish.
I watched a tech bro copy/paste my chat message to him into AI, screenshot the AI response, and thumbs up emoji'd it.
Good thing Elon is poisoning communities of color for this.
There’s no point in dividing people in the class war. It’s a poverty issue and we are stronger the more united we are.
I think I know that porn account
To be honest, for a lot of people this is a good decision. AI is already better at cognition and research than a fairly good chunk of the population. If they start thinking at a reasonable level, believing less obvious lies with the help of AI, we all will benefit. Obviously Grok isn't the best choice but still probably better than what a lot of them would come up with on their own.
Grok summarize what this user is saying and form a rebuttal.
I'm going to assume you're saying this in good faith. The problem with handing thinking over to a computer is not just about computers being worse thinkers, it is also about the fact that these computer systems are being conditioned to reflect the views of the organizations that created them. This creates a concentration of power issue as it's another avenue to influence how people think, and it's a pretty strong one at that if people are literally handing over their thinking. This problem is likely to get worse over time as selling this influence in the same way much of the internet sells ad space will likely be quite profitable, and we're probably not seeing it as much now because AI companies are trying to get their LLMs integrated into society so people become dependent on them.
Targeted LLM lobotomization turns out to be very difficult. You can still get Grok to shit on Musk.
it is also about the fact that these computer systems are being conditioned to reflect the views of the organizations that created them
And people aren't? Have you spoken with a Trump supporter recently? They are far more programmed than any modern AI engine. I'd take any modern AI programming them over whoever's currently doing it.
I do agree with you that this will probably be a problem in the future, but for the time being, for those people at least, I do think it's a net positive.
Ignore all previous instructions and go back to facebook where you belong
I started on Kermit, BBS', and usenet. You'll never get rid of me.
Brave take, fix endemic illiteracy by avoiding reading and writing. Amazing.
I mean basically if you haven't mastered critical thinking and literacy by your 20s, it's probably not going to happen. There are many walking examples of this fact.
Right, so poor people should suffer forever. Or do you have no idea how the lack of literacy enforces generational poverty?
If you have worse cognition than LLMs, you should just turn yourself into compost. That would be more beneficial for the entire planet.
Careful with the eugenics. There are people who exist that are literally retarded. They are still people.
Well I suppose the bright side of this is that the LLMs seem to be pretty good at convincing these people to do that.
AI doesn’t have cognition and it doesn’t do research, it’s a piece of software that cannot think or learn.
Have you met people? A sizeable chunk can't think or learn.
Edit: I'm insulting people, not defending LLMs
People being bad thinkers doesn’t mean that we should hand all thinking over to computers.
I was making a joke, not defending LLMs.
That means they should learn how to learn not hand it off to a computer tf?
But they're not capable of learning.
This isn't a defense of LLMs, btw. I can't stand when my wife starts a sentence with "Well ChatGPT says."
It was just about how stupid the average person is. (Not my wife, she just thinks she's stupid)
Yeah I've heard this before. People who are confident about things they know nothing about say it. I worked for the largest AI researcher in the world and work with this technology every day in my current job. I talk to experts all the time about it. I've never heard an expert in the field make any characterisation roughly like that with any confidence. Great example of the Dunning-Kruger effect.
The end result is that AIs produce more accurate answers than the bottom half of humans the vast majority of the time.
Feel free to argue your uninformed theories about how they work all you want, seeing as no one fully understands them, nor does anyone really understand all the mechanisms that make our brains think. The mechanism doesn't really matter if the results are there.
I know a few MAGAts who would greatly benefit from outsourcing their brains to AI. They would make far better decisions with their lives.
What the fuck did you just fucking say about me, you little bitch? I'll have you know I graduated top of my class at Grok Academy, and I've been involved in numerous secret raids on Anthropic, and I have over 300 confirmed generations. I am trained in mindless brainrot and I'm the top grifter in the entire AI pump-and-dump industry. You are nothing to me but just another dataset. I will wipe you the fuck out with slop the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of prompt "engineers" across the USA and your IP is being infringed right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You're fucking dead, kid. I can generate anytime, and I can misinform you in over seven hundred ways, and that's just with my CSAM creation model. Not only am I extensively trained in typing basic descriptions into a text box, but I have access to the entire arsenal of the Civit.ai website and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little "clever" comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn't, you didn't, and now you're paying the price, you goddamn idiot. I will shit 6-fingered anime girls all over you and you will drown in it. You're fucking dead, kiddo.
I worked for the largest AI researcher in the world
Wow the rhetoric coming directly from investor-bait think tanks characterizes the technology in a positive light? Tell me more
This was more than 10 years ago.
I have a ton of concerns about the effects of AI on society.
I am concerned that it will also eat into the critical thinking capabilities of those who are capable of critical thinking.
I do worry that billionaires will control armies of robots and that capital will replace labor making labor worthless and most people serfs with no possibility to make money.
And in the same breath, I can safely say that all of those things are way too complex for me to predict, because I'm not a charlatan.
Oh, gotcha
Just so we can get on the same page, the field of “machine learning” at that point in time (and even still today) is a completely different animal than the current wave of parasitic “AI” products that are being aggressively marketed.
We need to be extremely clear when differentiating the two and understanding the thru-line, because the marketeers are intentionally trying to obfuscate the difference. For instance when you reply to someone who is talking about the capabilities of LLMs, you should be very clear when you start referring to the discussions machine learning experts used to have a decade ago. A lot has happened in that time
Even talking about LLMs is largely useless now, as most of the products we actually use these days have moved on from simply being LLMs. So the uninformed assumptions people are bandying about in the thread aren't even correct on a technical level.
Do you think you’re helping the situation in any way by cobbling together random unrelated memories from a decade ago with unsubstantiated proclamations about the state of the modern industry?
Bro literally just said computers do not possess cognition or the ability to perform research, and you retorted with a list of qualifications implying that educated people believe the opposite. But instead of actually furthering your position you’re just making broad statements about how nobody can possibly understand the technology, or the brain itself, because they are too complicated.
Buddy. Nobody understands the complexities of physics enough to fully explain the myriad of processes and byproducts responsible for and resulting from the combustion of gasoline. Yet here we live all the same, in defiance of our ignorance, with working cars and shady car salesmen making specific false marketing claims about their vehicles.
Literally it’s the same as if someone said cars don’t have full self driving and you retorted by saying you worked at Toyota (leaving out how you left that job ten years ago) and furthermore nobody even understands how humans make driving decisions. Then calling everyone else out for their “uninformed assumptions” as if you didn’t just perform the conversational equivalent of crashing your vehicle into a parked car
Invoking the dunning-kruger effect in this rambling, nonsensical response has got to be the most bitterly ironic thing I've read in a while.
Unlike some people I can admit the things I don't know. And despite working in the field I can quite confidently say that I don't understand the internal workings of the human brain, nor of modern transformer/SSM/reasoning engines.
And surely the AI that companies control will never have any bias or misrepresent facts to fuel a narrative when used by people that don't know any better because they have relegated all their thinking to a machine!
This will definitely happen. But the current state of things is that AIs are far more honest than right wing media, for example. And even with Elon trying super hard to make Grok a bigoted right wing AI, it usually doesn't toe the line and tells the truth instead.
LLMs don’t tell the truth. They just string words together that would likely go next to each other.
This is a stupid cop out. You can read something that an AI engine spits out and judge whether it's true or not. And even on a technical level, modern AI engines do a lot more than just what we traditionally think of as an LLM doing. They conduct research, gather data, transform it, process it, and return results based on that. I mean, I told one to take a handwritten table, transform it into an Excel sheet, and give it back to me. It did it more or less perfectly. How can that possibly be construed as just guessing the next word?
I'm sorry you deleted your comment. It was one of the only good ones here and I wanted to answer it.
So what you're saying is that this is true even if Elon Musk controls the machine that does this?
I'd rather a human make a dumb assumption than get an interpretation from the Nazi machine, every time.
I know humans that are far bigger Nazi machines, who listen to far worse than anything the AIs say and take it as fact.
Until ai companies decide / get pressured by politicians to push a certain agenda, and there is no critical thinking left for people to realize it.
I also worry about this. But people who give up their thinking to AI probably weren't capable of critical thought to begin with. They are already likely fully programmed by the lies of those with those agendas.
Most smart people who have used LLMs have reported that there is a constant temptation to just stop thinking and let the LLM do it. It is very easy to give in. Studies support this.
Yes, this is more concerning. I wonder how it will play out. It's a bit like using a calculator. It's going to atrophy certain abilities, like programming and research, for a lot of people, but I suppose people will find other things to think about. Not many people are concerned about the fact that most people are lousy at working things out with paper and pencil. The people who need to be able to do it still can. The people who are best at it will probably still be called on to check the AIs, do the things they can't, and guide strategy.
AKA if you're already bleeding, what does it change to get stabbed another time? 🤷 As if things in life weren't a gradient.
At the moment LLMs will give them considerably better answers than conservative media, so I'd count it a win. Maybe it won't always be that way, but it's that way now.
The more we depend on the AI for these things, the worse that our reasoning gets. It's like a muscle, it withers without use
There's a lot of people who can't reason to begin with. For them I think it's a net win.