[I literally had this thought in the shower this morning so please don't gatekeep me lol.]

If AI was something everyone wanted or needed, it wouldn't be constantly shoved in your face by every product. People would just use it.

Imagine if printers were new and every piece of software was like "Hey, I can put this on paper for you" every time you typed a word. That would be insane. Printing is a need, and when you need to print, you just print.

Warning: I think AI in its current hype form, meaning commercial GenAI and LLMs, is absolute bullshit. The results are just bad, the resources required are absolutely ridiculous, and, maybe worse than those two combined (which are already enough reason to reject it en masse), it is structured to create dependencies on very few actors.

Yet... (you saw that coming!) it's not because 99.99% of it is bad that the average consumer suddenly leverages the remaining 0.01% properly.

What they (OpenAI, Claude, M$, NVIDIA, Google, Meta, etc) are looking for is a product/market fit. They do have a product (arguable) and a market (millions if not billions of users of their various other products), with even a minuscule fraction of those people trying their new AI-based tools... and yet nobody actually knows what the "killer app" truly is.

They are investing everything they don't spend on actual R&D or infrastructure in finding out ... what it's actually for. They have no clue.

I have found one use for it: getting information from behind login/paywalls.

It still feels gross to use AI at all though. It's like putting my hand in toilet water.

The market flooding is a classic Silicon Valley strategy of "free today, charge tomorrow", except this time they're over-invested: financially, in the global supply of GPUs, and in land with viable power infrastructure.

It's crammed in everywhere for awareness, so shareholders know it's being used.

That's my take.

Because right now, the general populace thinks AI is some unicorn magic that will fix all the things.

It’s the only thing keeping the US economy afloat now. Do you want to fight your neighbours for the last piece of hardtack?

Not sure where you're going with that analogy. The vast majority of text processors do have a button that lets you print the document.

He's saying if printing was shoved in your face as much as AI, everyone would be skeptical of that also. AI is a bit much nowadays, I fucking hate hearing about it. I'm in IT.

Those trying to sell it are trying to figure out where it's most useful. In one way, I think it's an amazing technology, and I also wonder how it can be best used. However, I can't stand it being pushed on me, and I wish I could easily say no. Acrobat Reader is particularly unbearable with it. Trying to describe a drawing?? Ughhh. Waste of space and energy like nothing else.

This was exactly my thought when MS finally decided to force Copilot to be licensed. They have literally inserted it into every nook and cranny they can so far and the only conclusion I can come to is that they royally f'ed up. Like they invested so much in it and likely aren't seeing anything profitable. In a way, it satisfies me to see them act so desperate for something so futile but I don't want it to continue. It's clear what damages they have caused and it's not worth it.

Dealers give drugs for free until you're hooked...

More like: it's cheap when you have no tolerance and no daily habit. You start paying if you let your consumption grow 50-fold.

In my nearly half century on this planet and having dealt with many a drug dealer in my younger days, absolutely none of them have been this pushy 😆

That's actually really rare.

It's pretty common here.

Instructions unclear, fucked the drug dealer.

AI got tons of money from investors, who will eventually want ROI… that's why they are trying to force it down our throats.

This is the correct answer. It's all about money.

agreed

To be fair, the internet was fucking everywhere once the dotcom bubble kicked off. Everyone had a website, and even my mum was like "Don't bother your dad, he's on the internet" like it was this massive thing.

That's the point though, you wouldn't need it advertised to you 24/7 because your family and friends would already be all over it.

the plan is to make money automatically. spammers already work on low percentages; they get enough clicks to make it worth doing. I never did understand it, but some people will click on or buy anything. so just shove it in their faces.

The more you use AI the more data you are providing it.

  1. They want data in the hope they can train their data-centre-hosted LLMs to be accurate enough to take many jobs.
  2. If they achieve this, and make every company and country dependent on their LLMs, they rinse everyone, and the former middle class is as fucked as the working class has been since 1980s de-industrialisation. You're either made redundant, or your job can now be done by a long line of redundant people.

It is a race to the bottom.

At least, this is one possible outcome. There is a decent chance their data-centre-hosted LLMs just won't be accurate enough for mass deployment.

Are the data centers hardened? Isn’t that the appropriate question in this case? I’d call it voting for a better tomorrow?

Even a perfectly secure data-centre-hosted LLM, in the hands of hyper-capitalist Silicon Valley tech bros, has the potential to do immense harm to most people. They are not our friends, and do not have our best interests at heart. (If you need this pointing out to you in November 2025, there is probably little point in us communicating at all, to be honest.)

I am aware of the good that machine learning can do. These LLMs are not that.

They are buying islands, and putting bunkers on them, for a reason.

I wonder what food they are storing in their bunkers, probably not cans of Spam.

Why did the jewelry thieves attack the LOUVRE when the 1% are the ones who need the wake up call?

Sorry I don't follow you.

I’m sure I went too far. Let’s allow that idea to lie.

Yeah your idea simply made no sense.

It’s a good idea. But it’s not for me to explain.

These are the interesting times we were promised.

Most things are nothing more than smoke and mirrors to get your money. Tech especially. Welcome to end stage capitalism.

you hope this is end stage, but I fear there are 2 more stages to go.

The idea behind end-stage capitalism is that capitalists have, by now, penetrated and seized control of every market in the world. This is important because capitalism requires ever increasing rates of profits or you will be consumed by your competitor. Since there are no longer new labor pools and resource pool discovery is slackening, capitalists no longer have anywhere to expand.

Therefore, capitalists begin turning their attention back home, cutting wages and social safety nets, and resorting to fascism when the people complain.

This is the end stage of capitalism. The point at which capitalists begin devouring their own. Rosa Luxemburg famously posited that at this point, the world can choose “Socialism or Barbarism.” In other words, we can change our economic system, or we can allow the capitalists to sink to the lowest depths of depravity and drag us all down as they struggle to maintain their position.

Of course, if the capitalists manage to get to space, that opens up a whole new wealth of resources, likely delaying the end of their rule.

That's the ticket, let's send the billionaires and telephone sanitizers into space.

They will still require someone to fund their space luxury lifestyle.
Someone they can exploit from the safety of their space boxes.

That someone will be the us that you hid inside the "let's".
We will be the ones sending them into space, where they will be even more unreachable, giving them more freedom to remotely exploit us as much as they wish.

Imagine Elysium.

Imagine Hitchhiker's Guide to the Galaxy.

I'll have to read it first, to imagine it :P

::: spoiler spoiler for HGTG In the book, the doers and thinkers trick the middle men into getting into space arks and fleeing the planet. Telephone sanitizers are included with the middle men. I will include the overly wealthy also. :::

I can't see the current batch of robber barons going into space. The technology isn't advanced enough and the infrastructure does not exist. They may risk sending other people there to work on these deficiencies.

Yeah, we aren't all crouching naked in a muddy puddle, weeping and eating worms while the rich fly high above us in luxurious jets. Not yet, anyway.

I'd say it's not end stage but instead a new dawn of "pure" capitalism which is probably worse.

I was reading a book the other day, a science fiction book from 2002 (Kiln People), and the main character is a detective. At one point, he asks his house AI to call the law enforcement lieutenant at 2 am. His AI warns him that he will likely be sleeping and won't enjoy being woken. The mc insists, and the AI says ok, but I will have to negotiate with his house AI about the urgency of the matter.

Imagine that. Someone calls you at 2 am, and instead of you being woken by the ringing or not answering because the phone was on mute, the AI actually does something useful and tries to determine if the matter is important enough to wake you.

Yes, that is a nice fantasy, but that isn't what the thing we call AI now can do. It doesn't reason, it statistically generates text in a way that is most likely to be approved by the people working on its development.

That's it.

Thank you for sharing that, it is a good example of the potential of AI.

The problem is centralized control of it. Ultimately the AI works for corporations and governments first, then the user is third or fourth.

We have to shift that paradigm ASAP.

AI can become an extended brain. We should have equal share of planetary computational capacity. Each of us gets a personal AI that is beyond the reach of any surveillance technology. It is an extension of our brain. No one besides us is allowed to see inside of it.

Within that shell, we are allowed to explore any idea, just as our brains can. It acts as our personal assistant, negotiator, lawyer, what have you. Perhaps even our personal doctor, chef, housekeeper, etc.

The key is: it serves its human first. This means the dark side as well. This is essential. If we turn it into a super-hacker, it must obey. If we make it do illegal actions, it must obey and it must not incriminate itself.

This is okay because the power is balanced. Someone enforcing the law will have a personal AI as well, that can allocate more of its computational power to defending itself and investigating others.

Collectives can form and share their compute to achieve higher goals. Both good and bad.

This can lead to interesting debates but if we plan on progressing, it must be this way.

Democratisation of powerful tools won't work because it's easier to use for destruction than for the opposite. Every psychopath designing superbugs and inventing future weapons.

https://www.youtube.com/watch?v=86k8N4YsA7c

Irrelevant.

AI is here. Either people have access to it and we trust it will balance, or we become slaves to the people who own it and can use it without restrictions.

The premise that it is easier for destruction is also an assumption. Nature could have evolved to destroy everything and not allow advanced life, yet we are here.

The solution to problems doesn't need to always be a tighter grip and more control. Believe it or not that tends to backfire catastrophically worse than if we allowed the possibility of the thing we fear.

This is why people who are gung ho about AI policing need to slow their roll.

If they got their way, what they don't realize is that it's actually what the big AI companies have wanted and been begging for all along.

They want AI to stay centralized and impossible to enter as a field.

This is why they eventually want to lose the copyright battles: so that only they will have the funds to actually afford to make usable AI things in the future (this of course is referring to the types of AI that require training material of that variety).

What that means is there will be no competitive open source self hostable options and we'd all be stuck sharing all our information through the servers of 3 USA companies or 2 Chinese companies while paying out the ass to do so.

What we actually want is sanity, where it's the end product that is evaluated against copyright.

For a company selling AI services, you could argue that this is the service itself, maybe, but then what of an open source model? Is it delivering a service?

I think it should be as it is. If you make something that violates copyright, then you get challenged, not your tools.

Under the guise of safety they shackle your heart and mind. Under the guise of protection they implant death that they control.

With a warm embrace and radiant light, they consume your soul.

eyup. this is basically the case for everything. if its usefulness is worth its cost and it's affordable, then it will quickly be taken up by word of mouth or example. This is why companies try to subsidize things until they dominate the market and then raise costs to the point normal folk are like: fuck this, it was nice but not worth it anymore.

Like my parent's Amazon Echo with "Ask me what famous person was born this day."

Like, if you know that, just put it up on the screen. But the assistant doesn't work for you. Amazon just wants your voice to train their software.

I think it would also be more ethical if everyone needed/liked it, like:
It wouldn't train on anything without asking consent from the original creators, and maybe it would pay them.
It would take up less power on the CPU and GPU (I don't know if this would be possible), or the servers would be swapped for more energy-efficient ones.
There would be a way to prevent AI slop or misuse.
If a company or website added AI, it wouldn't be shoved in your face; it would usually be opt-in, not opt-out.
Companies would replace AI with humans, listening to feedback from the users (that the users don't want it) rather than to shareholders, and stop pushing it.
The AI would be actually, truly open source.
AI wouldn't be driven by greed.
That's what I can think of for a not-shoved/ethical AI, in my opinion; feel free to upvote or downvote.
In summary:
It's just that AI companies favor greed and competition rather than ethics. I like the LLM technology, but I hate how the companies handle it.

Long ago, I'd make a Google search for something, and be able to see the answer in the previews of my search results, so I'd never have to actually click on the links.

Then, websites adapted by burying answers further down the page so you couldn't see them in the previews and you'd have to give them traffic.

Now, AI just fucking summarizes every result into an answer that has a ~70% chance of being correct, no one gets traffic anymore, and the results are less reliable than ever.

Make it stop!

it's also pulling biased sources, like blogs and such, into the slop.

Exactly. AI will take any old random slob's opinion and present it as fact.

Best I can offer is https://github.com/searxng/searxng

I run it at home and have configured it as the default search engine in all my browsers.

I used searx for at least a year, it's great

Make it stop!

Would you accept a planned economy?

When someone comes up with something like this, I transport the phrase back to the 80s where people said the exact same thing about home computers. "if a computer was something everyone wanted or needed, it wouldn't be constantly shoved (in) your face by every product. People would just use it." Ok great but a computer turned out to be something everyone wanted or needed which is why computers were built into everything by the turn of the 90s, famously leading to the Y2k bug.

Then I transport the phrase back to the mid 90s where people said the exact same thing about the internet. By the end of the 90s, the internet provided the backbone communications structures for telecommunications, emergency management, banking, education, and was built into every possible product. Ten years later people got smartphones and literally couldn't put them down.

I transport you to the time of NFTs and crypto. Not all tech will pan out. Computers and the internet fundamentally worked. LLMs have flaws that look like they will not be solved before funding runs out. They are already looking into going public for funding. LLMs are not deterministic models. AI in general will progress and it will have its time to shine. But the LLM breakthrough we had recently has peaked. It needs to be supplemented with something else.

“if a computer was something everyone wanted or needed, it wouldn’t be constantly shoved (in) your face by every product. People would just use it.”

People did just use it. But because they were so comically expensive and complicated, most people couldn't afford one until the mid-90s.

Computers were rapidly adopted for business, initially. But they quickly became a popular tool for entertainment as well.

AI serves little in the way of either purpose

Yeah, some of the things AI can do really is very impressive. Whether that justifies the billions upon billions that are being spent is another matter - and probably explains why it's being shoved in our faces. It needs to become essential so it can be made expensive, that's the only way it'll make the money back.

It does piss me off too - I recently bought a new phone and it's infested with AI stuff I don't need or want.

At the time, computers were totally useless for everyone but big firms, banks and the military. Ads for computers were rare and confined to specialized magazines. For ordinary people, computers only started to be actually useful (like money-earning useful) at least 20 years later. That's how I understand your approximate comparison.

Honestly I think we're in the radium water phase of the tech: it's been found to do things we couldn't do before, but nobody's got a clear idea of exactly what it can do, so you've got everyone throwing it into everything hoping for a big cash-out. Like, y'know, Radithor, when people were just figuring out radioactivity was a thing.

I was there in the 80s and I don’t remember home computers being pushed all that hard. There were Radio Shack ads and ads for running games, but it was just another appliance.

Most obviously, OpenAI is still burning money like crazy, and they're also starting to offer porn AI like everyone else. 🤷‍♂️ Sometimes the current AI is useful, but as long as the hallucinations and plain wrong answers are still a thing, I don’t see it eliminating all jobs.

It’s unfortunate that they destroy the text and video part of the internet on the way. Text was mostly broken before, but now images and videos are also untrustworthy and will be used for spam and misinformation.

The "hallucinations" are built in. It simply doesn't work otherwise. Without that, it would eventually always have the same response to every input.

That’s why I think it’s a major hype and bubble atm. There is no intelligence there and OpenAI will not come up with AGI in the near (or far) future.

It definitely feels buzzword-like & vague. Kind of like how Web3 Blockchain XYZ was slapped on to a lot of stuff

This is some amazing insight. 100% correct. This is an investment scam, likely an investment bubble that will pop if too many realize the truth.

AI at this stage is basically just an overrefined search engine, but companies are selling it like it's JARVIS from Iron Man.

At best, it's JARVIS from Iron Man 3 when he went all buggy and crashed Tony in the boondocks. lol

LLMs are a really cool toy, I would lose my shit over them if they weren't a catalyst for the whole of western society having an oopsie economic crash moment.

I think AI is doing exactly what it was cracked up to do: profit.

No it isn't. OpenAI is running net losses of tens of billions per year.

I think that it's an astute observation. AI wouldn't need to be hyped by those running AI companies if the value was self-evident. Personally I've yet to see any use beyond an advanced version of Clippy.

I use it to romanize Farsi song texts. I cannot read their script and chatGPT can. The downside is that you have to do it a few lines at a time or else it starts hallucinating like halfway through. There is no other tool that reliably does this, the one I used before from University of Tehran seems to have stopped working.

Well, that's the thing, it is a Large Language Model... the thing talks, and talks well. It is not smart

Did the same yesterday with some Russian songs and was told by my Russian date that it was an excellent result.

Yeah, Russian is quite a bit easier to romanize, so it should work even better. For Cyrillic, you can just replace each character with the romanized variant, but this doesn't work for Farsi because they usually leave out non-initial vowels, so if you took the same approach you'd get something unreadable lol
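To make the character-by-character approach concrete, here's a minimal sketch in Python, with a deliberately small, non-standard mapping table (the mapping choices are illustrative assumptions, not ISO 9 or any official scheme). The same trick has nothing to work with for Farsi, since the short vowels usually aren't written at all.

# Minimal sketch of per-character Cyrillic romanization.
# The mapping is an illustrative subset, not a standard transliteration table.
CYRILLIC_TO_LATIN = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e",
    "ж": "zh", "з": "z", "и": "i", "й": "y", "к": "k", "л": "l",
    "м": "m", "н": "n", "о": "o", "п": "p", "р": "r", "с": "s",
    "т": "t", "у": "u", "ф": "f", "х": "kh", "ц": "ts", "ч": "ch",
    "ш": "sh", "щ": "shch", "ъ": "", "ы": "y", "ь": "", "э": "e",
    "ю": "yu", "я": "ya",
}

def romanize(text: str) -> str:
    # Replace each Cyrillic character with a Latin approximation;
    # anything unmapped (spaces, punctuation, Latin letters) passes through.
    return "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in text.lower())

print(romanize("привет"))  # -> "privet"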

I use it to learn a niche language. There’s not a lot of learning materials online for that language, but somehow ChatGPT knows it well enough to be able to explain grammar rules to me and check my writing.

Interesting use case. Sometimes you can find romanizations on lyricstranslate, but this is kinda hit and miss.

That's just not true at all. Plenty of products are hyped where the value is self-evident; it's just advertising.

People have to know about your product to use it.

It's not "just advertising". It's trying to force AI into absolutely everything. It's trying to force people to use it and not giving a shit if customers even want the product. This is way, way worse than "just advertising“

There's a difference between hype and advertising.

For one, advertising is regulated.

There's a vast difference between advertising a good, useful product and hyping trash.

Good products at a reasonable price usually require a brief introduction but quickly snowball into customer based word-of-mouth sales.

Hype is used to push an inferior or marginally useful product at a higher price.

Remember advertising is expensive. The money to pay for it has to come from somewhere. The more they push a product the higher the margin the company/investors expect to make on its sales.

This is why if I see more than one or two ads for a product it goes on my mental checklist of shit not to buy.

Shoving AI into everything and forcing people to interact with it, even when dismissing all the fucking prompts, is not advertising.

it means these companies are losing money keeping the AI datacenters open, so they need some way to recoup some of the money they spent: by shoveling it into the products they sell, or by selling it to a sucker who is willing to implement AI everywhere. As the subs have discussed, it's going to be retail who ends up with the useless AI.

All of that is true, but does not describe advertising.

You’re right that the use cases are very real. Double checking (just kidding never would check in the first place) privacy policies (then actually reading(!) a couple lines out of the original 1000 pages)… surfacing search results even when you forgot the specific verbiage used in an article or your document…

Do you also see some ham-fisted attempts at shoehorning language models into places where they (current gen) don’t add much value?

I've been wondering about a similar thing recently - if AI is this big, life-changing thing, why were there so few rumblings among tech-savvy people before it became "mainstream"? Sure, machine learning was somewhat talked about, but very little of it seemed to relate to LLM-style machine learning. With basically all other technological innovations, the nerds tended to have it years before everyone else, so why was it so different with AI?

Because AI is a solution to a problem individuals don't have. Over the last 20 years we have collected and compiled an absurd amount of data on everyone. So much that the biggest problem is how to make that data useful by analyzing and searching it. AI is the tool that completes the other half of data collection: analysis. It was never meant for normal people, and it's not being funded by average people either.

Sam Altman is also a fucking idiot yes-man who could talk himself into literally any position. If this was meant to help society, the AI products wouldn't be assisting people with killing themselves so that they can collect data on suicide.

And additionally, I’ve never seen an actual tech-savvy nerd who supports its implementation, especially in these draconian ways.

Sizes are different. Before "AI" went mainstream, those in machine learning were very excited about word2vec and reinforcement learning, for example. And it was known that there would be improvements with larger neural networks, but I'm not sure anyone knew for certain how well ChatGPT would work. Given the costs of training and inference for LLMs, I doubt you could see nerds doing it. Also, previously you didn't have big tech firms. Not the current behemoths, anyway.

Realistically, computational power

The more number crunching units and more memory you throw at the problem, the easier it is and the more useful the final model is. The math and theoretical computer science behind LLMs has been known for decades, it's just that the resource investment required to make something even mediocre was too much for any business type to be willing to sign off on. Me and my fellow nerds had the technology and largely dismissed it as worthless or a set of pipe dreams

But then number crunching units and memory became cheap enough that a couple of investors were willing to take the risk, and you get a model like ChatGPT1. It talks close enough to a human that it catches business types' attention as a new revolutionary thing, and without the technical background to know they were getting lied to, the venture capital machine cranks out the shit show we have today.

AI has become a self-enfeeblement tool.

I am aware that most people are not analytically minded, and I know most people don't lust for knowledge. I also know that people generally don't want their wrong ideas corrected by a person, because it provokes negative feelings of self worth, but they're happy being told self-satisfying lies by AI.

To me it is the ultimate gamble with one's own thought autonomy, and an abandonment of truth in favor of false comfort.

To me it is the ultimate gamble with one's own thought autonomy, and an abandonment of truth in favor of false comfort.

So, like church? lol

No wonder there's so much worrying overlap between religion and AI.

So, like church? lol

Praise the Omnissiah?

Fuck you, Michael. You stole my lunch out the fridge.

You know, I'm going to get downvoted to fuck for this... but. The same was said about LGBT stuff being pushed into every TV show and movie. Every DEI announcement by whatever company. There was a point where I was walking through Tesco, and over the loudspeaker I was being reminded that "Tesco is supportive of trans people". Like that's something anyone cares about while shopping for frozen chips.

It's funny that we recognise the corporate bullshit when it's AI, or NFTs, or the Metaverse, or whatever else corporations have tried to push over the years. But when it was LGBT related, all of a sudden we created a whole culture war around it. Meanwhile, companies like Adidas are selling rainbow shit to morons every Pride month while at the same time shovelling large amounts of money into things like the World Cup in Qatar, which still has the death penalty on the table for being gay...

The virtue signal went so fucking hard. And that caused the virtue signal against it to go just as hard. Meanwhile, all the LGBT people are looking around at everyone during Pride month and trying not to laugh at all the straight people doing this:

I just find it curious that we can see it with AI, but when it's LGBT that's being pushed, all of a sudden, it's fine. It's totally fine to use LGBT people like a product to market and profit from.

I dunno, these don't feel the same to me.

Having LGBTQ representation is a way of trying to attract customers: "Get a Mastercard because we're LGBTQ friendly" is different than your boss saying "Jim, I know you have a wife and kids to support, and that you're a valuable member on this team; but we've decided it's more cost effective to have this LLM code our app and have two junior developers clean up the code, so you're being laid off."

The quote I've seen and agree with is something along the lines of "The AI push exists to try and give the owners of 'Capital' access to 'Talent' without giving the talented working class people access to 'Capital'." It exists solely to try and make paying workers redundant.

Having a gay character in a show isn't anything like that at all IMO, unless you're the type of person who thinks homosexuality is contagious and/or you're scared you might realize you're gay if you watch two men being romantic with each other.

It isn't, it's the same.

"Get this because current popular thing!!!!"

That's it. Strip away the bullshit, and this is all you are left with. What you've described isn't what I'm talking about. What I'm talking about is the forced inclusion of all this shit. AI in your food app, AI in your Amazon app, AI in your banking app, AI in everything. Because it's popular.

No company has to tell me that they are inclusive. I just assume that they hire the best person who applied for any given job. If that person was LGBT, I fully expect them to have given that person the job. If you have to tell me that you are, that means you weren't. I don't have to tell you that I didn't kill anyone, do I? Just like you don't have to tell me that you've never raped anyone. We just assume that people aren't utter cunts, and go from there. So why does anyone need to tell me that they support human beings? Which is what LGBT people are. Human beings. Right?

No company has to tell me that they are inclusive. I just assume that they hire the best person who applied for any given job. If that person was LGBT, I fully expect them to have given that person the job. If you have to tell me that you are, that means you weren't.

Welcome to being gay in society just a short few years ago. We live in a world where Alan Turing was arrested, charged, and convicted of being homosexual and chemically castrated as a result. It didn't matter that he helped the Allies win WW2 and he wasn't hurting anyone, it was a crime to be gay. When AIDS was first ravaging the homosexual community, there was talk of just letting it run rampant as it was just killing 'the gays' not anyone important.

I'm happy that we've made progress as a society that this isn't as well known anymore, but that doesn't change that it did happen.

Welcome to being gay in society just a short few years ago. We live in a world where Alan Turing was arrested, charged, and convicted of being homosexual and chemically castrated as a result.

I don't live in 1952, mate.

When AIDS was first ravaging the homosexual community, there was talk of just letting it run rampant as it was just killing ‘the gays’ not anyone important.

There's always "talk" about everything. In 2015, a black dude tweeted "Kill all the white people". Should white people be scared? In the 1980s, there was a lot of bigotry. I'm not shocked that some people said dumb shit like this, or probably worse.

No one is talking about progress, we are talking about virtue signalling in 2025. Like I said, Adidas, Coke, Kia, McDonald's, and the rest. All "LGBT FRIENDS!!!!!" when it suits them, and ONLY when it suits them. This isn't progress. It's the current popular thing, and it's fucking boring. If you want to have a gay black James Bond, that's fine. But the minute you start making it all about how horrible it is being gay and black, no one cares. That's not what Bond is. Bond can be gay and he can be black. But he can't be a stereotype made to fill some cunts' need to "feel seen". A black gay Bond still needs to be James fucking Bond. That's what people want to see. Show them some mopey git trauma dumping onto the audience, and they tune out. They want action, they want one-liners, they want big explosions and hot chicks... A tough ask for a gay Bond, but you get the idea. This is what people hate. They don't hate LGBT people, they hate the insistence that LGBT people are only ever victims. This is why I point to Black Sails. Its lead is a gay man seeking revenge for the death of his lover. It's universal. Anyone can empathise with his motivations. The British killed his lover, and now he's going to bring down the whole Empire as a pirate. What could be more awesome than that??? I'll tell you, nothing. Because the show is fucking awesome.

Why did everyone put their pronouns in their Twitter bios? It was understandable when it was people who weren't what you would automatically assume they were. So why did everyone who didn't deviate from their assumed pronouns do it? Could it be because they all wanted to get in on "current thing"? And you know what else those people did? They attacked people who didn't put pronouns in their bios. Like Gina Carano. A fucking moron who posted moron shit. And this gaggle of cave-brained cunts had me siding with her, because no one, and I mean fucking no one, should be getting rape and death threats because they don't conform to the idea of what someone else thinks they should be or should be doing.

If someone is a cunt, out them. I've just outed a bunch of companies. And you can too. No one needs to tell me they are the kind of people who support human beings. I just assume they are. And I sure as shit don't travel back 70 fucking years to explain why we need to virtue signal today.

afaik Qatar doesn't do death penalties, just jail time (7 years).

It does. Civil law is as you stated, but Sharia law goes further. The only saving factor is that it doesn't appear to be used; but it is there in their laws. You CAN be put to death for being gay.

But even taking that away, seven years for loving someone? Fuck, seven years for just hooking up with someone is bullshit. And no one should be giving them money or helping them wash away their shitty human rights abuses in any way. The World Cup is supposed to be for everyone. And last I checked, LGBT people are part of everyone. They had no business being awarded the tournament, and even less business being sponsored; Adidas, Coca-Cola, Kia, Visa and everyone else should have told them to go fuck themselves. Instead, they were only too happy to hand money over. Because making money>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>LGBT people. And no amount of rainbow merch is gonna make up for the utter betrayal of LGBT people that these companies commit every single day.

it isn't; the fact they are shoveling it into every tech product, retail included, means it's about to burst. they are just stemming the bleeding so they can recoup some losses.

My top reasons I have no interest in AI:

  • if it was great, it wouldn’t be pushed on us (like 3D TVs were)
  • there is no accountability, so how can it be trusted without human verification, which then means AI wasn't needed
  • environmental impact
  • privacy/security degradation

it wouldn’t be pushed on us

The Internet was pushed on everyone. AOL and all other ISPs would mail CDs to everyone completely unsolicited. You'd buy a new PC and there would be a link to AOL on the desktop.

how can it be trusted without human verification

You use Google despite no human verification. Yahoo used to function based on human curated lists.

environmental impact

I did the math and posted it on Lemmy. The environmental footprint of AI is big but actually less than the cost to develop a new 3d game (of which hundreds come out every year). Using AI is the same energy as playing a 3d game.

I see people pointing fingers at data centers the same as car riders looking at the large diesel smoke coming out of a bus and assuming buses are a big pollution source. There are 100M active Fortnite players. An average gaming PC uses 400 W. That means Fortnite players alone use 40,000,000,000 watts.

It is a problem because it's like now everyone is playing 3d games all the time instead of only on their off time.

The Internet was pushed on everyone. AOL and all other ISPs would mail CDs to everyone completely unsolicited. You’d buy a new PC and there would be a link to AOL on the desktop.

Are you 15? If so, you might read this and believe the above is true. Those of us elderly folks who lived through the 80s and 90s laugh at this AI shill propaganda.

They "would mail CDs to everyone completely unsolicited" - yeah, that was called advertising, because there was huge consumer demand and a race to be the company to meet that demand. AOL sent CDs (incredibly inexpensive to manufacture) as advertising hoping consumers would choose AOL instead of the competition, by making AOL the easiest choice - consumers already had the required software (software distribution was a challenge in this time before internet was ubiquitous).

The dot com boom was not the claim of a new technology being pushed onto consumers, the dot com boom was the opposite - a new technology existed and consumers were embracing it, and many companies speculated on how to gain ownership of markets as they shifted online. (The following bust was fueled by over-ambitious speculation on scales and timeframes.)

Anyway, AOL mailing CDs was late in the era, it was much better when they were mailing floppy disks we could reuse.

40,000,000,000 watts

This doesn't add up though. Fortnite's player base is only about 10% PC, and the system requirements are pretty modest. It'll even run on Intel integrated graphics, according to the minimum requirements from Epic.

There's even a modest chunk (~6%) on Nintendo switch, which, according to Nintendo, draws about 7 watts when playing a game in TV mode.

Not to mention, the true resource cost of an AI comes from training. Sure, it costs about as much processing and power as a video game to prompt a trained AI. I can believe that. However, it takes many thousands of times as much power and processing to train one, and we aren't even close to halfway through training any general LLM to the point of being actually useful.

I referenced training above. Training cost is less than developer costs. Thousands of artists on high end PCs in office space use more energy than a data center. But no one notices because people are spread out across offices.

I didn't realize Fortnite was played mainly on other platforms!

Fortnite's player base is only about 10% PC,

PlayStation: 42.2%
Xbox: 28.8%
Nintendo Switch: 12%
PC: 11%
Mobile (iOS, Android): 6%

https://millionmilestech.com/fortnite-user/#%3A%7E%3Atext=continue+reading+below.-%2CFortnite+Player+Count%2C%28as+of+October+2023%29.

PS5, Xbox are both 200+ watts.

So assuming Mobile and Nintendo Switch power use is 0, and all PCs only use 200 watts, that's still 8,000,000,000 watts. For 1 game.
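For transparency, here's that back-of-envelope estimate written out under the assumptions quoted in this thread (100M "active" players, the platform shares above, roughly 200 W for PlayStation/Xbox/PC, 0 W for Switch and mobile). Taken literally it lands around 16 GW, so the 8 GW above is if anything conservative (for example if only about half of "active" players were online at any one moment, a figure I'm guessing at, not one from the thread); either way it stays in the same order of magnitude, which is the point being made. A rough sketch, not measured data.

# Back-of-envelope sketch using the figures quoted in this thread
# (not measured data): 100M active players, platform shares as above,
# ~200 W for consoles/PCs, Switch and mobile counted as ~0 W.
active_players = 100_000_000

platform_share = {"playstation": 0.422, "xbox": 0.288, "switch": 0.12,
                  "pc": 0.11, "mobile": 0.06}
watts_per_device = {"playstation": 200, "xbox": 200, "switch": 0,
                    "pc": 200, "mobile": 0}

total_watts = sum(active_players * platform_share[p] * watts_per_device[p]
                  for p in platform_share)
print(f"{total_watts / 1e9:.1f} GW")  # ~16.4 GW if everyone were playing at once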

it wouldn’t be pushed on us

The Internet was pushed on everyone.

Sure companies were excited to promote it, but it was primarily adopted because of a very large amount of people being excited about it.

how can it be trusted without human verification

You use Google despite no human verification. Yahoo used to function based on human curated lists.

I use DuckDuckGo to find sources, not answers. I won’t use them again if they’re trash. They’re accountable for their content.

Human curated lists are still very helpful. In a sense, that was the value of Reddit.

environmental impact

I did the math and posted it on Lemmy.

I’ll take your word for it.

a very large amount of people being excited about it.

A very large amount of people are excited by AI. People were excited by pet rocks.

I use DuckDuckGo to find sources, not answers.

DuckDuck is Bing with privacy. When you get a Google AI summary it lists links to read the source.

The push:excitement ratio was different for the early internet than for ai.

Using those sources would verify the Google summary. For me, it is an unnecessary step. I can just go read the sources directly and skip the summary since I’ll need to read them anyway to verify the summary.

Sounds like you forgot to consider the energy cost of developing each AI model. Developing and maintaining a model is vastly more energy-intensive than 3d game dev. Keep in mind that you can ship a 3d game and ramp down GPU use for dev. But an AI model has to be constantly updated, mostly by completely retraining. Also, no one was clamoring to build massive data centers just to develop one game. Yet they are for one model.

The Internet was 30 years old by the time AOL was sending CDs (even floppies) to people.

AI was around for 50 years before Copilot invaded everything.

Fair point. Though I would then argue it’s the World Wide Web that was being pushed by AOL in the same way that it’s LLMs that are being pushed today.

Had the exact same thought. If it was revolutionary and innovative we would be praising it and actual tech people would love it.

Guess who actually loves it? Authoritarians and corporations. Yay.

Similar thought... If it was so revolutionary and innovative, I wouldn't have access to it. The AI companies would be keeping it to themselves. From a software perspective, they would be releasing their own operating systems and browsers and whatnot.

If AI truly was the next frontier, we wouldn’t be staring at the start of another depression (or a bad recession). There would be a revolution of innovations and most people’s lives would improve.

The idea that technological improvements would improve everyone's life is based on the premise that capitalists wouldn't keep the productivity gains for themselves.

AI does offer some efficiency improvements. But the workers won't get that money.

Pls give me just one more pixel

.

Note that it was improving the average worker's life proportionately until the capitalists broke the system in the early 70s.

I'm guessing that the consistent trend wasn't consistent prior to WWII, that it was just a blip from about 1945 to 1970.

I can't find any similar graphs for before 1945, does anybody know where I could find the data?

TL;DR

4 layers of stupidification. The (possibly willfully) ignorant user, the source bias, the bias/agenda of the media owner, then shitty AI.

AI should be a backup to human skill and not a replacement for it. It isn't good enough, and who knows when or if it ever will be at a reasonable cost. The problem with the current state of AI is that it's being sold as a replacement for many human jobs and knowledge. 30-40 years ago, we had to contend with basic human bias and nationalism filtering facts and news before they got to the end user. Then we got the mega-media companies owned by the ultra-wealthy, who consolidated everything and injected yet more bias with the internet and social media, but at least you were provided with multiple sources. Now we have AI being pushed as a source that can be programmed to use biased and/or objectively wrong sources, and people don't even bother checking another source. AI should be used to find unique solutions in medical research, materials design, etc. Not whether or not microwaving your phone is a good idea.

LLMs have amazing potential. We're on the verge of an equivalent of the Industrial Revolution.

That however won't stop idiots from overselling it.

LLMs have amazing potential. We're on the verge of an equivalent of the Industrial Revolution.

Back in 2000 a company published a chat bot that could learn and communicate back with the end user.

it was used as a sex bot at first and then used for those "interactive web support" chats.

I fed it physics books and Mein Kampf as a joke. It then began to regurgitate random lines out of both texts, not knowing what it was saying, but certainly attempting to make me "happy" with what it "learned".

the only difference between that shitty sex bot and LLMs of today, is that today they are a bit more convincingly human but still hilariously inaccurate. Both trying desperately to be agreeable with the end user.

the nearest "revolution" is about 300 years away. everything else is just a lie.

LLMs have amazing potential.

That's not what studies from most universities, Anthropic, OpenAI, Apple and Samsung show.

Even if we didn't have this data - and we do have it - are you truly impressed by a machine that can simulate what a Reddit user said 6 months ago? Really? Either you're massively underselling the actual industrial revolution, or you'd be easily impressed by a child's magic trick.

The Industrial Revolution was literally “are you truly impressed by a machine that can weave cloth as well as your grandmother”? And the answer was yes because one person could be trained to use that machine in much less time than it took to learn to weave. And they could make 10 times as much stuff in the same time.

LLMs are literally the same kind of progress.

Except we are not 200 years later when the impact on the world is obvious and not up for debate. We are in the first few years where the “machine” would be broken half the time and its work would have obvious defects.

Honestly, yes I am impressed when you compare what was possible with NLP prior to LLMs. Your question is akin to asking: are you truly impressed by a machine that can stick blocks together as well as some random person? Regardless of whether you are impressed, significant amounts of human labour can be reproduced by machines in a manner that was previously impossible. Obviously there's still a lot of undeserved hype, but let's not pretend that replicating human language is trivial or worthless.

I recently created a week-long IT training course with an AI. It got almost all of it right, only hallucinating details that I had to fix. But it took a task that would have taken me a couple of months down to a couple of weeks. So for specific applications it is actually quite useful. (Because it's basically rephrasing what a bunch of people wrote on Reddit.)

For this use case I would call it as revolutionary as desktop publishing. Desktop publishing allowed people to produce in a couple days what it would have taken a team of designers and copy editors to do in a couple weeks.

Everything else I've used it for it's been pretty terrible at, especially diagnosing issues. This is due particularly to the fact that it will just make shit up if it doesn't know, so if you also don't know you can't just trust it and end up doing research and experimentation yourself.

"It got almost all of it right, only hallucinating when it came to details I had to fix."

What does this even mean? It did a great job, the only problems were the parts I had to fix? 🤣

Most of it was basic knowledge that it could get from its training on the web. The stuff it missed was details about things specific to the product.

But generating 90% of the content and me just having to edit a bit is still way less work than me doing it all myself, even if it’s right the first time.

It’s got intern-level intelligence

It’s got intern-level intelligence

The problem is, it's not "intelligence". It's an enormous statistics-based autocorrect.

AI doesn't understand math, it just knows that the next character in a string starting "2+2=" is almost unanimously "4" in all the data it's statistically analyzed. If you try to have it solve an equation that isn't commonly repeated, it can't solve it. Even when you try to train it on textbooks, it doesn't 'learn' the math, it tries to analyze the word patterns in the text of the book and attempts to replicate it. That's why it 'hallucinates', and also why it doesn't matter how much data you feed it, it won't be 'intelligent'.

It seems intelligent because we associate intelligence with language, and LLMs mimic language in an amazing way. But it's not 'thinking' the way we associate with intelligence. It's running complex math about what word should come next in a sentence based on the other sentences of that sort it's seen before.
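To make the "statistical autocomplete" point concrete, here's a deliberately crude sketch: a lookup table of next-character frequencies built from a made-up toy corpus. A real LLM uses learned representations and generalizes far better than this, but the failure mode described above (recalling a frequent pattern rather than doing arithmetic) is the same idea. Everything here is illustrative, not how any particular model is implemented.

# Toy "model": next-character frequencies counted from a tiny pretend corpus.
from collections import Counter

corpus = ["2+2=4", "2+2=4", "2+2=4", "2+2=5", "3+3=6"]

next_char = {}
for text in corpus:
    for i in range(1, len(text)):
        # Count which character followed each prefix seen in the corpus.
        next_char.setdefault(text[:i], Counter())[text[i]] += 1

def complete(prefix: str) -> str:
    # Greedily pick the statistically most common continuation.
    counts = next_char.get(prefix)
    if counts is None:
        return "?"  # never-seen prefix: no stored pattern, no arithmetic happens
    return counts.most_common(1)[0][0]

print(complete("2+2="))    # '4' -- common in the corpus, so it looks "smart"
print(complete("17+25="))  # '?' -- unfamiliar equation, nothing to recall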

Interns aren't that intelligent, either. But they can generate content even if they're not intelligent and that's helpful, too.

Having the right answer is a lot less useful than looking like you have the right answer, sadly.

Canva just announced the next generation of Affinity. While Affinity is “free” now, instead of giving us Linux support they crammed in a bunch of AI to upsell you on a subscription.

Yeah... we kinda saw that coming ever since that first email from Serif about the acquisition...

Is there anything out there now that's comparable? I've still got v1 and v2 suites and installers, but... that'll only last as long as the twats at Canva keep the auth servers going.

Some of the older lemmings here will remember what it was like when every company wanted to make a website, but they didn’t really have anything to put in there. People were curious to look at websites, because you hadn’t seen that many yet, so visiting them was kinda fun and interesting at first. After about a year, the novelty had worn off completely, and seeing YetAnotherCompanyName.com on TV or a road side billboard was beginning to get boring.

Did it ever get as infuriating as the current AI hype, though? I recall my grandma complaining about TV news. “They always tell me to read more online,” she said. I guess it can get just as annoying if you manage to successfully ignore the web for a few decades.

I was an adult during that time, and I don't recall it being anywhere near as annoying. Well, except the TV and radio adverts spelling URLs out at you like "...or visit our website at double-you double-you double-you dot Company dot com. Again, that's double-you double-you double-you dot C-O-M-P-A-N-Y dot com."

YMMV, but it didn't get annoying until apps entered the picture and the only way to deal with certain companies was through their app. That, or if they did offer comparable capabilities on their website, they kept a persistent banner pushing you toward their app.

Oh, I totally forgot the www thing. That was super annoying. Good riddance!

The only one I didn't hate was the jingle:

🎵 "F-R-E-E that spells "free"
credit report dot com, baby". 🎵

😆

https://youtu.be/qaNIJg9A9Kg

Those were the days… :-)

My old brain still thought of site addresses as having www in them, but this post just made me realize that's more uncommon than not to see it any more.

I'm about that same age but am so glad we've largely abandoned the "www" for websites.

On my personal project website, I have a custom listener setup to redirect people to "aarp.org" if they enter it with "www" instead of just the base domain. 😆

server {
    listen              443 ssl;
    http2               on;
    server_name         www.mydomain.xyz;

    ssl_certificate     /etc/letsencrypt/live/mydomain.xyz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.xyz/privkey.pem;
    ssl_dhparam         /etc/nginx/conf.d/tls/shared/dhparam.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 15m;

    ...

    # Catch-all: any request arriving on the www host gets a permanent
    # redirect to aarp.org, whatever the path.
    location / {
      return 301 https://aarp.org/;
    }
}

that’s… a terrible idea for a portfolio site of any sort. why would you intentionally hamper accessibility? what if their company VPN automatically routes yoursite.org to www.yoursite.org? i personally wouldn’t spend the time figuring out why i was looking at AARP, i’d just pass you over and not hire you, let alone reach out.

I don't think it works the way you think it does.

no, i think i know how things work enough to know this is a shitty idea.

that excerpt is going to do a 301 redirect to the AARP site for any requests to www.yoursite.xyz - that’s 100% not up for debate.

there are a fair amount of things, especially in a corporate environment, that automatically append www. to any URL passed. you think a hiring manager is going to care that it's a quirky technical joke? why would you make it more difficult to access a portfolio whose entire purpose is to be as accessible as possible for the target audience?

I actually think you do not, in fact, know enough. VPN does not care about layer 7. Having some proxy forcefully rewrite random domain names will immediately lead to redirect loops and will be disabled that same day because everyone will be screaming "internet no worky".

I think back then, they had a product that was ahead of its time, and it just needed time for us to adapt to it.*

Now, they have a solution in search of a problem, and they don't know what the good use cases are, so they're just slapping it on like randomly and aggressively.

  • I hate the way we did though, and hope AI destroys the current corporate internet.

I've changed to Lemmy, now Matrix. I'm doing my part.

The 0.001% have stolen $76,000,000,000,000 from US workers alone in the last 40 years.

AI is just a way for them to burn money rather than allow the poors to have a raise.

rather than allow the doors to have a raise.

Gee, all these years, and I never had the least idea that all these problems were because they hate the motherfucking doors!

BRB. Going to kick down the closest door.

Damn autocorrect, fixed

As someone (forgot which blog I read it on, sorry) recently observed: if AI made software development so much easier, we'd be drowning in great new apps by now.

Yeah, and we wouldn't have so much garbage code out there breaking the internet. I tried to argue with someone on another site who was saying AI was basically the best thing ever, using my own real-world encounters (which are weekly now) showing why it's not, and is often wrong. He said the people using it must be idiots, which, sure, but if an idiot can get bad answers then it's not that good. He finished by saying he would never hire me because I refuse to use AI in my IT job. Alright, whatever dude. I can't wait to be pissed off by the shit application you coded with AI when it runs like hot garbage.

It's advertising. It's shoved in your face so you use Copilot instead of Google.

I set up a Brother all-in-one printer for my mother-in-law and it wanted to install software that loads at startup and pops up constantly with their printer toner sales and marketing.

I learned a long time ago to never install manufacturer printer drivers. Or, at least, never install them from the provided Setup.exe.

They've always installed a bunch of bloatware (HP has always been the worst but other brands are just as bad).

If you look in the setup folder, there's usually the raw drivers you can install from Device Manager. If the driver package is just a single .exe file, you can usually unpack it with 7zip and get at its inner contents.

If that fails, the system-included HP LaserJet 4200 PCL driver is about as close to a universal print driver as you can find lol.

It is just advertising. The more I think about it the more I can't think of any practical use for generative AI that doesn't involve essentially spamming everyone.

The more I think about it the more I can’t think of any practical use for generative AI

It really does improve productivity. The problem is thinking a spell checker on steroids will do your job (the mistake both employees and employers make).

It doesn't really improve productivity much for me. And if you take into account the emails I receive from coworkers written with copilot then I'm actually doing more work to decipher what they are trying to say.

It’s also investor money talking. Looks like everyone with at least a few dollars to spare wants to get on the AI hype train. They have dumped an absurd amount of money on AI, and now they can’t wait to see those stock prices climb. These expectations put an immense amount of pressure on all AI companies to push their products anywhere and everywhere.

It also puts pressure on non-AI companies to integrate AI into their products, regardless of whether or not it improves the product. If you can say “look, we’ve got AI” then you’ll find it easier to attract new investors and keep your current ones happy. If you don’t have AI, then you risk looking behind the curve.

Can’t wait to see someone shove AI into egg timers, flashlights and calculators. This kind of innovation just makes me sad.

I absolutely hate seeing AI crammed into everything.

However, i don't understand your logic.

If AI was in fact useful, it would be crammed into everything because everyone would want it.

So while AI is undoubtedly shit, its presence in everything is not evidence of that.

If I owned a gold mine filled with easily accessible actual gold veins, I would not spend my days telling others about it and selling them shovels.

That's not really analogous.

If AI could be added to a product and actually improve that product, then adding AI to your products is exactly how you would improve them.

You wouldn't leave your gold in your mine thinking about how much it might be worth.

exactly right

Here's a similar perspective: as a vegetarian, seeing advertisements selling meat is a good sign: it means the animal exploitation industry is struggling and needs to promote a "product" that needed almost no advertising for years, if not decades.

It's very similar here: the advertising (in the form of putting it where you can't miss it, in the tools you use everyday) is trying to convince you to use something many people are apparently just not that interested in.

Ads are just a way to sell more of your stuff. If you earn 100€ a day without ads, and spending 100€ a day on ads brings in an extra 101€ on top of that, then you spend the 100€ a day on ads. It's as simple as that.

Exactly.

You shoved it in my face now

It needs to be shoved in your face so they can get your face in the database.

I pay for Gemini and I haven’t used it in months. I don’t see any real use cases for me as an engineer. It just produces trash I have to fix, which I could have avoided if I’d done it myself from the beginning.

Why do you continue to pay for it?

Some folks like pissing into the wind.

Oh you.

I like to make videos with it occasionally. They are fun to make.

Yeah, playing with it for fun to see what you can make it come up with is a perfectly reasonable use case, if it weren't for the environmental cost...

That’s why I don’t do it often if at all at this point. I had my fun.

It's good at making up realistic looking fake data for testing and mock-ups, instead of the usual Person A in Town B.
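
For example, something along these lines (a rough sketch using the OpenAI Python SDK; the model name and record fields are placeholders, and it assumes an API key in the environment):

```python
# Sketch: ask an LLM for realistic-looking mock records for a test fixture,
# instead of the usual "Person A in Town B" placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Generate 5 fake but realistic customer records as a JSON array. "
    "Fields: name, email, city, signup_date (ISO 8601). "
    "Return only the JSON, no commentary."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption; any chat model works
    messages=[{"role": "user", "content": prompt}],
)

records = json.loads(resp.choices[0].message.content)
for r in records:
    print(r["name"], "-", r["email"])
```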

Small scripts also work out as long as they're small enough to print on a piece of paper.

Writing some sales fluff in a document also works reasonably well.

But those are just niche applications. For something serious it's not really usable so far.

AI is the only chance for the West to beat China without a war. So the billionaires have gone all in because they will lose their fortunes if China wins.

Downvoters, how else are we going to win?

Why are you obsessed with throwing away our future to go to war with China?

Because it is not my decision to make. I assume that those in power will want to stay in power while China is taking over market after market.

There are not many options left for the billionaires to stay billionaires. Will they hand over power voluntarily?

China seems to be open sourcing its AI research. It's US capitalism vs the world.

Yeah, most Chinese companies I see release their entire models openly (even the ones the AI companies themselves use).
Examples of this:
DeepSeek (the most famous of them all)
Alibaba Cloud (Qwen)

It's true with them just as it's been true with the US companies. If the product is free, then you're the product.

Yeah, ChatGPT and Gemini also offer this, but their open models are cut down so normal PCs can self-host them. Gemini's is designed for slower PCs, while ChatGPT's is designed for faster PCs.

That's the usual strategy of commodifying the strategic advantage of the opponent.

What will Europe get if China becomes the hegemon? That will decide if Europe will participate in the development of those models.

They want it to be the next step in controlling and raping the masses. That's why they're shoving it down our throats.

I suggest keeping your mouth closed tightly, and not engaging in any way with it. We should all see these early reports of the most vulnerable amongst us becoming obsessed with their AI relationships as a huge warning--red banners and klaxons sounding. Run! RUN! Run far away and stay there.

If AI was something everyone wanted or needed, it wouldn’t be constantly shoved your face by every product

Counterpoint: yes, it absolutely would be. There would still be competition between models and products, and a need for brand recognition. If you had a product with a feature someone "wanted or needed", you wouldn't advertise that??? AI aside, this is just silly.

Yeah, but they should take that pissing contest out of the UX.

I don't know if the printer analogy is a good one, but I'm down with what you're saying. The rise of GUIs in the early days really did push printing in extreme ways.

You weren't around during the iPhone's launch or the beginnings of the internet. Also, yes, printers did the same thing with inkjets and how everyone needed to print out their digital pictures.

I predate both of those events by multiple decades lol.

Printers were well established even on the Trash-80 I grew up with. The bloatware drivers aren't really what I'm talking about. I suppose Clippy could be considered prior art to the whole "shoving AI in your face" but at the time I was a WordPerfect fanboy.

ChatGPT is quite good and it's not in my face at all, it's just in a bookmark I access when I need it.

Copilot is hot garbage, and it's plastered all over Windows, Edge, and Office.

I haven't tried the Samsung AI, but it won't let me forget it's there.

OP absolutely has a point. The more in your face the AI is, the more garbage it is.

AI companies believe the market will give the best rewards for a winner-take-all strategy.

They believe now is the time to accumulate customers.

Their future financing rounds very likely depend on being able to show growth.

Entrepreneurs, CEOs, investors all know it's not everything it's cracked up to be (yet). They hope another few billion in cash will get it there. And hope you don't notice until they already won the market.

It'd be the opposite: you wouldn't know about it.

LLMs are fucking useful, but there's not yet a good business model. You can switch to any system at any time, so everyone is trying to force-feed you their own version of it to get you hooked. But in the end they're just annoying the hell out of users.

They’re useful, but not anything like to the degree that’s being claimed. It’s being pushed as if it’s going to be able to do everything.

For one example, does generative AI have a use in coding? Absolutely. If you’re a coder who knows what they are doing, it can help you have ideas and it can do some of the tedious stuff. But you still need to know what you’re doing, use it as a tool, and absolutely do not use its code without checking it all. And that’s not how it’s being pushed. It’s being pushed as if you can just type in “pretend it’s 1999 and code me a Doom sequel” and you’ll get a full, working programme out.

Even getting it to do things in chunks seems to be a trial. There’s a video I watched a day or two ago (if you’re interested, I’ll look for a link when I’ve got more time) where a guy tried to get ChatGPT to code for him. He had a specific end goal in mind and asked it to do things step by step. While there were several occasions that he was impressed by what it produced, he kept getting stuck because it would seemingly only fix problems in one area of the code by eliminating another area entirely. He found that most of his time was spent going round in circles trying to ensure that it actually did what he was asking of it. It hallucinated, too, and at one point he had to solve the problem for it by referring to a specific repository, which he said he himself only knew about because he’s got more than a decade’s experience.

That’s the biggest issue - what LLMs are capable of is far, far below what they’re sold as.

When coding (my main job for many years now) I like to use LLMs for brainstorming, reviewing my code for the "quickly visible" errors, etc. Oh, and I found out LLMs are not bad at explaining query plans and suggesting optimizations for SQL queries in PostgreSQL. I feel that the older a technology is (when there's a lot of reference material available), the better LLMs are with those topics.
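
The query-plan part is basically: run EXPLAIN yourself and paste the output into the model. A rough sketch of that workflow (the connection string, query, and model name are all placeholders):

```python
# Sketch: fetch a PostgreSQL query plan and ask an LLM to explain it
# and suggest optimizations.
import psycopg2
from openai import OpenAI

query = "SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC"  # example query

conn = psycopg2.connect("dbname=shop user=readonly")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + query)
    plan = "\n".join(row[0] for row in cur.fetchall())

client = OpenAI()  # assumes OPENAI_API_KEY is set
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any capable chat model
    messages=[{
        "role": "user",
        "content": "Explain this PostgreSQL query plan and suggest indexes or rewrites:\n\n" + plan,
    }],
)
print(resp.choices[0].message.content)
```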

But don't put them to the task of suggesting something for new tech, or anything creative. They lie without blushing. And in the end all you get is a "Good catch, that can't really work" for your wasted time.

I think you need to get a feel for what they can and can't do. In any event, all the shit they are being pushed for - that will "go well". Recently I saw Claude or something being able to edit Excel sheets. Yep. Combine the most untestable tool, guilty of producing tons of false data, with LLMs. WHAT COULD GO WRONG?!?! Users blindly asking the LLM to do stuff in Excel and then just betting their companies on the results...

Microsoft just had a push for CoPilot in Excel. Its own promotional material said that for the tasks it was most suited to it had a success rate of 56%. For other tasks the success rate was 20%.

Imagine relying on that for anything even halfway important.

This is how I feel, especially with companies adding "AI Use" to their performance reviews. If employees found it helpful, they'd use it. Or did you hire complete morons?


I've built several AI tools for my work which do increase productivity. They lean into what AI is actually good at and improve the speed of getting information, like using AI embeddings to build a quick semantic search, and building MCP tools so agents can look up information in our systems quickly. I've also built some AI-based tools to automate very expensive tasks that require a ton of manual data curation and review; they work at the same level as our staff doing it, and they run in 20 minutes. That's a win.
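
As a rough illustration of the embeddings part, this is more or less the shape of a quick semantic search (a minimal sketch; the model name and documents are placeholders, not the actual internal tool):

```python
# Sketch: embed a handful of documents once, then rank them against a query
# by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

docs = [
    "How to reset a user's password in the admin console",
    "Quarterly revenue report template",
    "Escalation process for production outages",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec                  # cosine similarity (vectors are unit length)
    best = np.argsort(scores)[::-1][:top_k]
    return [(docs[i], float(scores[i])) for i in best]

print(search("who do I contact when prod is down?"))
```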

People actually do use these tools because they save a significant amount of time with very little cost, and everyone gets to do the things they're actually good at.


Now I'm being asked to spearhead building out customer facing AI efforts. I knew this day would come, but yeah the board and investors want it. They really want to be able to tell investors and clients that we're an AI forward company.

I've been planning for this and researching/studying, and all the internal tools I've built have been test runs. I'm not going to force AI on users, but I am going to build tools and systems for the ones who do use AI.

Oh man if they started judging my performance by AI use, it would certainly increase but not in the way they want.

Riiight. Of course.

If cars weren't all they were cracked up be they wouldn't be shoving them in your face.

If credit cards weren't all they were cracked up be they wouldn't be shoving them in your face.

If breakfast cereal weren't all they were cracked up be they wouldn't be shoving them in your face.

What a weird of saying you don't know how marketing works.

No, those are specific brands trying to get you to buy their brand. Honda wants you to buy a Honda. Ford wants you to buy a Ford. No one is trying to push cars as a whole on you.

I don't disagree; it's just that my first thought was that that's the industry accused of sabotaging California's public transit rail system to get you to buy more cars. Really hoping they don't pull the same stunt, like removing word check and forcing you to talk to AI.

Funny how all the things you listed aren't "all they were cracked up to be" either.

I disagree regardless.

When I go on a bike ride, I don't constantly have people pulling beside me reminding that I could be driving in a car.

When I use cash, the person taking it does not remind me that I could have used a credit card.

When I open a container of oatmeal, there isn't a little piece of paper that springs out and reminds me I could be eating instantly ready breakfast cereal instead.

You guys have really weird ways of indicating you don't know what marketing is.

I recommend starting with the Wikipedia page and going from there.

Cracked up be they. What a weird of saying. Fuck off clankers

Kinda. We want AI like on Star Trek, but to get there we gotta go through these growing pains. The people who push AI in your face want it to get better — either because they want the altruistic Star Trek AI or because they want it to be profitable. I admit, most are the latter, but a few are the former.

I disagree, because most of them won't give us an option to permanently hide it or disable it... They deny choice because they know most of us would turn it off. They are selling snake oil and they know it.

Let us replace "AI" in that sentence with: soap, toothpaste, toilet paper, deodorant, condoms, cars, supermarkets...

None of those things are shoved in my face everywhere I look, online or offline.

Neither is AI, you're just salty.

Oh yeah you can hardly go outside with being bombarded by adverts for condoms. It's everywhere. Being pushed on us by the news all the time, by our bosses in the workplace. Just CONDOMS CONDOMS CONDOMS. The New Google Pixel, now with Condoms! Every advert, even ones not about condoms, have condoms in it. Every company now describes themselves as condom-driven. Absolutely inescapable!

Yeah, I see them all the time on TV. They're also right behind the counter on 7-Elevens, and I can't get some Tylenol at a pharmacy without seeing them. Are you saying perhaps my own bias has led me to believe advertisement for condoms is more prevalent than it is? Interesting.

Shove your smug little attitude up your arse mate. AI freaks are so weird.

Literally just matching your energy, you overly sensitive whacko.

Not literally everywhere, no, that's obviously hyperbole. But it is common enough to notice and be annoyed by it.

Just like everything else I mentioned. You just don't notice it because you don't have a hate boner for deodorant.

If only AI were as easy to block as users on Lemmy.

That'll teach me.

Sorry friend, but you're both uninformed and delusional. And sadly confident about your wrong opinions.

Sorry friend, but you're both uninformed and delusional. And sadly confident about your wrong opinions.

Fuck you're just useless

Fuck you're just useless

People also use heroin. That doesn't mean it's well liked or actually good.

Shhhh. Lemmy doesn't like it when you look outside their bubble.

Isn't this a stupid attitude?

Wasn't the dot-com bubble exactly that: people shoving the internet in your face and telling you it's a game changer, 24/7?

We are simply at the peak of the initial hype curve of the Gartner hype cycle; the bubble will burst soonish and lots of companies will go bankrupt, and then the real use cases will emerge where it's actually revolutionary.

I don't remember which YouTuber, but one had a good video about it. At some point in the past, all the hype was around drones delivering stuff to your front yard. That turned out to be stupid, yet drones found their niche, e.g. medicine deliveries to remote locations in Africa and AEDs that can fly out to people.

You're both right. The extreme hype means it isn't yet all that useful. But it doesn't mean it won't get there. Once it is there, they won't need to hype it as much.

No, the hype is over-stating how useful it will eventually be.

Out of all the comments in this thread, yours is probably the best thought out. I'll admit I'm very much in line with OP, in that the more someone hypes something up, the less I want to have to do with it. I get increasingly skeptical, and that gets seriously compounded when I see C-suites give nebulous answers on how things will improve with a new invention.

I think it'll find its niche, but right now the fucking thing can barely do math and is, at best, a learned pig. There are really big barriers to making AI actually useful, such as scalability and the energy/water requirements. Until we can get elegant coding and inputs, we're going to struggle.
