In 2018 I was also enthusiastic about OpenAI; I didn't expect them to try to destroy the world. They were literally selling themselves on doing the exact opposite.

It likely was a grift from the start.

No, it used to be a non-profit that got literally taken over by Sam Altman and Microsoft and turned into a money-~~printing~~ burning machine

I forget, but wasn't Anthropic mostly made up of former OpenAI engineers after Altman went off the deep end?

Not that it makes them any better, but I'm pretty sure GPT-3 was the nexus point of the mess we have now, which means they hopped off right after it was internally finished and made their own company.

2018 was a full 2 years before that point, and back then AI was still primarily stuff like OpenCV, PyTorch projects, etc. that were things you could legitimately run on one or two workstation GPUs, or even a cheap tensor core add-on if you didn't want to run on CPU.
It was around the time when OpenAI showcased OpenAI Five. At that time I was still playing Dota 2 and I found it really impressive. Here's a decent summary video.

GPT-2 was released 14 February 2019, so in 2018 they were still actually a cool open-source gaming-AI research foundation.

Remember that OpenAI were the people behind the DotA-playing model, back in 2017, called OpenAI Five.

2018, the before times.

Insane how true this is.

No wonder 1999 feels like a million years ago.

I was there, Gandalf. 1000 years ago. When Neo deleted me.

I love how everyone is so desperate to make Gabe out to be a terrible person.

It's crazy for people to take a stance against him considering how much he's done to protect the hobby from less reputable corps.

God imagine the world we'd live in if he didn't build modern child gambling with his own two hands

Are you mistaking him for the inventor of Pokémon cards, Richard Garfield, or the inventor of baseball cards, John Baseball?

I said 'modern' for a reason, but I guess telling someone they're wrong is more important than reading what they actually said.

Pokemon cards aren't modern? They came out in the US in 1999 and aren't considered obsolete or old AFAIK.

God, you're self-righteous

Things to say when you want to make being a piece of shit into a good thing

He may be a middle man rent seeking on the entire gaming industry, but you can't hate him because of all those great game series he never finished

I can't hate him since he broke Windows' monopoly in gaming.

And I can't hate Elon Musk because he promoted electric cars.

I love it when completely unaccountable industry monopolists make decisions for all of society when I agree with them. They're so cool!

I hate billionaires existing but comparing Gabe to Elon seems pretty disingenuous. Elon has and continues to normalize some pretty horrific things.

So compare them on the axis I explicitly compared them on. I wasn't obtuse with my point.

Oh no, he runs a business. Grow up.

A 'business' that doesn't produce the thing that makes it profit.

You sound like my landlord

You mean a storefront that enables tons of indie devs to be noticed and actually compete with AAA publishers?

Are you asking because you're confused what we're talking about? We're talking about Steam.

Tell that to every business that sells products they don't make in house. Go on, I'll wait.

What would you call someone who has $1B worth of yachts while we're all struggling to eat and pay bills?

I'm a lot less angry at him than the ones who seemingly use their wealth exclusively to make our lives worse. He just has a bunch of money to buy boats. He's the last on the list for me

Yes, but Valve still exploits people. They run Steam sales explicitly to get people to buy more, especially poor people: more games than they otherwise might be able to afford, just because "it's on sale!"

Oh no, they lower prices sometimes. Whatever shall we do except buy, buy, buy!?

Right. Because valve puts games on special just because they are good people. It’s absolutely NOT so people will buy more games than they otherwise would have. Definitely not. Nope

Like Torvalds, Stallman, and Wozniak, Gabe Newell had an idea of how he thought technology should work and has never let up on it; it's a shame they couldn't profit as much as Gabe.

The yacht hobby is a bit much but compared to the standard billionaire hobbies, destroying the planet, rape, and murder, he's pretty mild.

You don't call it a shame when it is a conscious choice:

"Wozniak has discussed his personal disdain for money and accumulating large amounts of wealth. He told Fortune magazine in 2017, "I didn't want to be near money, because it could corrupt your values ... I really didn't want to be in that super 'more than you could ever need' category.""

A billionaire...

insert "they're the same thing" meme

Obligatory reminder that billionaires are not our friends. But also, donating to AI research in 2018 is quite a different matter than if he had done so in recent years. Most people in tech were somewhere between neutral and enthusiastic towards machine learning back then and few foresaw the monster it would become. Doubt he's as enthusiastic nowadays, considering what it did to Valve's hardware ambitions.

OpenAI, back then, was also a very different organization. They were mostly a non-profit, claiming to be a research organization whose goal was to ensure AI benefited all of humanity. Hell, I'd say Whisper, which that version of OpenAI did release, was very positive for humanity. It was when Sam Altman saw big dollar signs in GPT-2 and its successors that things started changing fast.

Very much this. In 2023 there was a falling out between Altman and the board of OpenAI over this, and Altman was kicked out. However, some big investors (Microsoft) made a stink and got it reversed.

And then they faked an employee letter and Lemmy sucked Altman's dick as the board was forced to resign in turn for having principles.

I remember your sins, hive mind.

I think many employees close to Altman also went on strike or threatened to leave. But I think he's bad for the (now for-profit) company. They should've stayed a non-profit.

It wasn't "many employees close to Altman". It was the entire company, including the people who initiated the process of getting him kicked out. The whole thing made absolutely no sense.

Wellllllll, I dunno about this take, seeing as he's still very enthusiastic about it as of less than a year ago, with some very... hype-style statements about it.

https://www.pcgamer.com/software/ai/gabe-newell-says-ai-is-a-significant-technology-transition-on-a-par-with-the-emergence-of-computers-or-the-internet-and-will-be-a-cheat-code-for-people-who-want-to-take-advantage-of-it/

If you can mentally separate the technology from the capitalist orgy around trying to shoehorn LLMs into every possible thing, he's not wrong.

The technology has promise, but the reality of what it can be useful for is completely overshadowed by the hype frenzy declaring the end of all knowledge workers and creatives.

LLMs are significantly better at translation than anything else we've been able to design, for instance. But that's not flashy; it doesn't generate seed funding or lure investors, so it's largely not what people think of when they hear "AI".

Dude, he's just another greedy billionaire. The guy doesn't deserve all the glaze he gets.

Edit: He's also incredibly wrong, like all other AI cultists. LLMs are a useful tool but they're nowhere even close to the level of computers or the Internet.

LLMs are a useful tool but they're nowhere even close to the level of computers or the Internet.

LLMs are not, certainly.

But neural networks ("AI") can do pretty incredible things and the money being poured into LLMs is being spent on AI research (and all of the RAM/graphics cards in the world).

We're only seeing LLMs and image generators because it's what we have the most training data of. The Internet doesn't have hundreds of billions of MRIs or robotic motion plans, so those uses of AI take longer to appear.

But neural networks (“AI”) can do pretty incredible things

Name one.

Valve is trying to use neural networks for their anti-cheat. They want to move it entirely to the server side and rely less on the client, to make Linux gaming an industry standard.

Instead of spying on your PC to see if you're using something you're not supposed to, they want to examine your behavior instead and act based upon that. I think this is a good use case.

Fixing my PC's rebooting issue, for one. It diagnosed the error logs that I gave it, and suggested to use my motherboard's Load-Line Calibration feature to prevent the shutdowns. Before, I could get over a dozen reboots in a day.

For me, not having sudden reboots gives me a great deal of mental peace.

They are very good pattern-matching machines. Most of our life scientists are out there finding patterns, like which antibodies pair with which cellular components, to do things like predict cancer before it is a problem.

They are also very good at determining locations for clinical trials based on criteria found in previous trials that would be near impossible to do for a human.

Predict protein structures better than any other methods.

Here we provide the first computational method that can regularly predict protein structures with atomic accuracy even in cases in which no similar structure is known. We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14), demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods.

I love it when this happens in posts 😁

The fun part about those other uses, like MRIs, is that it requires the work of skilled professionals and then apparently weakens the skill of those professionals, which sure sounds like a nasty downward spiral.

Using AI Made Doctors Worse at Spotting Cancer Without Assistance

This is effectively pitching potential snake oil to the uninformed, while ignoring every real-life issue in the medical industry and side effects it would cause.

Sure, tools make people worse at doing the thing without tools.

Using AutoCAD made draftsmen worse at drafting, that doesn't matter because there is no occasion where you need to draft complex plans without a computer. If AI diagnosis makes doctors worse at reading MRIs... that would only matter in a world where they're reading MRIs but also don't have access to a computer. There is no hospital that has a functional MRI machine that wouldn't be able to access these tools.

The important thing is that the doctors, when using these AI tools, are measurably more effective. The result is the thing that matters for public health, not any individual's ability to operate without their tools.

https://newsnetwork.mayoclinic.org/discussion/mayo-clinic-ai-detects-pancreatic-cancer-up-to-3-years-before-diagnosis-in-landmark-validation-study/

Researchers used the AI model to analyze nearly 2,000 CT scans, including scans from patients later diagnosed with pancreatic cancer — all originally interpreted as normal. The system, called the Radiomics-based Early Detection Model (REDMOD), identified 73% of those prediagnostic cancers at a median of about 16 months before diagnosis — nearly double the detection rate of specialists reviewing the same scans without AI assistance.

Doubling the early detection rate of one of the most deadly types of cancers will result in many more lives being saved.

That's not AI. Those algorithms were pattern matching developed at Carnegie Mellon 15 years ago; now they just want to call it AI.

Radiologists and pathologists have always had a massive error rate because of human cognitive bias.

Are you confident that the American healthcare system wouldn't declare experts to be a redundancy and simply replace them with the AI? Not only would that fit with their well-known profit motive, it is explicitly what AI companies claim they want to do.

I would love to live in a utopia where AI can be used ethically, but it is dangerous to promote the assumption that it magically just will be.

There's a balance to be struck here. Relying on automation tooling wholesale will always make you worse. There's a reason that, even though we have calculators, it's important to know the fundamental maths that would let you perform those same calculations yourself. For the majority of people, it's probably not critical, but if you need to validate that information, you certainly want to be able to understand how the original conclusion was drawn.

The same goes for software engineering, where AI is seeing heavy use. People asking it to build whole programs receive bug-riddled and inefficient code, but software engineers who use it for rapid prototyping, or to reduce the work of rewriting common functions in different projects, are going to be more effective because they understand what the resulting structure should look like.

AI is not a replacement for the human, and if there's a future for it, it will be assistive to the fundamentals and knowledge human specialists already possess. But that requires the continued education and development of skills within the industries these tools are deployed in.

Code generation and medical result generation are similar enough to compare (I think), but to expound on the point I was making to the other person I replied to: There is far less medical data online than there is code. We basically have every code textbook online. We have tons of examples to create scaffolds from. We don't have so much medical data, and the people promoting the tools to the medical field tend to be the tech bros who don't mention the caveats of what their products can do.

In other words, if AI could be good in medicine, it needs to be rolled out by none of the people who are currently pushing for it, and the caveats need to be explained in a way that none of them do. (It's not objective, it will not create new science like OpenAI CEO Sam Altman says, etc.) If AI boosters managed to convince the medical field of the same things they have already convinced politicians and journalists of, I think the result would be rapid quality degradation of treatment, deskilling, and lots of unnecessary death. And boosters who promote potential benefits without acknowledging that are being very reckless.

Nah, sorry, if Gabe looked at the LLM mess of the last 5+ years and is still pumping it as 'ermagerd this is technology that rivals the importance of the internet, or computers themselves' he is cooked on marketing hype.

It's still crap.

Its most promising commercial application in paid models (coding) still has it writing code slower than professional coders, when actually measured in studies.

The only goals it's hit are making a few jerks more wealthy, moving that wealth-inequality needle further towards the billionaires, and setting us up for the next global financial crisis that we'll all be bailing them out on, suffering decades-long global recessions through.

I reckon it'll hit in 2027; that's looking like when the money guys will finally be completely out of wiggle room and there will be no more cash for the cash fire.

Right, he might be a little further down, but he’s absolutely still on the list. There are no good billionaires.

Obligatory reminder that billionaires are not our friends.

Why does this even come up?

Because lots of people worship Gabe despite the fact that he is ungodly wealthy.

I don't think anyone thinks that. What I mean is, it's obvious they are not our friends.

Those you say "worship Gabe" — I don't think they do. For example, I am a fan of what Valve as a company does. Gabe is just the manifestation and voice we have, so we talk about Gabe as shorthand for the whole company. I do not think there is any "worship" or cult involved. Often it's just meme replies for the sake of jokes that look like worship.

Speaking for myself at least, I like that their goals mostly align with mine, relatively speaking across gaming companies. I wouldn't call myself a worshipper, but it's the only gaming company I want to spend my money on. And it's the only company that supports what I value (open source, Linux, PC, the way a lot of things are handled in Steam). Just speaking for myself here.

I was of course using the word "worship" in a non literal way. Let me rephrase to be more literal:

It comes up because there are many people who give Gabe a pass on being a billionaire because it is convenient for them. The choice between Gabe being a billionaire and Valve doing its awesome work for open source is a false choice and a non sequitur. Gabe should not get a pass for his downright unethical amount of wealth just because he is the CEO of a company you like. Yet he very often does in gaming communities full of people who are, otherwise, in favour of eating the rich. For clarity, eating the rich is also not used literally.

You can appreciate the things Valve does and condemn Gabe for hoarding his vast wealth at the same time.

Lol, I think we had this discussion before. Nice to meet you again. :p

I mean, I understand this position of yours. And yes, there can't be rich people without poor people, so in that sense I agree being rich is evil by definition. But there is a difference in how you get rich: either by exploiting the weak or those in need, or by creating good products people WANT to spend money on willingly, without being exploited. They can get rich this way, which is not really unethical to me. It's a bit of a paradox with this (my) argumentation.

I don't think that Gabe is an evil person, or soulless like other CEOs, especially because Gabe / Valve makes money by creating good products on a free and open market. Other CEOs make money by selling their soul and users to investors (mind you, Valve and Gabe don't have investors).

However, there is something I hate Valve (and Gabe) for, actually, and that is having lootboxes AND an item market in Steam and their games. If anything, this is the most evil and exploitative thing Valve (and therefore Gabe) does.

You say that Gabe has earned his wealth ethically. In the next paragraph you defeat your own stance by providing an example of how he earned it unethically. We can agree on this point.

I would further say that no one can earn a billion dollar net worth ethically. No one, not even Gabe. Hence, to the root of the conversation, why this comes up.

I don't think the majority of his money comes from those exceptions. Without the lootboxes and the item market, Valve (and Gaben) would probably still make most of the money they do right now. Just because I don't like that part does not defeat my previous argument. My point is, the item market and lootboxes in some of their games are not core to their strategy, and their business does not stand on those legs.

Or is your argument that Gaben is a bad person just because of these two points, and that everyone who hates him hates him for that? Are these the central reasons you call him evil? I don't think so. That's not the core issue. Your core issue is that he is rich, so it does not matter how he earns his money. Reasoning about how he earned it is therefore meaningless to discuss at this point; you're just trying to find a justification to point to, after I pointed that out. So I don't know how rational it is to hate a person just for being rich (which is the main issue here, because you say nobody can get rich ethically).

Because a lot of people equate "some are less harmful than others" with "I fucking love this guy and think he's a harmless saint!"

It’s 2026, open a window.

I'm on Linux, I do not use Windows.

They're still called windows in Linux.

Those are applications, not the operating system. (Edit: I mean yes, you are right. I just desperately try to dodge it anyway.)

Just one of the reasons Windows should have had its trademark removed.

You forgot to say the distro, but it’s ok I know it’s Arch.

I don't know if I am allowed to say that, because I use EndeavourOS which is almost exactly Arch. Maybe I should start doing that with an asterisk attached to it, as I use Arch, BTW*.

^*EndeavourOS

thx, bro

ha, gotta give you that one :)

(obligatory - same here)

We're acting like people in the art community weren't hyped up over AI until it started generating images. Before ChatGPT, it was all about automating coding/IT and other jobs that aren't considered art. Back then it was all about how everyone could pursue their passions. The only people not excited were all the transportation employees and factory workers who had been told by the general public how excited everyone was to replace them.

As a social scientist, pre-ChatGPT NLP was like opening a whole new world of possibilities. We could finally analyze, at scale, one of the richest sources of behavioural data in an empirical, statistically driven manner.

Now, even as I do research with NLP to continue these goals, I can't bring myself to ever defend these tools. If they disappeared tomorrow, we'd lose a tool, but we'd prevent so much undue suffering.

The writing was on the wall for years. I remember memes about Altman in machine learning forums/chatrooms circa 2020, and especially 2021.

Nothing's changed. Anyone in the space who actually looked at what he was doing, knew. Yet the bulk of the public (and investors) lapped the Tech Bro stuff up.

Aaron Swartz said Altman was a sociopath years before AI was a gleam in anyone's eye.

The technologies with the worst potential outcomes will always be pioneered by people with no ethical or moral hangups getting in the way.

Which unfortunately are the same techs that will be elevated by our present economic structure, precisely because those traits are what enable them to make (or grift) a shitload of money.

see:

Leaded Fuel and CFCs - the same fuckin guy!? goddamn hope there is a hell

I'll save you a seat.

gee thanks, what the fuck did I do to you?

I mean, I probably would have invested in AI prior to seeing LLMs in action, too, hoping I was funding the cool kind of AI, not this lame shit.

Look, there is one thing it does incredibly well. It makes a fantastic spell checker.

I've also found a niche use if you are bilingual or polyglot:

Write a paper, letter, etc. Paste it in and ask ChatGPT to translate it into another language you know. Then translate it yourself back into your target language. All of the phrasing and word choice will be yours, and consistent, since it is done in one go, while your original paper may have been written in spurts over weeks.

Hmm. I don't know enough to comment. While it sounds likely, I have heard complaints about translation, such as unexpected shifts in tone, etc.

I’ve done it a couple times and found that it worked well. I’d never use it as a translator for something official or formal but I have used it to help me translate specific words or phrases when I was unsure.

At that time it was still more of a research project than an "it's going to take over everything" hype and FUD machine.

His opinions on AI today seem more enthusiastic than I would be, but well clear of the delusional level of AI-boosters.

Was this article commissioned by Tim Sweeney?

Before OpenAI about-faced on being open?

Back then they were still deep into research and the "Open" part of their name actually meant something. I don't like much about Musk, but I feel it's true that they deceived people who supported their initial mission, just to go private when the market went haywire for AI. I feel like shedding their non-profit status shouldn't have been an option, as so many people donated to them in good faith.

Well aim FPS bro.
