https://zomglol.wtf/@jamie/116059523957674208

https://www.congress.gov/crs_external_products/LSB/PDF/LSB10922/LSB10922.8.pdf

As it should. All the idiots calling themselves programmers because they tell a crappy chatbot what to write, based on stolen knowledge. What warms my heart a little is the fact that I poisoned everything I ever wrote on StackOverflow just enough to screw with AI slopbots. I hope I contributed my grain of sand into making this shit a little worse.

Do it in a way that a human can understand but AI fails. I remember my early days, and you guys were my MVPs helping me figure shit out.

Most "humans" don't understand reality. So you're postulative challenge invention isn't going find a break you seek to divine. Few exist. I'm yet to find many that can even recognize the notion that this language isn't made to mean what think you're attempting to finagle it into.

Evil Money Right Wrong Need...

Yeah... I could go on and on, but there are five sticks humans do not cognate; the public consent about the meaning Will Never be real. The closest you'll find to any such is imagination, and the only purpose there is to help the delirious learn to cognate the difference and see reality for what it may be.

Good fucking luck. Half the meat zappers here think I am an AI because I break the notion of consent to any notion of a cohesive language. I won't iterate that further because I've already spelt out why.

That's not what that research document says. Pretty early on it talks about rote mechanical processes with no human input. By the logic they employ there's no difference between LLM code and a photographer using Photoshop.

That sounds like complete bullshit to me. Even if the logic is sound, which I seriously doubt, if you use someone's code and you claim their license isn't valid because some part of the codebase is AI generated, I'm pretty sure you'll have to prove that. Good luck.

If there was an actual civil suit you'd probably be able to subpoena people for that information, and the standard is only more likely than not. I have no idea if the general idea is bullshit, though.

IANAL

You forgot the heart

I ♥️ ANAL

Would that be North African Lawyer, or North American Lawyer?

In any case, we're splitting the cheque. /s

By that same logic LLMs themselves (by now some AI bro had to vibe code something there) & their trained datapoints (which were on stolen data anyway) should be public domain.

What revolutionary force can legislate and enforce this?? Pls!?

By that same logic LLMs themselves (by now some AI bro had to vibe code something there)

I'm guessing LLMs are still really really bad at that kind of programming. The packaging of the LLM, sure.

& their trained datapoints

I'm guessing for legal purposes the weights were generated by the human-made training algorithm. I have no idea if that's copyrightable under US law. The standard approach seems to be to keep them a trade secret and pretend there's no espionage, though.

The packaging of the LLM, sure.

Yes, totally, but OP says a small bit affects "possibly the whole project", so I wanted to point out that probably includes AIs, Windows, etc. too.

I had a similar thought. If LLMs and image models do not violate copyright, they could be used to copyright-wash everything.

Just train a model on source code of the company you work for or the copyright protected material you have access to, release that model publicly and then let a friend use it to reproduce the secret, copyright protected work.

btw this is actually happening: AI trained on copyrighted material is repeating similar or sometimes verbatim copies, but license-free :D

This whole post has a strong 'Sovereign Citizen' vibe.

I do not give Facebook or any entities associated with Facebook permission to use my pictures, information, messages, or posts, both past and future.

The Windows FOSS part, sure, but unenforceable copyright seems quite possible, though probably not court-tested. I mean, AI basically ignored copyright to train in the first place, and there is precedent for animals not getting copyright for taking pictures.

If it's not court tested, I'm guessing we can assume a legal theory that breaks all software licensing will not hold up.

Like, maybe the code snippets that are AI-made themselves can be stolen, but not different parts of the project.

Is Windows FOSS now?

Ew, no, thank you, I don't want it.

Didn't sources leak multiple times

Aren't you all forgetting the core meaning of open source? The source code is not openly accessible, thus it can't be FOSS or even OSS

This just means microslop can't enforce their licenses, making it legal to pirate that shit

It's just the code that's not under copyright, so if someone leaked it you could legally copy and distribute any parts which are AI generated but it wouldn't invalidate copyright on the official binaries.

If all the code were AI generated (or enough of it to be able to fill in the blanks), you might be able to make a case that it's legal to build and distribute binaries, but why would you bother distributing that slop?

Even if it were leaked, it would still likely be very difficult to prove that any one component was machine generated from a system trained on publicly accessible code.

https://reuse.software/faq/#uncopyrightable

The REUSE specification recommends claiming copyright even if it's machine generated. Is this incorrect information?

EDIT: Also, how is copyrighting code from an AI different than copyrighting an output from a compiler?

I believe it was a product of the earlier conflict between copyright owners and AIs on the training side. The compromise was that they could train on copyright data but lose any copyright protections on the output of the AI.

How do you prove some codebase was AI generated?

This might be true, but it is practically unenforceable.

Agentic IDEs like Cursor track usage and how much of the code is LLM vs human generated.

Which probably means it tracks every single keystroke inside it. Which rightfully looks like a privacy and/or corporate code ownership nightmare.

But hey, at least our corporate overlords are happy to see the trend go up. The fact that we tech people were all very unsubtly threatened into forced agentic IDE usage, despite vocal concerns about code quality drops, productivity losses, and increasing our dependence on US tech (especially openly nazi tech), says it all.

Agentic IDEs like Cursor track usage and how much of the code is LLM vs human generated.

For your code, sure. How do you know someone else's code is LLM generated?

Because it's a surveillance state baby. Everything is uploaded to a central server so our corporate overlords can monitor our usage.

I think expert software developers can quickly address AI code

Public domain ≠ FOSS

You willing to take on Microsoft lawyers to find out?

How the hell did he arrive at the conclusion that there was some sort of one-drop rule for non-protected works?

Just because the registration is blocked if you don't specify which part is the result of human creativity, doesn't mean the copyright on the part that is the result of human creativity is forfeit.

Copyright exists even before registration; registration just makes it easier to enforce. And nobody says you can't just properly refile for registration of the part that is the result of human creativity.

Yeah, a lot of copyright law in the US is extremely forgiving towards creators making mistakes. For example, you can only file for damages after you register the copyright, but you can register after the damages. So, like, if I made a book and someone stole it and started selling copies, I could register for a copyright afterwards. Which honestly is for the best. Everything you make inherently has copyright. This comment, once I click send, will be copyrighted. It would just senselessly create extra work for the government and small creators if everything needed to be registered to get the protections.

Edit: As an example of this, this is why many websites in their terms of use have something like "you give us the right to display your work" because, in some sense, they don't have the right to do that unless you give them the right. Because you have a copyright on it. Displaying work over the web is a form of distribution.

Is this how it works? I would be shocked if this was actually how it works.

Nah, laws being like they are, the copyright probably belongs to whatever cartel owns the bot you used to perpetrate the code, because fuck human people, that's why; laws are for corporations' benefit, not for yours.

Something something "Those the law binds but does not protect and those the law protects but does not bind."

That's not even remotely true....

The law is very clear that non-human generated content cannot hold copyright.

That monkey that took a picture of itself is a famous example.

But yes, the OP is missing some context. If a human was involved, say in editing the code, then that edited code can be subject to copyright. The unedited code likely cannot.

Human written code cannot be stripped of copyright protection regardless of how much AI garbage you shove in.

Still, all of this is meaningless until a few court cases happen.

So wait, if my start up does a human written "Hello World" and the rest is piled on AI slop it can't be stripped of copyright? Or is "Hello World" too generic to hold a copyright at all?

Granted, as you said this all has to be defined and tested in court, I'm just trying to understand where the line as you see it is.

"Hello World" is prior art.

https://www.reinhartlaw.com/news-insights/only-humans-can-be-authors-of-copyrightable-works

https://www.copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf

A human must be involved in the creation. A human can combine non-human created things to make something new, but the human must be involved, and the non-human created elements likely lack protection themselves.

People will believe anything if the icon on the tweet looks authoritative and the grammar is sound.

Counterpoint: how do you even prove that any part of the code was AI generated?

Also, I made a script years ago that algorithmically generates Python code from user input. Is it now considered AI-generated too?

OP is obviously ignorant of how much tooling has already helped write boilerplate code.

Besides AI code is actually one of the things that’s harder to detect, compared to prose.

And all that said, AI is doing an amazing job writing a lot of the boilerplate TDD tests etc. To pretend otherwise is to ignore facts.

AI can actually write great code, but it needs an incredible amount of tests wrapped around it and a strict architecture that it's forced to stick to. Yes, it's far too happy sprinkling magic constants and repeating code, so it needs a considerable amount of support to clean that up … but it's still vastly faster to write good code with an AI held on a short leash than it is to write good code by hand.

Computer output cannot be copyrighted, don't focus on it being "AI". It's not quite so simple, there's some nuance about how much human input is required. We'll likely see something about that at some point in court. The frustrating thing is that a lot of this boils down to just speculation until it goes to court.

I made a script years ago that algorithmically generates Python code from user input. Is it now considered AI-generated too?

No, because you created the generation algorithm. Any code it generates is yours.

Not how I understand it, but I'm not a lawyer. The user who uses the script to generate the code can copyright the output, and oop can copyright their script (and the output they themselves generate). If it worked like you said, it would be trivial to write a script that generates all possible code by enumerating possible programs; then, because the script will eventually generate your code, it's already copyrighted. This appears absurd to me.
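To make that reductio concrete, here's a minimal sketch (purely illustrative; the alphabet choice and names are mine) of a generator that enumerates every finite string of printable characters in order of length. Run long enough, it would eventually emit any source file you care to name, and nobody would argue that gives its author copyright over all of them:

```python
# Purely illustrative sketch: enumerate every finite string of printable
# characters in order of length. Given unbounded time this would eventually
# emit any program text, which is why "the generator's author owns all
# output" leads to an absurd conclusion.
from itertools import count, product
import string

def all_programs():
    for length in count(1):
        for chars in product(string.printable, repeat=length):
            yield "".join(chars)

# Peek at the first few "programs" it produces.
for _, program in zip(range(5), all_programs()):
    print(repr(program))
```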

Relevant: https://www.vice.com/en/article/musicians-algorithmically-generate-every-possible-melody-release-them-to-public-domain/

If the script copies chunks of code under the copyright of the original script writer, I typically see that the original owner keeps copyright of those chunks and usually licenses them in some way to the user. But the code from the user-input part is still copyrightable by the user. And it's that last part that is most interesting for the copyright of AI works. I'm curious how the law will settle on that.

I'm open to counterarguments.

While nobody created neural nets and back propagation

Guess you can't really prove that, unless you leave comments like "generated by Claude" in it with timestamp and whatnot 😁 Or one can prove that you are unable to get to that result yourself.

So nonsense, yes.

Or one can prove that you are unable to get to that result yourself.

Oh shit… I’ve got terabytes of code I’ve written over the years that I’d be hard-pressed to even begin to understand today. The other day I discovered a folder full of old C++ libraries I wrote 20+ years ago, and I honestly don’t remember ever coding in C++.

There is absolutely no way you wrote terabytes of code lmao.

True enough, and I expected to get checked on that.

Regardless… along with the archives, assets and versioned duplicates, my old projects dating back to the 90s somehow now fill multiple TB of old hard drives that I continue to pack-rat away in my office. Useless and pointless to keep, but every piece was once a priority for someone.

Cursor, an AI/agentic-first IDE, is doing this with a blame-style method. Each line, as it's modified or added, DOES show a history of AI versus each human contributor.

So, not nonsense in principle, but in practice there's no real enforcement to turn the feature on.
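For the curious, line-level attribution doesn't need a proprietary IDE at all. Here's a rough sketch (not Cursor's actual mechanism; the bot author name is a made-up convention) that counts AI vs. human lines in a file, assuming machine-generated commits are made under a dedicated author identity:

```python
# Hypothetical sketch: count lines per provenance using `git blame`,
# assuming AI-generated commits were authored under a dedicated bot identity.
import subprocess
from collections import Counter

AI_AUTHORS = {"ai-agent"}  # made-up author name used for machine commits

def attribution(path: str) -> Counter:
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in out.splitlines():
        if line.startswith("author "):
            author = line[len("author "):]
            counts["ai" if author in AI_AUTHORS else "human"] += 1
    return counts

print(attribution("main.py"))  # e.g. Counter({'human': 120, 'ai': 45})
```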

Why would you ever want this?

If you pushed the bug that took down production - they aren't gonna whataboutism the AI generated it. They're still going to fire you.

Sorry, but as another reply: pushing bugs to production doesn't immediately equate to firing. Bug tickets are common and likely addressing issues in production.

Hence the "took down production"

It makes little difference IMHO. If you crash the car, you can’t escape liability blaming self driving.

Likewise, if you commit it, you own it, however it’s generated.

It's mainly for developers to follow decisions made over many iterations of files in a code base. A CTO might crawl the gitblame...but it's usually us crunchy devs in the trenches getting by.

Uh, yes, that's what they call a generative ai

windows would be OSS, not FOSS.

if you can't enforce copyright, how do you stop others from giving it away for free and editing it, making it foss..?

Patents, trademarks and ToS.

This is why CC0 should not be used for code. Its public license fallback explicitly does not give patent rights. Compare that to MIT which implicitly does by saying you can use the software however you want. CC0 literally has this clause in the public license fallback.

No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.

Oh darn, our CEO told us to use LLMs to write all this code, and now the good parts might be used for something that helps people. Not our copyrights!

To the CEO, "helps people" means "spreadsheet line goes up".

how can you tell if it's AI generated? you can't

The same way you tell if it's copy & pasted from Stackoverflow or some other search result!

that's not my experience, it codes in your style if you give it the correct pointers, examples, and so on

1- Code it in Fortran 77.

2- DO NOT use DO or WHILE or any other non-compliant F77 constructs. Instead, use GOTO.

3- Make the spaghetti so spaghetti that it curls in on itself.

4- Make sure to use COMMON blocks everywhere! Not only for efficiency purposes but also to hold all that spaghetti in a tiny Schrödinger's cat box.

5- The last step is to do all that just to write "Hello World!"

This means copyright notices and even licenses folks are putting on their vibe-coded GitHub repos are unenforceable. The AI-generated code, and possibly the whole project, becomes public domain.

I license my vibe-coded projects with the MIT license, so it's working either way.

That's terrible news. There's no way I want my code to be open source. Then other people would see just how much spaghetti you can have in a codebase and still have it run.

I think anyone forced to use Windows 11 in 2026 already knows that. Although there the term "run" is stretched to the limit.

As much as I wish this was true, I don't really think it is.

It's just unsettled law, and the link is basically an opinion piece. But guess who wins major legal battles like this - yep, the big corps. There's only one way this is going to go for AI-generated code.

Worst case is that it's the owner of the agent that receives the copyright, so all vibe-coded stuff outside local AI will be claimed by the big corpos.

I actually think that's the best case because it would kill enterprise adoption of AI overnight. All the corps with in-house AI keep using and pushing it, but every small to medium business that isn't running AI locally will throw it out like yesterday's trash. OpenAI's stock price will soar and then plummet.

The big AI companies would just come out with a business subscription that explicitly gives you copyright.

Damn

Unlikely since, as you say, it would deter business. OpenAI already assigns rights of output to the end user according to their licensing and terms.

It is true that AI work (and anything derived from it that isn't significantly transformative) is public domain. That said, the copyright of code that is a mix of AI and human is much more legally grey.

In other kinds of work, where it can be more separated, individual elements may have different copyright. For example, a comic was made using AI-generated images. It was ruled that all the images were thus public domain. Despite that, the text and the layout of the comic were human-made, and so the copyright to those was owned by the author. Code, obviously, can't be so easily divided up, and it will be much harder to define what is transformative or not. As such, it's a legal grey area that will probably depend on a case-by-case basis.

So, you're telling me I can copy-paste 100% of some of the AI slop books on Amazon and resell them as mine? Brb, gonna make a shit site, an absolute diarrhea.

Yeah, it's like products that include FOSS in them, only have to release the FOSS stuff, not their proprietary. (Was kind of cute to find the whole GNU license buried in the menus of my old TiVo...)

The US copyright office confirms this. https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf

The use of AI tools to assist rather than stand in for human creativity does not affect the availability of copyright protection for the output. Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material.

I'm not sure where you get that from, I'm pretty sure vibe coding still complies with these indications

"AI-generated" works can be copyrighted. However, on the condition that the AI-generated elements are explicitly mentioned in the "Excluded Material" field. In other words, the parts generated by AI are not protected, only the parts that are expressed by human creativity. Courts in the U.S have already rejected registration for many AI works because of that. Regardless, it's still a contentious matter.

P.S. I am completely opposed to (generative) AI as well as the copyright system. I'm just stating my findings researching the law and court cases.

As mentioned elsewhere in this thread it won't matter either way unless tested in court and that will never happen for most companies.

Did you even read your own report? It says that AI works are copyrightable in certain circumstances, not that they make a whole project public:

Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements.

"AI-generated" works can be copyrighted. However, on the condition that the AI-generated elements are explicitly mentioned in the "Excluded Material" field. In other words, the parts generated by AI are not protected, only the parts that are expressed by human creativity. Courts in the U.S have already rejected registration for many AI works because of that.

P.S. I am completely opposed to (generative) AI as well as the copyright system. I'm just stating my findings researching the law and court cases.

Even if it were, it would be for you or me, but not for Microsoft, Apple, Google, or Amazon.

If the AI generated code is recognisably close to the code the AI has been trained with, the copyright belongs to the creator of that code.

Since AIs are trained on copyleft code, either every output of AI code generators is copyleft, or none of it is legal to use at all.

I may be wrong, but I think current legal understanding doesn't support this:

Under U.S. law, to prove that an AI output infringes a copyright, a plaintiff must show the copyrighted work was "actually copied", meaning that the AI generates output which is "substantially similar" to their work, and that the AI had access to their work.[4]

Wikipedia – AI and copyright

I've found a similar formulation in an official German document before posting my above comment. Essentially, it doesn't matter if you've ~~"stolen"~~ copied somebody else's code yourself and used it in your work, or did so by using an AI.

The part that is untrue is the "public domain" part. If you generate code then you don't own it because the actual human work that went into creating it was done by the owner of the AI Model and whatever they trained on.

Iirc it's even funnier: the relevant case law comes from Naruto v Slater. A case about a monkey taking a selfie and a photographer failing to acquire copyright of it (https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute).

The copyright belonged to whoever shot the selfie, but because it was the monkey, and animals aren't juristic entities, it could not hold copyright. Therefore, as it stands and as new case law outlines, AIs are compared to monkeys, in that the copyright would fall onto them, but they're not juristic entities either, and therefore the copyright just vanishes and no one can claim it.

The Wikipedia page suggests current cases on generative AI directly build on this.

It was an especially interesting case because there was a question of whether the photographer lied about who actually took the picture. So he could either claim the monkey took it and lose the copyright, or claim he took it and have it lose all value.

See, that's kind of what I'm talking about. The monkey who pressed the buttons to make the AI generate the code isn't the computer, and it isn't the user; it's the employees at the AI company. My advice is that until laws are properly in place, we shouldn't use AI for any generative industry.

The AI company didn't do shit. They stole apples from someone else's tree and threw them in a blender. They didn't make the apples, nor did they buy them, so they don't legally own the juice.

While I agree with the general idea, please don't call piracy "stealing". It's not stealing, whether you do it or some giant corpo.

Don't call copyright infringement "piracy," either.

While I agree², their use of "steal" makes sense in the analogy because the apple doesn't belong to the "thief"; besides, you can't pirate an apple

Evidently you can!

“Apple piracy is a problem,” said Lynnell Brandt, chief executive officer of Proprietary Variety Management, the Yakima firm hired to handle the Cosmic Crisp rollout. “Some are propagating trees without permission. Some are stolen in the middle of the night.”

If the AI produces verbatim the licensed works of others then the others own it.

If the AI took individual unlicensed elements and pieced them together then the AI Company owns it.

In any and every case, neither the User nor the Public Domain owns it. Moral of the story is: never use AI for anything.

The AI company stole other people's code, threw it into a blender, and is selling the output. They didn't do any real work, and they don't own the materials. They have no legal claim over the result. You do not own a car you made from stolen parts, no matter how many cars you stole from.

Stop trying to imply your buddies at AI companies have value.

We appear to be talking in circles.

I'm literally sitting here telling people it isn't safe to use AI Code, you're doing the opposite, and you're accusing me of being buddies with the Slop Companies?

You're just making shit up. The US Court of Appeals for the DC Circuit has affirmed that AI-generated work is in the public domain. Put up or shut up.


Edit: Additionally, the US Copyright Office writes:

As the agency overseeing the copyright registration system, the [Copyright] Office has extensive experience in evaluating works submitted for registration that contain human authorship combined with uncopyrightable material, including material generated by or with the assistance of technology.

Is letting an LSP autocomplete for you "assistance of technology"?

How does this work in practice? Someone would have to prove that it's AI generated, which isn't straightforward.

Also, I'm not clear this protects the release of code that's a trade secret or under NDA.

So while the court ruled it's public domain, could it still be prevented from release? Like, a Microsoft employee couldn't just dump sections of the AI code to the internet, I imagine.

https://www.upcounsel.com/patents-trademarks-copyrights-and-trade-secrets

Competitive advantage: Trade secrets can cover information that would not qualify for patents or copyright but still has economic value.

I would imagine dumping Microsoft code to the internet would get you sued under the NDA.

The answer is that it's messy and that I'm not qualified to say where the line is (nor, I think, is anyone yet). The generated parts are not copyrightable, but you can still have a valid copyright by bringing together things that aren't individually copyrightable. For example, if I make a manga where Snow White fights Steamboat Willie, I've taken two public domain elements and used them to create a copyrightable work.

So it's not like the usage of AI inherently makes a project uncopyrightable unless the entire thing or most of it was just spat out of a machine. Where's the line on this? Nobody (definitely not me, but probably nobody) really knows.

As for courts ever finding out, how this affects trade secret policy... Dunno? I'm sure a Microsoft employee couldn't release it publicly, because as you said, it'd probably violate an NDA. If there were some civil case, the source may come out during discovery and could maybe be analysed programmatically or by an expert. You would probably subpoena the employee(s) who wrote the software and ask them to testify. This is just spitballing, though, over something that's probably inconsequential, because the end product is prooooobably still copyrightable.

This kind of reminds me of the blurry line we have in FOSS, where everyone retains the copyright to their individual work. But if push comes to shove, how much does there need to be for it to be copyrightable? Where does it stop being a boilerplate for loop and start being creative expression?

It begins by asking “whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” [23]

In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author's “own original mental conception, to which [the author] gave visible form.” [24]

The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work.[25] This is necessarily a case-by-case inquiry.

That's the rest of what you posted. I guess you just didn't read it, right? Even if it comes right after and is part of the same paragraph. What a joke.

I clarified this a bit in a follow-up comment, but my first comment was simplifying for the sake of countering:

[it's not in the public domain] because the actual human work that went into creating it was done by the owner of the AI Model and whatever they trained on.

Their claim that the copyright for AI-generated works belongs to the model creator and the authors of the training material – and is never in the public domain – is patent, easily disprovable nonsense.

Yes, I understand it's more nuanced than what I said. No, it's not nuanced in their favor. No, I'm not diving into that with a pathological liar (see their other comments) when it's immaterial to my rebuttal of their bullshit claim. I guess you just didn't read the claim I was addressing?

Technology is an extremely vague word in this context. If the US Court of Appeals for the DC Circuit has affirmed that then I haven't heard of it, it's not posted here, and most importantly: such rules are not currently enshrined in law.

Technology is an extremely vague word in this context.

Dude, just shut the fuck up and own up to what you were doing. You're acting like a snivelly little child. I've seen you around a couple times before, and it's like all you exist to do on Lemmy is make up and spread misinformation.

The fact that you're this upset about my warning about the use of AI really shows which side you're on, techbro.

No it's that you're trying to walk back a provably false claim and then deflect the claims by pretending the people calling you out are doing so because they like AI instead of, you know, valuing the truth.

I walk back no claims. The AI Companies have more claim on ownership of the output than the public. Don't use Slop Code, it's not safe.

I think, to punish Micro$lop for its collaboration with fascists and its monopolistic behavior, the whole Windows codebase should be made public domain.

Probably wouldn't work, at this point I doubt they're capable of feeling shame anymore, if they ever were.

Haven't we been exposed to enough horror?

Does the public really want more garbage than it already has?

Do you not want all the hardware support Linux is missing to suddenly become available?

We don't have to use it for anything other than compatibility.

Personally I'd very much like for ACPI to be un-fucked

The kernel and NTFS seem decent from what I heard. Or at least they were (the kernel, that is; no guessing what they've vibe-coded into it by now).

About NTFS: it was actually pretty good for its time (the 90s), but the tooling makes no use of some of its better features and abuses some others close to breaking point. Literally pearls before swine.

There were decent (at least, worked for me) NTFS drivers for Linux like 20 years ago. (Back when I felt the need to dual boot)

Windows engineers have probably been copying snippets from StackOverflow for decades, which may have been copied from the kernel or some other copyleft product.

How many Windows engineers have you met?

99% of the ones I met and worked with were very, very good.

Don't confuse maintaining backwards compatibility and managing real concerns of large customers with bad engineering.

Is Windows FOSS now?

Couldn't help but read that title in the voice of the young John Connor, asking "Aren't they our friends now?"

Anything built by AI/LLMs should be FOSS by law. Oh I dream of the day.

What a vibe

Hitachi Vibe!

It already is, vibe coders cannot hold copyright on AI-generated code

Not to be pedantic, but not holding copyright ≠ FOSS.

FOSS means that the developer holds a copyright and is explicitly granting a license for people to use the work under FOSS provisions.

It would be more accurate to say AI Vibe code is in the public domain.

Public domain code is a subset of FOSS code, so I don't think that was inaccurate

In general terms, it isn’t really. Or at least it is a controversial topic still subject to discussion.

https://opensource.org/blog/public-domain-is-not-open-source

As to this specific topic, the fact that all of the code has to be open source is part of the 10 criteria https://en.wikipedia.org/wiki/The_Open_Source_Definition

As such, you can’t consider open source the public domain portions of a codebase that also has proprietary portions.

That's not entirely true, it doesn't make it FOSS.

  1. Vibe coders sign a contract when they use AI to generate stuff, which gives rights away to the company. Regardless of copyright protections from the state, a contract is in most cases legally binding.

  2. Copyright law requires human authorship as opposed to random generation. This doesn't inherently exclude all generative works; algorithms that were carefully crafted and datasets that were curated can potentially have their results considered "authored" by the AI company owners that made them.

In order to make it true we need to pass laws that regulate the AI companies and their slop. In the meantime, I recommend nobody uses slop code. Actually, I'd recommend that regardless of ownership rights.

I signed no contracts to download the open source local models I use for code generation, just for what it's worth

Ah yes, that model. Of course. You're right. I would know because I have read the details for every model in existence and can clearly infer which one you're talking about. Yes.

agree

There's still copyrighted code in windows, so no this is bullshit.

Code contains a lot of elements that are not copyrightable (elementary maths etc), that does not prevent the overall program from being copyrighted.

Windows is not even source-available. Windows XP is source-unintentionally-available thanks to a leak but there's no AI loophole in that.

Does anyone know of any place to keep up with what people are doing with the XP leak? I vaguely remember some 4chan threads where people worked to get it compiling properly, and I think someone ported USB 3.0 support, but I lost track after that.

Shouldn't all AI generated code be GPLv3?

Yes I think most of it technically should due to the nature of GPL code.

Why would it be a specific license?

Due to how the GPL works: basically, if you include GPL-licensed code in your project, you must release all the project code under the GPL.

Maybe I need to add yet another reason to

https://sciactive.com/2026/01/21/our-stance-on-ai-in-email/

I wish that were true but unfortunately the situation is far more dire than that...

The AI Code belongs to the AI Company more than the public, meaning they can enforce license and usage rights on the entire codebase. If only because of the contract users sign when they agree to ToS.

The opposite is also true, though, that if the AI Company generates licensed works of others due to their training data that they should be held responsible in a court of law.

The claim being made here isn’t wild conjecture. It’s based on a legal analysis done by the congressional research service. That’s a rather authoritative source that Congress itself uses to understand the implications of many things - amongst which are the implications and impacts of the laws it codifies.

What is your supposition based on, such that it’s more authoritative than that?

Just as a sanity check: the person you're responding to is a serial troll and what I can only describe as intellectually dishonest at best or a pathological liar at worst. They make up whatever they want and will never concede that the fucking nonsense they just dreamed up five seconds ago based on nothing is wrong in the face of conclusive proof otherwise.

You shouldn't waste your time responding to this cretin.

I get it; I respond to these things in a cogent and incisive fashion so that other users can see a sane counterpoint, or at least a request for justification or proof that then goes unfulfilled.

Oh, sorry, I said that totally wrong: I meant that I really appreciate your first comment and that it's not worth your time to reply to their bad-faith follow-up comment.

Windows was already copyrighted and registered with the office before ai code was introduced.

The copyright office refusing to register the version with ai code does not affect the already registered copyright.

The version with ai code is a derivative product of the registered version so M$ will get you for copyright infringement.

Not considering this obvious context makes the Twitter poster completely unreliable.

Yeah I read the link at the bottom, it doesn't claim what the post claims at all.

My supposition is that the human element that creates the code is that of the AI Company and not the user, on the basis that the user is actually incapable of doing so.

supposition

Would you be willing to elaborate a little more to raise credibility in light of this comment?

Like, what are the links posted saying then, if not the statement in this post, by your expert analysis?

Oh yarr, heck yeah, here look at this part right here:

" "

That's where the article doesn't say the generative works are public domain. And furthermore this other part:

" "

Is where it doesn't say all terms of service contracts between the user and company are magically invalidated.

Do you have any other questions I can answer by presenting quotes of the parts that are not there?

P.S. Why are you quoting "supposition"? I literally used the same term as the comment above it.

🙄 Very mature. 👍

I'm not asking what it doesn't say, in case you missed that. I was asking what it does say, according to you.

(I was quoting your use of supposition because you are supposing stuff instead of making verifiable claims.)

It's right there under OP's post, go read it.

Hey, so here's the official opinion of the US copyright office on this matter.
If you look at this part right here:

"When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application."

That's where it says that right now, generative works are ineligible for copyright.

But it doesn't say it is public domain nor invalidate the contract between the user. It also does not clearly define how much human element does or does not make it copyrightable.

The only two examples in the text were AI Generated Images.

Based on these developments, the Office concludes that public guidance is needed on the registration of works containing AI-generated content. This statement of policy describes how the Office applies copyright law's human authorship requirement to applications to register such works and provides guidance to applicants.

The Office recognizes that AI-generated works implicate other copyright issues not addressed in this statement. It has launched an agency-wide initiative to delve into a wide range of these issues. Among other things, the Office intends to publish a notice of inquiry later this year seeking public input on additional legal and policy topics, including how the law should apply to the use of copyrighted works in AI training and the resulting treatment of outputs.

Also, Not Copyrightable != Public Domain

For example, any new kind of wrench is not copyrightable either, but other types of property law, like patents and licensure, apply to its creation.

My advice, and I say this from a place of goodwill and empathy, is that nobody should use slop code nor treat it as their own code because until congress and legislative bodies around the world clarify this it is not factual.

Not Copyrightable != Public Domain

Not being under copyright means it is in the public domain. That's literally the entire definition.

The public domain is not a place. A work of authorship is in the “public domain” if it is no longer under copyright protection or if it failed to meet the requirements for copyright protection. Works in the public domain may be used freely without the permission of the former copyright owner.


invalidate the contract between the user.

Why do you keep bringing this up? Nobody else here cares and this claim isn't in dispute - open source software can be and is licensed all the time. That doesn't change your initial claims about the output from generative AI not being able to be held under copyright.


The only two examples in the text were AI Generated Images.

Man, it sure is weird how you ignore that they explicitly clarify that this applies to generated text too:

If a work's traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt  from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user.


God this is satisfying. Thank you for being like this.

But it doesn't say it is public domain nor invalidate the contract between the user.

Not Copyrightable != Public Domain

Thank you for that! 🙏

For sure! Decent people gotta stick up for each other on here, especially in the face of such prolific (and increasingly amusing) misinformation.

This take is weird, because none of the companies that do inference claim ownership of the generated content in their contracts for one, and because anyone can download open source models and generate code without entering into any ToS, for two.

If you want a fun exercise, take the comments in this thread and replace the word AI with "Stack Overflow"

Devs getting some of their code from a website is not new, even if it's via API
