As of April 24 you'll be feeding the Octocat unless you opt out

I'd say I should pull my code from it, but I'm bad enough at coding that I think I'll leave it to poison the well

of course. that's why microslop has it.

What if people start sending malicious code to github? Poisoning the AI model

That's what I tell myself I'm doing when I push more poorly written code to one of my repos

To opt out, GitHub users should visit /settings/copilot/features and disable "Allow GitHub to use my data for AI model training" under the Privacy heading.

Do you guys really believe these opt-out buttons do anything?

Not /s btw, genuine question.

To mirror your question: do you really believe that a significant fraction of users will uncheck this checkbox?

Personally, I think only a few percent will do this and Microsoft does not care about losing their data.

But why would they honor it in the first place? You have absolutely no way to check what they do with your data.

So we're just supposed to take their word for it?

Maybe. But I wonder how it would apply: are my contributions to another user's repository still used for training if that user didn't opt out?

Placebo buttons to boil the frog.

Color me shocked! Jk, everybody saw that coming... It was probably hinted at early to gauge reactions, then they went ahead when people didn't complain too much.

The data GitHub wants includes:

- Model outputs that have been accepted or modified;
- Model inputs including code snippets shown;
- Code context surrounding your cursor position;
- Comments and documentation you've written;
- File names and repo structure;
- Interactions with Copilot features (e.g. chats); and
- Feedback (e.g. thumbs up/down ratings)...

As the FAQs explain: "If a Copilot user has their settings set to enable model training on their interaction data, code snippets from private repositories can be collected and used for model training while the user is actively engaged with Copilot while working in that repository."

Source: https://www.theregister.com/2026/03/26/github_ai_training_policy_changes/

Yay, it's not enough that most companies store highly confidential code on GitHub, now we will let a PUBLIC agent be trained on them.

Wonder how long it will take for people to find ways around guardrails and have the model essentially copy the entire codebase of a specific company with a simple prompt.

Do note that GitHub explicitly excludes those with an Enterprise (or other corporate) plan for precisely those reasons.

They do so now, but what are the chances that in a few years we get an update like:
- "To keep up with global changes, we've enabled Copilot for everyone, but there's still an option to opt out! Pro plans and up include that option, it's entirely up to you! Meanwhile, we hope Copilot helps everyone for free, along with a whole 10 tokens we're gifting you right now!"

People still won't leave 🤷

I'm home sick today. Gonna start migrating my shit away from there

I stand corrected. Luckily 👍
