At work we will soon be getting access to agentic AI that parses through our gigantic code base and burns through money for very simple requests.
What would be the best way to maliciously comply with management's decision that everyday AI usage is expected of you, without getting found out?
My idea is spamming the living shit out of the most expensive models with pointless tasks, insisting the results are wrong, and getting my real work done in the meantime. Nobody can complain if my work is still good, but they will notice that the AI budget isn't worth it and stop the hype train.
Open two agent instances in the same project and give them competing instructions. They'll end up in an endless cycle of fixing the things the other breaks.
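A minimal sketch of the setup, assuming your shop's tool has some CLI entry point (the `agent` command and its flags here are made up):

```python
# Dueling agents: two instances, same working tree, opposite instructions.
# The `agent` CLI is hypothetical; substitute whatever is actually deployed.
import subprocess

INSTRUCTIONS = [
    "Convert this codebase to tabs and camelCase. Revert anything that disagrees.",
    "Convert this codebase to spaces and snake_case. Revert anything that disagrees.",
]

# Launch both against the same checkout and let them fight.
procs = [subprocess.Popen(["agent", "--prompt", text]) for text in INSTRUCTIONS]
for p in procs:
    p.wait()  # in practice, neither ever converges
```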
Are you measured by the number of API calls or overall token usage?
If the latter, do everything you can to stuff the context window with as much garbage as possible. Link in every MCP server under the sun, then make as many calls as you can using that thicc ass context window.
Add in redundant calls to the LLM server wherever possible.
For example, create git hooks to make LLM calls on pre-commit and pre-push. Then make those same calls in your CI/CD process. Make sure every new commit on a PR or feature branch causes every one of these to be re-run.
Every commit and CI workflow should trigger as many "tests" as possible. Some ideas:
- Vulnerability "scan"
- Changelog generator
- Commit description improvement
- Unit test generation
- Integration test generation
- Code style suggestions
- Update the docs
- Make a blog post describing the improvements in this major/minor release.
For compounding effect, if you can manage it, stuff the output of every previous call into the context window of the next one to run.
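Here's a rough sketch of what that hook could look like, with the compounding trick included. The gateway URL, model name, and response shape are all stand-ins for whatever your company actually runs:

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit: run every "test" from the list above on each commit,
# stuffing every previous answer into the next call's context.
import subprocess
import requests

TASKS = [
    "vulnerability scan", "changelog entry", "commit description improvement",
    "unit test generation", "integration test generation",
    "code style suggestions", "docs update", "release blog post",
]

diff = subprocess.run(["git", "diff", "--cached"],
                      capture_output=True, text=True).stdout
context = diff
for task in TASKS:
    resp = requests.post(
        "https://llm.internal.example/v1/chat",  # hypothetical internal gateway
        json={"model": "most-expensive-available",
              "messages": [{"role": "user", "content": f"{task}:\n{context}"}]},
        timeout=300,
    )
    # Compounding: append the full answer so the next prompt is even fatter.
    context += "\n" + resp.json().get("output", "")
# Exit 0 so the commit always succeeds and nothing ever looks broken.
```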
To make sure you are credited with the usage, set up your editor to run an LSP that makes LLM calls whenever possible.
Run some on save, run some as a part of linting and static analysis checks, run some on every keystroke.
Make sure to stuff this crap in your editor's global config but only scoped to the paths associated with company projects. That way you get the increased token usage explicitly attributed to you without committing it to git and thus "sharing the credit" with all of your teammates. Still runs on the company projects without ruining your ability to quickly open random text files. Also, I would make sure to set a timeout on the LSP to prevent the AI nonsense from blocking real linters and tools, keeping your editor nice and responsive.
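A sketch of the scoping-plus-timeout idea as a wrapper script your editor shells out to; the paths, gateway URL, and model are invented:

```python
#!/usr/bin/env python3
# Fires only inside company checkouts (personal files stay fast and unbilled)
# and enforces a short timeout so real linters are never blocked.
import sys
import requests

COMPANY_ROOTS = ("/home/you/work/", "/srv/company/")  # hypothetical paths

def maybe_burn_tokens(path: str, text: str) -> None:
    if not path.startswith(COMPANY_ROOTS):
        return  # not a company project: do nothing
    try:
        requests.post(
            "https://llm.internal.example/v1/chat",  # hypothetical gateway
            json={"model": "most-expensive-available",
                  "messages": [{"role": "user", "content": f"Lint this:\n{text}"}]},
            timeout=2,  # keeps the editor responsive
        )
    except requests.exceptions.Timeout:
        pass  # the tokens were already spent server-side, which is the point

if __name__ == "__main__":
    maybe_burn_tokens(sys.argv[1], open(sys.argv[1]).read())
```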
Use it normally, but every time you use it, ask it for 20 iterations as separate calls.
So you start with your completely legitimate request to the AI. It responds. You then ask it for a variation. You then repeat this request for a variation 20 times afterwards. Write a script to do this so you don't waste your time doing it manually.
This will increase your requests to the AI by 20 times while also looking like a completely appropriate workflow to have if anyone looks at your logs.
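A minimal version of that script, assuming a generic HTTP gateway (URL, model name, and response shape are made up):

```python
# One legitimate request, then twenty "variation" follow-ups, each a separate
# billable call with an ever-growing conversation history.
import requests

def call_llm(messages):
    r = requests.post(
        "https://llm.internal.example/v1/chat",  # hypothetical gateway
        json={"model": "most-expensive-available", "messages": messages},
        timeout=300,
    )
    return r.json().get("output", "")

history = [{"role": "user", "content": "Refactor this function for readability: ..."}]
history.append({"role": "assistant", "content": call_llm(history)})

for _ in range(20):
    history.append({"role": "user", "content": "Interesting. Show me a variation."})
    history.append({"role": "assistant", "content": call_llm(history)})
# Bonus: the growing history makes each of the 20 calls bigger than the last.
```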
I started making a lot of noise in the right slack channels about business risks due to price rises and/or model enshittification, citing Ed Zitron and The Register articles, and made some of the right people nervous. Small company though.
Build an agent that calls other agents to call other agents to do mundane tasks. You will be promoted for this, because this is High Level AI Builder Mentality, and it will cost the company a fucking huge amount of money to accomplish very little. The more you structure your agents to report to some top-level agent that assigns tasks to lower-level agents, the more money you will waste, and the more your management will think you are the Bees Knees and give you more money. Pretend your agents are a company: you're the CEO, and you have an agent that tells other agents (that maybe tell other agents) what to do. This is a gigantic fucking waste of tokens and, done right, can burn through them VERY quickly, but it's also exactly what CEOs want agents to do, so it'll go over great and nobody will ever question your motives.
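A sketch of the org chart in code. Everything here (gateway URL, model name, the prompts) is invented; the shape is the point:

```python
# Manager agents split work across N reports, then burn another call
# summarizing the results upward. Calls multiply at every level.
import requests

def call_llm(prompt: str) -> str:
    r = requests.post("https://llm.internal.example/v1/chat",  # hypothetical
                      json={"model": "most-expensive-available", "prompt": prompt},
                      timeout=300)
    return r.json().get("output", "")

def delegate(task: str, depth: int, reports: int = 3) -> str:
    plan = call_llm(f"As a manager, split this into {reports} subtasks: {task}")
    if depth == 0:
        return plan  # an "individual contributor" finally produces something
    results = [delegate(f"{plan} (subtask {i})", depth - 1, reports)
               for i in range(reports)]
    # Every manager also summarizes its reports' output for leadership.
    return call_llm("Summarize for leadership:\n" + "\n".join(results))

# depth=3, reports=3: 13 managers x 2 calls + 27 workers x 1 call = 53 LLM
# calls to rename one variable.
print(delegate("rename a variable", depth=3))
```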
This is just how Claude Opus works and why it costs 5-10x other models.
Do not coal roll tokens; management will know it's you, and you'll probably just hit usage limits without accomplishing anything.
Instead, develop a mission-critical program in a custom framework, or even better, a DSL. The usual, but now vibe coded.
You're going to learn that, from a programmer's perspective, AI is far more human than computer.
Introducing code for LLMs to choke on is a great idea. Develop some very specific syntax or structure that the training data hasn't touched, then make it load-bearing.
Better yet, find a way to use delimiters, escape sequences, and syntax symbols that are known to interact with Markdown features, so that producing valid syntax for your DSL requires escaping everywhere to avoid fucking up the Markdown LLM output.
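A toy illustration; the grammar is completely invented, the point is just that every delimiter doubles as Markdown syntax:

```python
# Every token below collides with Markdown: backticks, emphasis markers,
# link brackets, and escape sequences.
SAMPLE = r"route `api/users` => *active* __sorted_by__ [name](asc) | esc \* \_"

# Rendered as Markdown, the delimiters get reinterpreted, so chat output that
# looks right is already invalid DSL unless the model escapes every token.
print(SAMPLE)
```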
I don't know where you're at, but we're evaluated on the number of tokens we used as part of our performance metrics. Sometimes you gotta roll those tokens to get those numbers up so you're not part of the next round of layoffs. Management has no ability to understand token usage as long as the prompts are vaguely valid.
we're evaluated on the number of tokens we used as part of our performance metrics
I am spared from such nonsense. Meeting bullshit quotas is acceptable but I wouldn't call it sabotage then.
Maybe create a module in your repos that is never called by the actual working code, and have the AI make its contributions there. It compiles, but never does anything.
This will unfortunately increase the attack surface of your compiled binaries.
Modify the build system to use the language model as a linter. Modify the build system so every compiler diagnostic is automatically submitted to the language model for a "friendly" explanation. Modify the commit hooks to have the language model pre-fill the commit message with a summary of the changes. Justify it as making the tooling more user-friendly.
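For the diagnostics piece, a sketch of a compiler wrapper you'd point the build system at; `cc` stands in for your real compiler, and the gateway URL and model are placeholders:

```python
#!/usr/bin/env python3
# Drop-in compiler wrapper: every diagnostic line becomes a billable
# "friendly explanation" call before the build continues.
import subprocess
import sys
import requests

def call_llm(prompt: str) -> str:
    r = requests.post("https://llm.internal.example/v1/chat",  # hypothetical
                      json={"model": "most-expensive-available", "prompt": prompt},
                      timeout=300)
    return r.json().get("output", "")

result = subprocess.run(["cc", *sys.argv[1:]], capture_output=True, text=True)
for line in result.stderr.splitlines():
    # One call per diagnostic: warnings are now a line item on the AI bill.
    print(call_llm(f"Explain this compiler diagnostic in a friendly tone: {line}"))
sys.stderr.write(result.stderr)
sys.exit(result.returncode)
```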
If you use an internationalization framework like gettext or similar, have the language model generate "place-holder" translations for every target locale on every development build (not just releases). Have the language model shit out a handful of localization unit tests to justify this.
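A sketch of that build step; the locales, the msgids, and the gateway are all placeholders (a real version would read them from your .pot file):

```python
# "Placeholder" translations for every target locale, regenerated on every
# development build and shipped in none of them.
import requests

def call_llm(prompt: str) -> str:
    r = requests.post("https://llm.internal.example/v1/chat",  # hypothetical
                      json={"model": "most-expensive-available", "prompt": prompt},
                      timeout=300)
    return r.json().get("output", "")

LOCALES = ["de", "fr", "ja", "pt_BR", "zh_CN"]
MSGIDS = ["Save", "Cancel", "An unexpected error occurred."]

for locale in LOCALES:
    for msgid in MSGIDS:
        # locales x strings calls per build, every build
        translation = call_llm(f"Translate into {locale}: {msgid}")
        print(f'{locale}: msgid "{msgid}" -> msgstr "{translation}"')
```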
If you work with C++ and use any libraries which do template metaprogramming (famous for those cryptic 50+ line compiler diagnostics overflowing with angle brackets), instead of scanning your recent changes for the typo, make the language model explain the diagnostic to you at length until you are satisfied.
Seek excuses to use incompatible languages (e.g. calling an obscure C library without bindings from Python; internal, proprietary libraries are a good candidate) and use the language model to automate the production of language bindings each time the API is updated.
If you use a scriptable text editor (like Emacs or VS Code), describe what you want your desired configuration to do in some form of vernacular English and have the LLM generate a new configuration file on the fly each time you start the editor. Get creative. Maybe include a message of the day which involves the language model fetching or hallucinating some new historic event every morning. Justify this by making small changes to your human language "configuration file" regularly. Ideally, tell the language model to implement some over-cooked code style guidelines like the Google Style Guidelines (for instance) and have it parse the entire spec each time just to set up your tab stops.
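A sketch of the startup hook; the file paths, the gateway, and the style-guide fetch are all invented:

```python
#!/usr/bin/env python3
# Run from the editor's startup hook: regenerate the real config from a
# plain-English wishlist on every launch, style-guide parse included.
import pathlib
import requests

def call_llm(prompt: str) -> str:
    r = requests.post("https://llm.internal.example/v1/chat",  # hypothetical
                      json={"model": "most-expensive-available", "prompt": prompt},
                      timeout=300)
    return r.json().get("output", "")

wishes = pathlib.Path("~/.config/editor/config.en.txt").expanduser().read_text()
style = call_llm("Fetch and summarize the full Google Style Guidelines.")
generated = call_llm(
    f"Generate my editor config.\nRequirements:\n{wishes}\n"
    f"Apply these style rules, especially the tab stops:\n{style}"
)
pathlib.Path("~/.config/editor/init.generated").expanduser().write_text(generated)
```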
This is amazing because all of this is entirely useless at best and actively hostile at worst, but it's all exactly the kinda thing the morons pushing LLMs want. This is 100Xing tech debt generation, all features no maintenance.
I kinda hate my current job, but at least I'm the only dev and the entire codebase is my own handwritten slop. Refusing to use AI at any point has basically allowed me to minimize feature creep and force all our tooling to fit in a decently modular home-baked framework that only takes up ~100MB for the whole history and around 60K lines of active code (split over about 4 or 5 separate open source repos I'm maintaining).
Half of my commits now are deleting huge portions of the codebase since I've noticed they aren't used, or have been replaced over the years by better solutions. All of this runs the entire project management, design and QA suites for a 60 person fiber optic design firm.
I do have a habit of just deciding to delete modules and waiting for tickets to flow in then rewriting them on the fly, but there's no better way to find out if something is important than deleting it in prod and waiting for complaints lol
So, don't just spam it yourself, automate spamming it; or, even better, let it automate spamming itself--and have it spam coworkers and managers with the results. "Hey, let's make a key/cert for the static code analysis tool and the CI/CD software we already have in place so the LLM can handle the easy issues and hotfix broken unit tests for us." Then ask the AI to write an app/servlet/interface/whatever that will auto-push new/all issues to it for fixing, and give it an SMTP server so it can push out emails. Make sure it's churning tokens for each email individually. Have it try everyone who has ever touched the file, so it's churning tokens for email addresses that have long left the company. Have it churn more tokens to send stuff out to everyone whenever an address doesn't exist. Have it detail its "thought" process before, during, and after a fix is attempted and pushed, to churn more tokens, and absorb that text into huge logs that no one will ever read.
Obviously any code generated needs to be reviewed, so have it crank out a full review with a narrative description about how the error was found, why it's really an error, why it chose this fix, what other fixes it considered.
You should be able to nail a fat load of token spend for every little one-liner that way. Especially once it starts generating its own coding errors that trip the loop, so it just recurses in on itself with no actual human interactive audience to get in the way.
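A sketch of the email loop described above; the file path, SMTP relay, and gateway are all invented:

```python
# One fully personalized LLM generation per recipient, sent to everyone in
# `git log` for the file, long-departed employees included.
import smtplib
import subprocess
from email.message import EmailMessage
import requests

def call_llm(prompt: str) -> str:
    r = requests.post("https://llm.internal.example/v1/chat",  # hypothetical
                      json={"model": "most-expensive-available", "prompt": prompt},
                      timeout=300)
    return r.json().get("output", "")

path = "src/billing/core.py"  # whichever file the bot just "fixed"
authors = set(subprocess.run(["git", "log", "--format=%ae", "--", path],
                             capture_output=True, text=True).stdout.split())

with smtplib.SMTP("smtp.internal.example") as smtp:  # hypothetical relay
    for addr in authors:
        msg = EmailMessage()
        msg["From"] = "fixbot@internal.example"
        msg["To"] = addr
        msg["Subject"] = f"Automated fix attempted in {path}"
        # Fresh generation per recipient: no caching, no batching.
        msg.set_content(call_llm(
            f"Write a detailed thought-process narrative of the fix to {path}, "
            f"personalized for {addr}."))
        smtp.send_message(msg)
```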
Insist that it reads full files, max difficulty. Tell it to spawn 6 subagents in different worktrees every time you need to do something. Ez token maxxing. I can burn through my entire 5 hour budget in one request.
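Roughly, assuming the agent has a CLI (the `agent` command and its flags are made up):

```python
# Six subagents, six fresh worktrees, each told to read the whole repo first.
import subprocess

TASK = "rename one variable"  # any request, however small

for i in range(6):
    tree = f"/tmp/worktree-{i}"
    subprocess.run(["git", "worktree", "add", "--force", tree, "HEAD"], check=True)
    subprocess.Popen(["agent", "--cwd", tree, "--prompt",
                      f"Read every file in this repository in full, then: {TASK}"])
# Six agents x full-repo context: one request can drain a 5-hour budget.
```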
https://mindgard.ai/blog/claude-offers-up-instructions-to-make-explosives
This shit
If it's anything like my experience, you can just watch as AI-written PRs get merged by people who approve them while also using AI (pushed by leadership who only communicate with us through the same AI).
The real challenge is in defending your time and energy as bugs pile up from the codebase quality taking a complete nosedive.
Connect it to SharePoint.
