Mid stage enshittification of GitHub and GitHub Copilot
2026-04-01
2026-03-31
You are not allowed to abdicate your responsibility for what you create.
There is a difference between crafted objects and ones that have been slopped together. You can feel when judgement has been applied and when it's lacking. You know deep down in your core when no human has bothered to look at what was designed, written, or assembled.
This was true long before AI came along.
2026-03-11
Today, generating niche interfaces for humans is democratized.
But, really, how much of a leap is it to make this a niche interface for an agent? Not very.
So we're going to see this explosion of amazing tooling compounding daily, allowing everyone to focus their energy on what they love (vs. being resigned to fighting their software to accomplish their vision).
Mid stage enshittification of GitHub and GitHub Copilot: Ads in pull requests. They walked it back, but the advertisements were an example of what I posted about yesterday: abdication of responsibility to exercise good judgement.
To the GitHub Copilot team's credit, they explicitly called out 'poor judgement' as the core issue and took some responsibility after the fact.
If you take their stated goal at face value, the team was trying to surface 'helpful tips' to users. How they defined a 'tip' and where they chose to surface them both lacked appropriate judgement. As a friend recently commented, they met the specification (ads in pull requests) but not the requirements (helpful tips for users).
Zack Manson connected the dots to Doctorow for me with this quote on enshittification:
“Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.”
Cory Doctorow
I'm bullish on agentic AI. This turns out to be an unpopular opinion. A recent-ish survey shows only 10% of people in North America and Europe are more excited than concerned about AI.
Given the endless hype, the amount of FUD - fear, uncertainty, and doubt - in media reports, and the sheer volume of slop, I get why people have concerns.
That said, my optimism is 100% grounded in my personal experience over the last three-plus years. One person's experience shouldn't convince you on its own, but if you're among the roughly 50% who are more concerned than excited about AI, it's worth asking what it would take to change your mind. Be a bit more curious about the possibilities. Try the contrarian viewpoint.
Should you build for the future or build for right now? I've done both. Plenty. In code and design. You eventually develop an intuition for which path to take. But today, all of those intuitions in your multi-objective optimization have to be completely re-assessed.
Have you ever shipped something without test coverage because the acute pain of not shipping now was greater than the unknown future pain of a regression? Do you still not have standardized tokens in your design system because the acute pain of listening to developers' refactoring complaints is greater than the future reward of making their work easier?
Those intuitions about pain don't apply anymore. You can have it both ways. You can ship right now and have your agents continuously improve the testability of your system. You can codify design standards, because agents love that kind of clarity and don't complain about refactoring. In both cases, agents can continuously rewrite documentation for future humans and agents alike.
Even your intuition of how hard it might be to build continuous agentic improvements into your day-to-day work is probably wrong. The cost of writing a skill, tool, or a full agent is approaching zero.
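For a feel of how small that cost is, here's a toy sketch of a recurring "improve the tests" chore. It's a sketch under stated assumptions: ask_agent() is a hypothetical stand-in for whatever agent API or CLI you actually use, and the prompt wording is just one way to encode success criteria and a stop condition.

```python
# Toy "agent chore": ask an agent to improve test coverage by one small
# step, with explicit success criteria and a stop condition in the prompt.
# ask_agent() is a hypothetical stand-in for your real agent API or CLI.

PROMPT = """\
Pick exactly one source file under src/ that has no corresponding test
under tests/, and write one focused test for its most important public
function.
Success criteria: the new test passes and the existing suite stays green.
Stop condition: if every source file already has a test, reply DONE and
change nothing.
"""

def ask_agent(prompt: str) -> str:
    # Placeholder: swap in a call to whatever agent tooling you use,
    # e.g. an SDK client or a subprocess invoking a CLI.
    return "DONE (stub: no agent wired up yet)"

if __name__ == "__main__":
    # Run this from cron or CI and small improvements compound daily.
    print(ask_agent(PROMPT))
```

The code is deliberately trivial; the real work is in stating the success criteria and the stop condition.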
You just have to get started, and start building new intuitions.
Design is crucial. Period. Just enough design. Just in time.
What is just enough or just in time?
What problem are you solving? How irreversible are decisions you make? What's the impact of getting it wrong? Who's impacted? When do they need it?
Only you can find the answers to questions like these; they're unique to your situation. And don't hate me for pointing out that how you get those answers is a process.
Just enough process.
If your new business idea is to take an existing product and do it the 'new way' while incumbents are still doing it the 'old way', you're cooked.
It's only a matter of time until they do it the 'new way' too, and your growth potential vanishes.
Consider instead solving new problems that didn't exist, or weren't as painful, until the 'new way' arrived. Maybe build something around that.
(This is a note to myself as much as anyone else)
It's really important to give your agents solid success criteria and clear stop conditions.
Today I got a local file search engine tooled for my agents. It could work as part of RAG, or in my case just as a place for a research assistant to pull from and cite my ebooks and transcripts.
But without clear instructions for when to stop, agents love to keep crafting 100 variations of a search query, trying to find an answer, not knowing that “no result” is a valid result.
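A minimal sketch of what that guardrail can look like, assuming a hypothetical search_index() standing in for my local index of ebooks and transcripts; the query budget of three is an arbitrary illustration:

```python
# Stop-condition wrapper for a local search tool. search_index() is a
# hypothetical stand-in for whatever full-text index the agent calls.

MAX_QUERIES = 3  # hard budget: no endless query reformulation

def search_index(query: str) -> list[str]:
    # Placeholder for a real index over ebooks and transcripts.
    return []

def search_with_budget(queries: list[str]) -> str:
    for query in queries[:MAX_QUERIES]:
        hits = search_index(query)
        if hits:
            return f"Found {len(hits)} passages for {query!r}: {hits[:3]}"
    # "No result" is a valid, terminal result; say so explicitly so the
    # agent stops instead of inventing query variation number 101.
    return "NO_RESULT: nothing in the index answers this. Stop searching."

if __name__ == "__main__":
    print(search_with_budget(["platform decay", "enshittification"]))
```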
I'm reminded that teams are like this too. If you're not careful about setting clear success criteria, they'll spin their wheels or keep making up work. It's not their fault; they can't read minds. It's a failure of direction.