In early January 2026 I was listening to Transistor Radio, the Semi-Analysis podcast, where Doug O’Laughlin was talking about how good Claude Code had become. I follow a lot of analysis covering what’s happening in LLMs (Ben Thompson, Semi-Analysis, and others), and this one stuck with me. Then in February they published “Claude Code is the Inflection Point”, showing you could track AI coding tool usage through Co-Authored-By trailers in commit messages. I wanted to see the numbers for myself, so I built scripts to pull data from GitHub’s search API across every AI coding tool I could find: commits, PRs created, PR interactions, and issue comments.
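The scripts themselves aren't in this post, but the core of the approach can be sketched. Assuming each tool signs its commits with a known Co-Authored-By trailer (the trailer strings below are illustrative, not a verified list), a minimal query builder against GitHub's commit search endpoint might look like:

```python
# Sketch of the query-building step: one search URL per tool per week.
# Trailer fragments are illustrative assumptions, not an exhaustive list.
from urllib.parse import urlencode

TOOL_TRAILERS = {
    "claude-code": '"Co-Authored-By: Claude"',
    "github-copilot": '"Co-authored-by: Copilot"',
}

def commit_search_url(trailer: str, week_start: str, week_end: str) -> str:
    """Build a GitHub commit-search URL for one tool and one week.

    The committer-date qualifier narrows results to a single week so
    weekly counts can be compared across tools.
    """
    query = f"{trailer} committer-date:{week_start}..{week_end}"
    return "https://api.github.com/search/commits?" + urlencode({"q": query})

url = commit_search_url(TOOL_TRAILERS["claude-code"], "2025-09-29", "2025-10-05")
```

In practice you would page through the results (or just read `total_count` from the response) for each tool and each week, which is where the weekly series below comes from.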
The week of September 29 2025, there were 27.7 million public commits on GitHub. Claude Code accounted for 180,000 of them, about 0.7%. By the week of March 16 2026, total weekly commits had grown to 57.8 million (itself a 2.1x increase, likely driven in part by AI tooling), and Claude Code accounted for 2.6 million, or 4.5%. All AI coding tools combined now sit at roughly 5% of all public commits on GitHub. For context, GitHub’s Octoverse 2025 report recorded 986 million code pushes for the year, with monthly pushes topping 90 million by May 2025, and that trajectory hasn’t slowed down.
Claude Code went from 0.7% to 4.5% of all public GitHub commits in six months
(You’ll notice a dip in the share chart around early March. That’s not Claude slowing down, it’s total GitHub commits spiking to 61.8 million that week, roughly double the norm. Claude’s absolute numbers kept climbing, but the denominator jumped so the percentage dipped temporarily. The last week is also incomplete, only covering through March 27.)
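The denominator effect is easy to check against the post's own numbers. The share calculation is just tool commits over total commits, and holding Claude's volume roughly constant (an assumption, purely for illustration) against the spiking March denominator reproduces the dip:

```python
# Recomputing the shares quoted above from the raw weekly counts.
def share(tool_commits: int, total_commits: int) -> float:
    """Tool's share of all public commits that week, as a percentage."""
    return 100 * tool_commits / total_commits

sept = share(180_000, 27_700_000)     # week of Sept 29: ~0.65%
march = share(2_600_000, 57_800_000)  # week of March 16: ~4.5%
growth = 57_800_000 / 27_700_000      # total-commit growth: ~2.1x

# The early-March dip: similar absolute volume against a spiking denominator.
dip_week = share(2_600_000, 61_800_000)
```

Same numerator, bigger denominator, smaller percentage, which is why the absolute counts are the more honest series to watch that week.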
I enjoyed writing this post, but that doesn't mean I had fun. Given the language being used in the industry, I come to some conclusions about the future that I'm not thrilled about, job displacement chief among them.
Two conversations have been rattling around in my head this month, and they frame the same question from completely opposite directions.
The first is Steve Yegge on The Pragmatic Engineer podcast with Gergely Orosz. Steve's argument is blunt. Every company has a dial, and he puts it like this: "Everybody has a dial that they get to turn from zero to 100. And you can keep your hand off the dial, but it just has a default setting of what percentage of your engineers you need to get rid of in order to pay for the rest of them to have AI. Because they're all starting to spend their own salaries in tokens." He thinks the dial is being set at about 50. Half the workforce gets cut to fund AI for the remaining half. And as he points out, "half your engineers don't want to prompt anyway, and they're ready to quit."
Hello! I tend to send the email about any new post sometime after I've published it. I know that's not traditional, but I tend to keep changing the post after publishing, and I worry that email is a bit more permanent. So instead of getting the post before the wider web, I hope to at least give you a preview of some of the thinking and rationale behind my work.
For a long time I've been wanting a space to quickly build out ideas through an LLM on a machine where I can run nearly any command. Jules' repo-less projects and ClawedBot (RIP name) were the inspiration I needed to just start building something and testing the ideas that I hope to document over the coming weeks, but here's a sneak peek of what I am thinking:
This post - can we build systems that are just prompt? Yes, but they are different types of programs from the ones we experience today.
Memory-first applications - this post and OpenClaw are making me think more about how application logic falls out of the data inside the system, rather than the system being designed first
Claude as my login shell - I replaced Bash with Claude, and it's been a pretty eye-opening glimpse of how a future of computing might look
Agents in a box - Building on the ideas of memory and Claude as the login shell, what if every user on a Linux system were an agent? I've spent some time exploring a Docker system where agents communicate with each other over email and have full access to the Linux system as normal users (it's fun; if the LLM needs some software, I get an email asking why). The entire experience reminded me of when I worked in a small business in Liverpool
This isn't the direct intersection of the web and AI, but the improvements to Claude Code and agentic loops have changed how I think about software more than I thought they would.
Hey folks, I got into a bit of a flow at the weekend and created this post to showcase some of what I have been talking about on this blog. I sometimes struggle to articulate this: we have this very malleable platform that is often just used to display the author’s intent, but it’s the only platform I know of that lets the user shape their view of it. In a world where LLMs can manipulate the content that people see, I believe this is the best platform to build on, and we should be really pushing the boundaries of what Web + LLM can do together.
Anyway, I’d love to get your thoughts and feedback about this post.
NotebookLM is one of my favorite applications in decades. If you haven’t experienced it before, it’s an application that lets you pull in sources from all around - Google Drive, PDFs, public links - collate them into a notebook, and then query or transform that content. Want to turn five research papers into a podcast? Done. Need to extract key takeaways from a collection of articles? Easy. It’s a fundamentally different way of interacting with information that wasn’t possible before large language models.
Hello - Paul here. I wanted to give some extra context, which I thought would be useful, about this post that I shared on the blog late last night.
As frustrating as the web can be to build for sometimes, it's come on leaps and bounds in the last 10 years. We have huge leaps in capability tied to massive strides in security, hard-fought over the years.
What I talk about in this post shows, I think, some of what is possible at the intersection of the Web and AI platforms (specifically LLMs).
There's still plenty for browser vendors to do, but I think we have an excellent runtime and huge amounts of investment across the many companies building the engines that power the web.
It’s been nearly 9 months since I started this blog. I feel that while I’ve kept up a good pace of articles and dived deeper into my thoughts on the intersection of web and AI (specifically LLMs), a lot of what I’ve done is hidden away, because it’s the things I’ve been building to help me test my ideas and hypotheses.
To set some context, I’m the manager and lead of the Chrome Developer Relations team. My day job is to help my team be successful (they are successful when they help developers build amazing websites and help the web to thrive). Up until 2024 I’d been personally very pessimistic about the health and future of the Web. The platform is competing against mobile platforms (specifically Apps) and the platforms defined by those Apps (Facebook, Instagram, TikTok), and not really succeeding. These new platforms made it even easier to share ideas and content, and the general thought was that all computing use by the billions of people on the planet would move to them; you could see and feel the slow decline of the web.
LLMs have enabled me to be incredibly productive in my day job, but more importantly they have revitalised my passion for the web, because 1) I think it’s the most versatile medium that we have ever seen (and will ever see), and the ability of LLMs to parse and manipulate content gives us a way to build entirely new experiences instantly for anyone with a computer and an internet connection, and 2) they rekindled my love of experimenting and pushing the boundaries of what is possible on the medium that is called “The Web”.
I certainly don’t dismiss the challenges that LLMs might also present for the medium, but I’m also happy to work out how to tackle these while also building and pushing the capabilities of browsers.
Hey hey - this has been a post that has been on my mind for a while, but I couldn't quite work out what I wanted to say. It wasn't until I got a demo working last night that it clicked (you will see the first image and video it produced once it was working).
I'm really interested in the future of generative UI and the potential for a "Generative Web", but I think there are a lot of unsolved issues.
I'm looking for feedback (for and against), ideas, and suggestions as I dive into what hypermedia might be like now that we have LLMs.
I remember my early days building for the web. We had no separation of concerns. We used <font> and <center> tags, transparent spacer.gifs, and complex table layouts to force our content into a shape. Presentation and content were a single, messy soup.
My first encounter with CSS in Netscape Navigator 4 was a mind-blowing moment. It was the first time I was confronted with the idea that you could (and should) separate the document’s structure (HTML) from its presentation (CSS).
This concept was cemented for the entire industry by the CSS Zen Garden. It was the ultimate demo: one single HTML file, hundreds of completely different visual designs. This idea, that content and presentation are two different things, has stuck with me ever since.
Hello! Paul here with a quick note before the email. I'm still trying to get the hang of Buttondown email. I thought it had processed all of my previous posts, but nope. I have some thoughts on token processing and living dangerously and would love your thoughts and feedback.
This is a very quick post. I had an idea as I was walking the dog this evening and I wanted to build a functioning demo and write about it within a couple of hours.