Last weekend I hit job application #2,000.
Two thousand. Over roughly two and a half years of job hunting.
And before you do the math: yes, I’ve read a lot of job listings to get there. On a good day, I’m reading 5 to 7 postings for every one I can actually apply to. Typically, it’s closer to 10-to-1. The requirements don’t match the reality of what I bring, or the stack is three left turns away from anything I’ve touched, or the role is “senior” in title and entry-level in comp. So I keep scrolling.
Conservatively? I’ve read somewhere north of 20,000 job listings at this point.
Which means when I say I’ve noticed some patterns… I’ve got the sample size.
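If you want to check my arithmetic, it’s just the application count times the read-to-apply ratio. A back-of-the-envelope sketch (the 10-to-1 ratio is the typical case I described above, not a measured constant):

```python
# Back-of-the-envelope estimate of total listings read.
# The 10-to-1 ratio is the typical case described above, not a measurement.
applications = 2_000
readings_per_application = 10  # 5-7 on a good day, closer to 10 typically

listings_read = applications * readings_per_application
print(listings_read)  # 20000
```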
The Requirement That Kept Showing Up
Somewhere in 2024, during a stretch where I was applying to 600+ jobs in a few months, a particular line started appearing with enough frequency that it stopped being surprising and started being insulting:
“3-5 years of experience with LLMs required.”
Sometimes it’s 7.
Every time I read it, the same thought: really? Because something about that number never passed the smell test. I couldn’t place exactly why it felt wrong, but it did.
So I finally just pulled the actual data.
When These Tools Were Actually Useful
ChatGPT launched in November 2022 on GPT-3.5, which was rough for real technical work. The version developers started actually trusting was GPT-4, in March 2023. Even then, it took months before it stopped feeling like a toy. Realistically, ChatGPT became useful for dev work in mid-to-late 2023.
GitHub Copilot went generally available in June 2022. Early reviews were… generous. One 2021 writeup called it “still somewhat useful.” The reliable version came in 2023 with GPT-4 under the hood. Dependable Copilot is a 2023 thing.
Building LLM features into actual products (not using ChatGPT as a tool, but integrating language models into software) has an even shorter real history. The GPT-3 API launched in 2020 but was invite-only, unreliable, and firmly in research territory. The ecosystem of tools that made it practical (LangChain, vector databases, reliable orchestration frameworks) barely existed before 2022. GPT-4 in early 2023 is genuinely when a normal dev team could pull this off without it being a research project.
Real window for production-grade LLM integration: two to three years, tops.
The Actual Adoption Numbers
The Stack Overflow Developer Survey is the gold standard here, and they didn’t even start tracking AI tool usage until 2023. Here’s the curve:
2022: effectively zero meaningful usage. The tools barely worked.
2023: ~70% using or planning to use. Trust sat around 40%. That “planning to” is doing a lot of heavy lifting.
2024: 62% actively using. For the first time, “most developers” could honestly call themselves active users.
2025: 51% using AI tools daily. For the first time, daily use cracked a majority.
The whole industry went from curious experimenters to daily workflow users in roughly two years. And here’s the part that doesn’t get talked about enough: as usage climbed, trust dropped, down to 29% by 2025. The more seriously developers actually use these tools, the more they run into the real limitations. 66% say AI solutions are close but miss the mark. 45% say debugging AI-generated code takes longer than just writing it themselves.
Nobody quietly got good at this while you weren’t paying attention. We’ve all been figuring it out together, in real time, while the tools were still maturing underneath us.
The Math On Those Job Postings
Let’s just do it plainly, because the math is actually kind of stunning.
I was reading these postings heavily throughout 2024. So let’s use 2024 as the baseline.
A 2024 job posting requiring 5 years of LLM experience is asking for experience going back to 2019, when GPT-2 had just been released as a research curiosity that OpenAI was actually afraid to fully publish because they thought it was too dangerous. It couldn’t reliably write a coherent paragraph. Nobody was building production software with it. Nobody.
A 2024 posting requiring 7 years goes back to 2017, the same year the landmark “Attention Is All You Need” paper was published: the foundational research that makes modern LLMs possible in the first place. GPT-1 didn’t even exist yet. The technology was being invented in academic labs at that exact moment. “LLM experience” as a hireable job skill was not a thing any developer on earth had, because the field was still being born.
Let that sink in.
In 2024, employers were posting requirements for skills that β by their own math β would have needed to begin accumulating before the underlying technology existed.
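The subtraction is simple enough to sanity-check in a few lines. A toy sketch (the milestone notes are mine, condensed from the timeline earlier in the post):

```python
# Toy sanity check: when would the required experience have had to begin?
# Milestone notes condensed from the timeline above.
MILESTONES = {
    2017: '"Attention Is All You Need" is published; GPT-1 does not exist yet',
    2019: "GPT-2 ships as a research curiosity",
    2021: "GitHub Copilot is still in technical preview",
}

def experience_start_year(posting_year: int, years_required: int) -> int:
    """Year a candidate would have needed to START accumulating experience."""
    return posting_year - years_required

for years_required in (3, 5, 7):
    start = experience_start_year(2024, years_required)
    note = MILESTONES.get(start, "")
    print(f"{years_required} years required in 2024 -> start in {start}. {note}")
```

Run it and every requirement lands on a year when the skill it demands could not have existed in production form.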
It’s the same classic recruiter move as demanding 5 years of React experience one year after React launched. Requirements copied and pasted by someone who heard “LLM” in a meeting and typed it into the job req without thinking too hard about whether it was even possible.
I’ve read 20,000 job listings. I’ve seen this pattern more times than I can count. And I can tell you with some confidence: a meaningful percentage of those requirements don’t describe a real candidate. They’re describing a fantasy, written by someone optimizing for the idea of a hire rather than the reality of one.
As recently as 2024, nearly one in four developers had zero plans to use AI coding tools at all. If you’re actively using them now, even imperfectly, even while still figuring it out, you’re ahead of a significant chunk of the field.
What This Actually Means
Those requirements aren’t a bar you failed to clear.
They’re a bluff. Written fast, reviewed never, posted with confidence by people who don’t fully understand what they’re gatekeeping.
The gap you’re feeling, the “I must be so far behind” feeling, is real in the sense that time passed. It’s not real in the sense that there’s a large, experienced population of LLM developers who got a years-long head start on you. That population doesn’t exist at scale. The math doesn’t allow for it.
The whole industry has been learning on the fly for about two years.
You’re not late. The posting is just badly written.
Previously: AI Triage: What Goes Where (the four-tool setup I actually use).
Next up: the data that actually made me feel better about where I am in all of this.