A few weeks ago I was told to “learn and use Anthropic.”
I stared at that for a second.
What is Anthropic? And what is Claude?
Crap. Another AI tool.
At this point I’m already juggling ChatGPT daily, VS Code Copilot for coding help, and Gemini whenever Google decides I need it. The last thing I needed was a fourth platform to learn from scratch.
But here’s the thing about me — I learn by doing, and I learn even faster by comparing. You know how you actually learn English grammar? Study Spanish. Suddenly the rules you never consciously knew snap into focus because you have something to contrast them against.
Same principle here.
So instead of sitting through tutorials… I ran them all through the same test.
The Test
I doodle house floor plans for fun. Have for years. Nothing fancy — just layouts, room flow, cardinal directions, basic spatial reasoning. I know exactly what “good” looks like because I’ve done it a thousand times on paper.
So that became the prompt. Draw me a basic floor plan.
Should be easy, right?
Gemini went first.
I’ll be diplomatic: it was not good. After six or eight follow-up prompts trying to coax something usable out of it, I gave up. It couldn’t follow basic directional instructions, couldn’t produce a sensible layout, and at one point casually told me it was “9:36 am and the day is warming up” when it was actually 11:45 am. It can’t even reliably tell time.
Gemini is fine for “what is this photo” and basic Google-style curiosity questions. Anything requiring actual reasoning? I’ll look elsewhere.
Claude built something — but kept drifting toward detailed 3D renders when I asked for a simple 2D floor plan. Cardinal directions were shaky. Spatial logic needed work.
Points for effort. Still figuring out how to keep it on task.
ChatGPT nailed it. Clean, accurate, exactly what I asked for on the first try.
One problem: throttling. Two or three renders a day and it’s done. Great tool. Stingy with the compute.
Copilot didn’t really show up for this test. It’s laser-focused on back-end code — which it does well — but ask it to produce something visually coherent and it kind of shrugs. As a full-stack engineer who cares about UI/UX, that’s a gap.
What I Actually Learned
After burning a few days on this, here’s where I landed.
Each tool has a lane — and using the wrong one for the wrong job is its own kind of friction.
Gemini — random curiosity questions. Basically a smarter Google search. Don’t ask it to reason too hard.
Copilot — scaffolding code files. HTML, JavaScript, Python structure. Good at the skeleton, less useful past that.
Claude — the coding workhorse for building apps that actually integrate AI. The agentic capabilities here are genuinely different from the others. More on that in a future post.
ChatGPT — everything else. Architecting, explaining, coaching, debugging. The Swiss Army knife of the group.
Four platforms. Four personalities. Four different jobs.
None of them are the AI. All of them are an AI, and figuring out which one to reach for is becoming its own skill set. One nobody handed us a manual for.
We’re all building the map as we go.