
Prompting Is the New Literacy — And Most People Are Writing Badly

Typing used to be a paid skill. Then everyone learned it and it disappeared below the line. Prompting is on the same arc in 2026 — except most people are still writing prompts the way they write Google searches. This post unpacks what actually separates a good prompt from a bad one, why context beats cleverness, and the five-line structure I use every day that turns "meh" AI output into "wait, that was actually useful."

Siddharth Puri · April 15, 2026 · 8 min read
AI & Future of Work


Ten years ago, "typing fast" was a paid skill. You put "60 WPM" on your resume. Then everyone learned to type and the skill vanished below the line of what anyone paid for. Prompting is on the same arc — just compressed into eighteen months instead of fifty years.

In 2026, "I can use ChatGPT" is the new "I can send an email." Nobody is impressed. But most people are still writing prompts the way they write Google searches — three keywords and a question mark. And then they wonder why the output is mediocre.

Google is a search box. AI is a colleague.

That framing shift is everything. A search box rewards keywords. A colleague rewards context. If you walked up to a senior engineer and said "auth flow?" they would ask fifteen follow-up questions before writing a single line. A good prompt front-loads those fifteen questions.

The people who get "bad output from AI" are almost always giving it 10% of the context they would give a new hire. Then they are surprised when the result reads like something written by someone who just joined the company today and missed the kickoff.

The five-line structure

This is the skeleton I use for almost every serious prompt. It takes 45 seconds longer than "just ask the thing" and produces output that is usually 3x better.

  • Who is the output for (audience, level, context)
  • What is the deliverable (format, length, tone)
  • What constraints matter (style, things to avoid, non-negotiables)
  • Relevant context (code, past decisions, background the model cannot guess)
  • One example of "this is the shape of the answer I want"
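If you assemble prompts in code, the five-line skeleton above can be sketched as a small template. A minimal sketch: `build_prompt` and every field value below are invented for illustration, not part of any library or API.

```python
def build_prompt(audience, deliverable, constraints, context, example):
    """Assemble the five-line brief into one prompt string."""
    return "\n\n".join([
        f"Audience: {audience}",
        f"Deliverable: {deliverable}",
        f"Constraints: {constraints}",
        f"Context:\n{context}",
        f"Example of the shape I want:\n{example}",
    ])

prompt = build_prompt(
    audience="backend engineers new to our codebase",
    deliverable="a 300-word runbook entry, plain prose",
    constraints="no jargon, no passive voice, cite file paths",
    context="we deploy blue/green on Kubernetes; rollbacks are manual",
    example="numbered steps, 'Step 1: check the dashboard' style",
)
```

The point is not the helper function; it is that every prompt you send has all five slots filled, every time.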

You will notice none of these are clever tricks. They are the same things a good brief contains. Prompting well is mostly "writing a clear brief," which is a skill that has always been valuable — just now the audience is a model instead of a human.

Context is the cheat code

The single largest delta between amateur and professional prompting is how much context gets included. Amateurs write "refactor this function." Professionals paste the function, the file it lives in, the test that is breaking, a description of the system it is part of, and a sentence about the style the codebase uses.

Both will get code back. One will get code that fits. The other will get code that looks plausible and breaks in the third edge case.
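As a concrete sketch of the difference, here is what the professional version of "refactor this function" looks like once the context is bundled in. The function, test, and file path below are all made up for illustration:

```python
# Hypothetical source and test, pasted verbatim into the prompt.
function_source = '''
def total(xs):
    t = 0
    for x in xs:
        t = t + x
    return t
'''

failing_test = '''
def test_total_handles_empty():
    assert total([]) == 0
'''

prompt = "\n".join([
    "Refactor the function below to be idiomatic Python.",
    "It lives in billing/utils.py inside a payments service.",
    "House style: type hints everywhere, no single-letter names.",
    "This test must keep passing:",
    failing_test,
    "The function:",
    function_source,
])
```

The amateur version is the last two lines of that list. Everything above them is what makes the answer fit.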

The three prompts that keep showing up in my work

  • The "explain this to me like I need to defend it in a meeting" prompt — forces the model to produce the reasoning, not just the answer
  • The "steelman the opposite" prompt — surfaces what you are missing before someone else does
  • The "act as a skeptical reviewer" prompt — turns the model into a critic of its own first draft, which usually improves the second draft
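The skeptical-reviewer pattern is just two calls: one for the draft, one asking the model to attack it. A minimal sketch, where `ask` stands in for whatever model-calling function you actually use (it is a placeholder, not a real client API):

```python
def skeptical_review(ask, task):
    """Draft once, then have the model critique and revise its own draft."""
    draft = ask(f"Complete this task:\n{task}")
    critique_prompt = (
        "Act as a skeptical reviewer. Below is a first draft.\n"
        "List its three weakest points, then rewrite it fixing them.\n\n"
        f"Task: {task}\n\nDraft:\n{draft}"
    )
    return ask(critique_prompt)
```

Because `ask` is just a callable, the same loop works with any model or client you plug in.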

Why most prompt guides age badly

Specific magic-word tricks ("act as an expert", "take a deep breath") come and go with each model version. Context, audience, format, constraints, examples — that will still work in 2030. Learn the skills that outlast the model.

Prompting is not a trick. It is a briefing skill — and briefing has always been the senior skill.

Once you realise prompting is just writing a clear brief, two things happen. You get dramatically better output, and you stop looking for the next prompt-hack Twitter thread. Both are improvements.


Keep reading

AI & Future of Work

Claude 3 vs GPT-5: What Changed and Why It Matters

March 26, 2026 · 9 min

They both claim to be the smartest thing ever built, and both demos look suspiciously similar. This is a ground-level look at how Claude 3 and GPT-5 actually differ in reasoning depth, long-context reliability, code quality and tool use — plus a blunt cheat sheet for which one to pick for which job. Written in plain English, without the benchmark theatre.

AI & Future of Work

Will AI Really Replace Developers or Just Upgrade Them?

March 18, 2026 · 8 min

The internet has been burying developers every year since 1998 and we keep showing up for breakfast. Here is the honest split — which parts of the job AI genuinely eats (boilerplate, docs, test scaffolding, Stack Overflow archaeology) and which parts quietly get harder and more valuable (product judgement, architecture, ambiguity). Short answer: it replaces the parts of your job you hated, and the parts that pay get more fun.

AI & Future of Work

Jobs That Will Survive the AI Revolution (And Why)

March 8, 2026 · 9 min

Forget the "creative vs repetitive" framing — the real line is "work customers trust a machine with vs work they do not." This post maps three tiers: jobs that will stay deeply human (health, founding sales, investigative journalism, skilled trades), jobs that will transform rather than disappear (design, engineering, teaching, support), and jobs that are quietly becoming the best bets of the decade. A calmer, less LinkedIn-flavoured take on the next ten years.