AI Dependency: The New Digital Addiction

We replaced TikTok dopamine with "one more prompt" dopamine — the interface changed, the dependency did not. This piece lists the early-warning signs (you open the AI tab before opening the problem, you treat "AI said so" as evidence, you feel a small panic when the service is down), and walks through a genuinely boring cure: friction, ritual, and a world where your first move is always to think. If this tool vanished tomorrow for a week, what would you do?

Siddharth Puri · February 18, 2026 · 7 min read
Skill Loss & Learning


If you have ever felt mildly panicked when the AI service was down, congratulations — you have a dependency. That is not a moral failing. That is a behavioural pattern. Dependencies form around anything that reliably reduces friction in exchange for a small dopamine hit. AI tools are an almost perfect fit for that loop.

Why this is specifically addictive

Social media is addictive because it hits you with variable rewards — sometimes great content, sometimes garbage, never predictable. AI is addictive differently: it hits you with consistent relief. You have a stuck thought, you prompt, relief arrives. That is classic negative-reinforcement conditioning and it is arguably stickier than variable rewards because the relief is so reliable.

After six months of heavy use, you will instinctively reach for an AI tab the moment you feel stuck, and that stuck sensation used to be the signal to think harder.

The early-warning signs

  • You open the AI tab before opening the problem — the first move is to ask, not to look
  • You cannot start a blank page, email or code file without scaffolding from a model
  • You treat "AI said so" as evidence, as if the model were a peer-reviewed source
  • You feel a small panic or irritation when the service is slow or down
  • You no longer bother to remember things, because you "can just ask"
  • You have stopped building mental models of problems, because the model can do it for you on demand

Any two of these signs indicate a mild dependency. Any four, an advanced one. At six, you should actively worry about what will happen to your career when (not if) you are asked to produce original thinking under pressure.
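The thresholds above can be encoded as a tiny self-check. A minimal sketch in Python; the sign wording and the label names are my own shorthand for this post's thresholds, not a real assessment instrument:

```python
# Six early-warning signs from the list above, compressed to short labels.
SIGNS = (
    "AI tab before the problem",
    "cannot start a blank page without scaffolding",
    '"AI said so" treated as evidence',
    "panic when the service is slow or down",
    "no longer bother to remember things",
    "stopped building mental models",
)

def dependency_level(answers) -> str:
    """Map yes/no answers to the post's thresholds: 2 = mild, 4 = advanced, 6 = worry."""
    n = sum(bool(a) for a in answers)
    if n >= 6:
        return "worry"
    if n >= 4:
        return "advanced"
    if n >= 2:
        return "mild"
    return "clear"
```

Run it honestly once a month; the point is the moment of answering, not the label.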

The cure is boring

The cure is always boring. It is friction, ritual, and a default reordering of your moves. You are not going to delete AI — that is not the goal. You are going to make AI the second move, not the first.

Friction tactics

  • Do not keep AI as a pinned tab. Make yourself actually open a new tab or window. The extra two seconds matter more than you think
  • Remove mobile apps for AI tools if you catch yourself prompting on the phone
  • Use a browser that requires you to type the URL — auto-complete is a dependency accelerator
  • For the tools you most overuse, add a 10-second "are you sure" ritual — a screen you have to click through
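The last tactic, the 10-second "are you sure" screen, can be sketched as a small wrapper you launch AI tools through instead of a pinned tab. Everything here (the function name, the prompt wording, the injectable `ask`/`wait` hooks) is a hypothetical illustration, not an existing tool:

```python
import time

def friction_gate(delay_seconds: float = 10.0, ask=input, wait=time.sleep) -> bool:
    """Pause, then require an explicit 'yes' before the AI tool opens.

    `ask` and `wait` default to real stdin and real sleeping, but are
    injectable so the gate can be exercised without blocking.
    """
    wait(delay_seconds)  # the deliberate ten-second pause
    answer = ask("Have you spent 5 minutes thinking first? Open anyway? (yes/no) ")
    return answer.strip().lower() == "yes"

# Usage, wired into whatever launcher or shell alias you use:
#   if friction_gate():
#       webbrowser.open(AI_TOOL_URL)   # AI_TOOL_URL is whichever tool you overuse
```

Ten seconds sounds trivial; in practice it is exactly long enough for "wait, I know how to do this" to surface.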

Ritual tactics

  • Every morning: 30 minutes of work before opening any AI tool. Non-negotiable
  • Every problem: 5 minutes of thinking / writing before prompting, no exceptions
  • Every prompt: before hitting send, write down the answer you expect. This forces thinking
  • Every week: one full "AI-free day." Just one. It will feel ridiculous. Do it
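The "write down the answer you expect" ritual is easy to enforce with a tiny journal that refuses to record a prompt until a prediction is written. The filename and function name here are hypothetical, a sketch of the ritual rather than a finished tool:

```python
import datetime
import json
import pathlib

JOURNAL = pathlib.Path("prompt_journal.jsonl")  # hypothetical default location

def log_prompt(prompt: str, expected_answer: str,
               journal: pathlib.Path = JOURNAL) -> dict:
    """Record what you expect *before* you ask; refuse if you skipped the thinking."""
    if not expected_answer.strip():
        raise ValueError("Write down the answer you expect before prompting.")
    entry = {
        "ts": datetime.datetime.now().isoformat(timespec="seconds"),
        "prompt": prompt,
        "expected": expected_answer,
    }
    with journal.open("a") as f:  # append one JSON object per line
        f.write(json.dumps(entry) + "\n")
    return entry
```

Reviewing the journal weekly tells you something the model never will: how often your own prediction was already right.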

The question that rebuilds independence

The single most useful question every AI user should ask themselves regularly: what would I do if this tool vanished tomorrow for a week? If your honest answer is "I would panic and most of my work would stop," that is worth fixing while you still have the slack to do it.

A softer framing

This is not about being anti-AI. I use it every day; I will use it tomorrow. This is about staying the senior partner in the relationship. The model is a tool. You are the person. When you invert that — when you become the assistant to a model — the career arc is short and unpleasant.

Stay dependent on your own thinking. Let AI be the accelerant, not the engine.
