
AI Search Is Eating the Web. Here’s What It’s Doing to Small Sites


A few years ago, running a small site felt simple. You wrote something useful, Google sent people your way, and a handful of those people stuck around. That loop is broken now.

AI search tools don’t send visitors. They take your words, compress them into an answer box, and move on. No click. No context. If you’re a small publisher, it feels less like competition and more like extraction.

What surprised me is this: even as AI search started eating the web, our site didn't collapse. Sessions dropped in some places, sure, but the people who did arrive stayed longer, clicked deeper, and actually cared. After digging into our 2026 data, I realized why, and it has nothing to do with ranking #1 anymore.

When ranking stopped meaning what it used to

There was a time when ranking #1 actually meant something. You earned it, you saw the traffic, and you could feel the impact almost immediately.

That relationship is gone.

Today, you can rank well and still feel invisible. The search result looks fine on paper, but the clicks barely arrive. AI Overviews answer the question before anyone needs to visit. Answer engines rephrase your work, cite it vaguely, and move on. Your article becomes raw material.

At first, I thought this meant the site was losing. Fewer clicks had to mean failure, right? But when I stopped staring at rankings and started looking at behavior, the story flipped.

The people who still click aren't skimming. They're reading. Three-minute sessions. Multiple pages. Bookmarks. These aren't accidental visits; they're intentional ones. AI didn't take those readers, because it can't replace the reason they came in the first place.

That's when it clicked for me: the goal isn't to win the results page anymore. The real challenge is to be remembered by the system doing the answering and by the humans who want more than a summary.

If an AI can fully satisfy a search with a paragraph, that traffic was never yours to keep. The sites that survive are the ones giving people something AI can’t compress without losing the point.

And that changes how you write everything.

Commodity vs experience (and why most pages quietly lose)

Here’s the uncomfortable truth I had to accept: a lot of what we publish online is easy to compress.

If a page exists mainly to explain how something works, list steps, or summarize prices, an answer engine can handle that just fine. It doesn’t need context. It doesn’t need a voice. It doesn’t need you.

Once I started looking at content through that lens, the pattern was obvious.

Content type      | The summary (commodity)  | The small site (experience)
How-to guides     | A clean 5-step list      | The part where something broke and why
Pricing info      | Estimated monthly cost   | The weird discount that only works once
Tool comparisons  | Feature-by-feature grid  | Why one option felt wrong after a week
Value             | Pure logic               | Opinion, frustration, preference

The left side is useful. I still read it. I still use it. But I don’t remember where it came from five minutes later.

The right side sticks because it’s messy. It has judgment baked in. It includes the stuff you usually cut because it feels “unprofessional” or hard to quantify.

That’s the difference.

If your article can be reduced to a tidy paragraph without losing anything important, it was always disposable. Not bad, just interchangeable. The web is full of that now.

The pages that still earn real attention are the ones that resist compression. They include the trade-offs, the regret, the “this worked but I wouldn’t do it again” moments. Those don’t summarize cleanly, and that’s the point.

Once you see this, content strategy stops being about volume or optimization. It becomes a simple question:

What part of this only makes sense if a human wrote it?


Why people still stay on small sites

Here’s the part that surprised me.

Even as AI answers get faster and cleaner, people still spend time on small sites.

When I looked at our analytics, the pattern was obvious. Traffic wasn’t exploding, but the sessions were longer. People scrolled. They clicked links. Some stayed for three, four minutes. That doesn’t happen if someone just wants a quick answer.

I think it's because AI already won the "best answer" game. If all you want is a definition, a checklist, or a rough price estimate, an AI summary does the job, and the reader moves on.

Small sites survive for a different reason.

When someone lands on a blog, they’re usually looking for judgment, not just information. They want to know what broke, what felt wrong, what you wouldn’t do again. They want to borrow someone else’s thinking so they don’t have to start from zero.

This is where AI summaries fall apart. They flatten everything.

On small sites, that messiness is the value.

I’ve noticed people trust posts more when the author admits uncertainty. When a tool almost worked. When the conclusion isn’t clean. It is a signal that a real person was involved.

In a weird way, AI made this clearer. The more perfect the summaries get, the more obvious it becomes when something was written by someone who actually had to live with the decision.

That’s why people still stay. They’re not chasing answers anymore. They’re looking for filters.

What I’m doing differently now

I stopped chasing volume.

For a while, it felt logical to publish more. Cover more queries. That instinct doesn’t survive contact with AI search. If a page can be flattened into a clean paragraph, it probably will be.

Now I write fewer pieces, but I stay with them longer. I only publish when I have something I’d tell a friend after actually using a tool.

I also stopped writing neutral content. If I don't have an opinion yet, I don't force one. I wait. That sounds inefficient, but it saved me from shipping pages that look good but say nothing.

Another shift: I assume readers already saw an AI summary before landing on my site. So I don't repeat the basics. I start where summaries usually stop: the trade-offs, the odd frustrations, the moments where something felt great on day one and annoying by day seven.

And instead of obsessing over rankings, I pay attention to what people actually engage with. Which pages they stick around on. Which posts spark emails, replies, and "this helped" messages. Those signals matter more to me now than position numbers.
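If you export your analytics data, that shift is easy to operationalize. Here's a minimal sketch of filtering pages by engagement instead of rank; the CSV column names (`page`, `avg_duration_sec`, `pages_per_session`) are hypothetical and will differ depending on your analytics tool's export format:

```python
# Hypothetical sketch: surface pages by reader engagement, not search position.
# Assumes a CSV export with columns: page, sessions, avg_duration_sec,
# pages_per_session (column names are made up for illustration).
import csv

def engaged_pages(path, min_duration=180, min_depth=2):
    """Return pages where visitors stay 3+ minutes and click deeper,
    sorted with the longest-dwelling pages first."""
    keep = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            duration = float(row["avg_duration_sec"])
            depth = float(row["pages_per_session"])
            if duration >= min_duration and depth >= min_depth:
                keep.append((row["page"], duration, depth))
    # Longest average session duration first
    return sorted(keep, key=lambda r: r[1], reverse=True)
```

The thresholds (three minutes, two pages per session) are arbitrary starting points; the point is that the filter asks "who stayed?", not "where did we rank?".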

It’s about writing in a way that still feels worth reading when answers are everywhere and cheap.

Closing Thoughts

Search is changing. That part is done. Wishing it back won’t help.

What still works is adapting without losing your voice. Writing less, but saying more. Sharing things you actually lived with, not just looked up. Giving readers something they can’t get from a summary box.

If you adjust how you write and what you offer, there's still room to win.

That’s the game now. And honestly, it’s not the worst one to play.
