Explainer

The AI search gap and how to close it safely

  • Danielle Mee

    Zengenti

27 March 2026

Something has shifted in the last two years. Web visitors don’t just browse websites any more; they ask questions. ChatGPT, Perplexity, and Gemini have changed how people interact with information, shaping what they expect from every digital interaction.

Increasingly, visitors arrive on your website ready to ask a question, and what they find is a search box. Sometimes a good one, but most often not. Almost always, the results are a list of links they have to dig through themselves.

The evolving expectations of your users

Whether it’s students asking "What are the entry requirements for your Computer Science foundation year?" or a resident trying to find out what day their bin collection moves to after a bank holiday, the expectation is the same: a clear answer, straight away.

When that expectation isn’t met, it creates frustration. The negative experience drives people to pick up the phone, which costs everyone time and money.

The reality is that user behaviour has changed; they ask AI for answers because the responses are better. Visitors want conversational, answer-led experiences, especially for high-intent queries, not a list of links. Here’s why many organisations are stalling and how to change that.


Why hasn't everyone already deployed AI search?

The hesitation is understandable, and frankly, it's the right instinct. Public sector and higher education organisations can't treat AI as a fast-moving experiment where the occasional wrong answer is an acceptable cost. The stakes are different here.

The fear of fake news

We’ve considered what's on the line. A prospective student given incorrect information about tuition fees during clearing might make a life-altering decision based on a hallucination. A resident given wrong details about benefit eligibility could fail to claim support they're entitled to, or worse, be told they qualify when they don't.

These aren't harmless edge cases; they're institutional failures with potential reputational and legal consequences.

The fear of getting it wrong

The problem is that the expectation gap is growing, and the pressure to respond is real. It's producing two common outcomes:

  • Rushing into AI: A knee-jerk reaction driven by fear of being perceived as "behind on AI", leading to rushed decisions and poor implementations.
  • Freeze: Not doing anything because the risk of deploying a general-purpose LLM on an institutional website is very real.

Many teams are caught between recognising the shift in user expectations and not yet having a safe, practical way to respond.


The distinction that changes everything: ungrounded vs. grounded AI

The reason general AI tools like ChatGPT carry hallucination risk is that they draw answers from an enormous, ungoverned body of training data: the entire internet.

Ungrounded AI

Its strength, and its weakness, is the ability to reference all types of information, some of which is out of date, inaccurate, or simply not applicable to your institution. When a user asks about your university's clearing process, a general LLM does its best to answer based on what it knows about clearing in general. It answers the question, but there is no control over the answer it provides.

Ungrounded AI on your website

  • Draws on training data beyond your content
  • Can hallucinate plausible-sounding wrong answers
  • No control over brand or tone
  • No safety filtering for harmful prompts
  • No insight into what users are actually asking

Grounded AI

Grounded AI, such as Insytful AI Search, works differently. It only ever generates answers from your own published, approved content. If the answer isn't on your website, the search won't invent one. In fact, it'll admit if it doesn't have the answer.

Grounded AI on your website

  • Answers drawn entirely from your approved content
  • If the answer isn't on your site, it'll say so rather than invent one
  • Responses are brand-aligned and institution-specific
  • Intelligent safety filtering is built in
  • Analytics dashboard shows trends, gaps and user intent

Content grounding is what makes AI search safe enough to deploy in environments where accuracy isn't optional. But it's worth noting that although content grounding is a critical safeguard, it is not a substitute for content quality.

AI search answers are only as good as the source material. If approved content is outdated, inconsistent or incomplete, the answers surfaced will be weak, even if they are grounded.


A short explanation on how content grounding works

Content grounding keeps responses tied to your approved content rather than generating answers from elsewhere. Here's how it works:

  1. A user asks a question in natural language, phrased however they like (it even understands spelling mistakes), in any language.
  2. The query is safety-filtered. Harmful, inappropriate, or off-topic prompts are blocked before they are processed.
  3. The search runs only against your approved content. The AI retrieves and synthesises answers exclusively from your published website content and nothing else.
  4. A direct, accurate answer is returned. Not a list of links, but a conversational response, in the user's language, with the ability to ask follow-up questions, with links to content for reference.
  5. Analytics capture what was asked. Popular queries, trending topics, and answer quality feedback surface content gaps and inform your content strategy.
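The steps above can be sketched in simplified code. This is an illustrative sketch only, not Insytful's implementation: the content store, the blocked-terms safety filter, and the word-overlap retrieval below are hypothetical stand-ins for what a production system would do with proper embeddings and moderation.

```python
from dataclasses import dataclass, field

# Stand-in safety filter: a real system would use a moderation model,
# not a word list. Hypothetical terms for illustration only.
BLOCKED_TERMS = {"exploit", "attack"}

@dataclass
class GroundedSearch:
    pages: dict                                  # approved content: {url: page text}
    query_log: list = field(default_factory=list)

    def answer(self, question: str) -> dict:
        q_words = set(question.lower().split())

        # Steps 1-2: the query is safety-filtered before processing.
        if q_words & BLOCKED_TERMS:
            return {"answer": None, "blocked": True, "sources": []}

        # Step 5: log the query for analytics, even if it goes unanswered,
        # because unanswered questions reveal content gaps.
        self.query_log.append(question)

        # Step 3: retrieve only from approved content (naive word overlap
        # here; a real system would use semantic retrieval).
        scored = [
            (len(q_words & set(text.lower().split())), url)
            for url, text in self.pages.items()
        ]
        best_score, best_url = max(scored)

        # Step 4: answer only when the site actually covers the topic;
        # otherwise admit there is no answer rather than invent one.
        if best_score == 0:
            return {"answer": None, "blocked": False, "sources": []}
        return {
            "answer": self.pages[best_url],
            "blocked": False,
            "sources": [best_url],
        }
```

The key property is in step 4: when retrieval finds nothing, the sketch returns no answer at all, which is what distinguishes a grounded system from an ungrounded one.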

AI Search: The business case with numbers

The operational argument for AI search comes down to reducing avoidable demand on your teams.

Reducing call centre costs

The average cost of a phone call to a UK contact centre is £5.58. For councils fielding repeated queries about planning applications or benefit claims, deflecting even a fraction of those calls adds up fast.

For universities, the pressure hits hardest during clearing, when admissions teams are swamped with questions that could be answered instantly online. Deflecting those calls adds up just as fast.

Search is one factor in whether users self-serve successfully. While it will not eliminate avoidable demand on its own, improving how quickly people can find answers can support channel shift.

Reducing bounce rate

Website bounce rates also affect the number of people who pick up the phone. 53% of mobile users abandon a site if it takes more than 3 seconds to load, and when you add link-heavy search results, the drop-off rate increases.

Simply put, if a user cannot find what they want quickly, they'll pick up the phone instead. For councils, that means calls instead of self-service, each incurring a cost; for universities, prospective students may simply look elsewhere.

Either way, reducing bounce rate through faster page load times and AI search improves user engagement and can lead to increased revenue.

Moving with the times

AI has changed search behaviour and, as a result, AI-style search is becoming a normal expectation. Thanks to increased interaction with AI overviews and ChatGPT, users have moved away from keyword-based guesswork to expecting natural-language responses. Ofcom has described the shift as a move from search engines to "answer engines".

The evidence is clear: AI-assisted discovery is becoming more common across consumer and institutional contexts.

The UK government is moving in the same direction. The GOV.UK app has already introduced an AI-powered search, indicating that this technology isn't a future consideration. It's happening now. Organisations that have not yet adapted risk offering a search experience that feels out of step with how people now expect to find information online.


The analytics angle

Most teams focus on the user-facing benefits of AI search, but the back-end data is equally compelling. When Insytful AI Search logs what your users are asking, it gives you something traditional analytics never could: direct, unfiltered evidence of intent.

Not which pages people visited, but what they actually want to know. That's a different kind of insight that is genuinely valuable. The search analytics don't just measure performance; they inform your content strategy in real time.
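To make the contrast concrete, the kind of aggregation described above can be sketched in a few lines. The log format and example questions here are hypothetical; the point is that the raw material is questions, not page views.

```python
from collections import Counter

# Hypothetical query log: (question, was_answered) pairs captured by the search
log = [
    ("what are the entry requirements", True),
    ("when is bin collection", True),
    ("when is bin collection", True),
    ("is the gym open on sunday", False),
]

# Most-asked questions: direct evidence of user intent
top_questions = Counter(q for q, _ in log).most_common(2)

# Asked but unanswered: candidate pages your content team should write
content_gaps = sorted({q for q, answered in log if not answered})

print(top_questions)
print(content_gaps)
```

Page-view analytics would show only which URLs were hit; a question log like this shows what people wanted and where the site failed to answer.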


The window of opportunity

The UK government's AI Playbook, published in February 2025, makes clear that public-sector AI adoption is now an expectation, not an aspiration. The institutions that move thoughtfully now will have operational advantages and institutional learning that those who wait will need to catch up on.

More practically, every month that passes with a keyword search box on your site is another month of residents calling instead of finding answers online, prospective students getting better digital experiences from competitor institutions, and content teams without the data to know what their audiences actually need.


What about the stakeholders who need convincing?

This is usually where AI projects stall. Legal wants to know the AI can't go off-script. Brand and communications want control over tone and brand voice. IT want something that doesn't require a major integration project.

These are legitimate concerns, and the good news is that grounded AI search is specifically designed to address them. Content grounding means it only ever answers from approved, published content, so legal has a clear boundary to point to.

When it comes to Insytful AI Search, it sits on top of your existing site without touching your CMS, keeping your digital team happy. It's quick to get started, with most implementations happening in weeks, not months. And because answers are generated from your own content, your tone and messaging always stay yours.


The cost of waiting

The expectation gap doesn't close on its own, and your users aren't going to lower their standards because AI is technically complex to deploy. They will continue to compare your website with AI-powered experiences they encounter elsewhere, and will notice when finding an answer on your website becomes hard work. And the longer that gap stays open, the harder it becomes to close.

  • For universities, competitor institutions are already moving and investing in digital experiences. This approach will increasingly shape how prospective students judge an organisation long before they've ever visited a campus.
  • For councils, with Local Government Reorganisation on the horizon, demonstrating digital maturity is not just good practice; it is essential. It is part of showing that services are accessible, efficient and fit for the future when decisions about structures are being made.
  • For the NHS and the wider healthcare sector, helping people find the right answer quickly has always mattered. Better digital journeys can support faster access to information, reduce friction, and improve overall operational efficiency.

Poor digital journeys do not just damage user experience; they also create avoidable phone demand. When users cannot find the answer online, they switch channels. Research from HMRC found more than a third of contact attempts were progress-chasing rather than new enquiries. And for some government services, a digital transaction is almost 20 times cheaper than a phone call.

The good news is that the barrier to safe deployment is lower than most teams assume. The technology exists to close this gap without accepting the risks that come with general-purpose AI. The institutions that move thoughtfully now will be better placed than those who wait. If you'd like to see how grounded AI search would work with your own content, we're happy to show you.


See it in action with your content

The best way to test an AI search's capabilities is to try it for yourself. Tell us which site you'd like to try with AI search, and we'll set up a demo using your own content, so you can see exactly how it would work for your institution.
