
The four models of search tools
Customer Servant

Workers in customer service give people the things they request. If someone asks for a “burger and fries”, they don’t query whether the request is good for the person, or whether they might really be after something else.

The search model we call Customer Servant is somewhat like the first computer-aided information retrieval systems introduced in the 1950s. These returned sets of unranked documents matching a Boolean query – using simple logical rules to define relationships between keywords (e.g. “cats NOT dogs”).

Librarian

As the name suggests, this model somewhat resembles human librarians. Librarian also provides content that people request, but it doesn’t always take queries at face value. Instead, it aims for “relevance” by inferring user intentions from contextual information such as location, time or the history of user interactions. Classic web search engines of the late 1990s and early 2000s that rank results and provide a list of resources – think early Google – sit in this category. (A toy sketch contrasting these first two retrieval styles appears at the end of this section.)

Journalist

Journalists go beyond librarians. While often responding to what people want to know, journalists carefully curate that information, at times weeding out falsehoods and canvassing various public viewpoints. Journalists aim to make people better informed.

The Journalist search model does something similar. It may customise the presentation of results by providing additional information, or by diversifying search results to give a more balanced list of viewpoints or perspectives.

Teacher

Human teachers, like journalists, aim to give accurate information. However, they may exercise even more control: teachers may strenuously debunk erroneous information, while pointing learners to the very best expert sources, including lesser-known ones. They may even refuse to expand on claims they deem false or superficial.

LLM-based conversational search systems such as Copilot or Gemini may play a roughly similar role. By providing a synthesised response to a prompt, they exercise more control over presented information than classic web search engines. They may also try to explicitly discredit problematic views on topics such as health, politics, the environment or history. They might reply with “I can’t promote misinformation” or “This topic requires nuance”. Some LLMs convey a strong “opinion” on what is genuine knowledge and what is unedifying.

“meanwhile, on Gemini:”
— Vera Kurian (@verakurian.bsky.social) 5 March 2025 at 04:03
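To make the difference between the first two models concrete, here is a minimal sketch in Python. It is not taken from any real search engine: the three-document corpus, the whitespace tokenisation and the overlap-count scoring are assumptions made purely for illustration. The Boolean function returns an unranked set of matches, in the spirit of the Customer Servant; the ranked function orders documents by a crude relevance score, in the spirit of the Librarian.

```python
# A toy contrast between the two retrieval styles described above. The corpus,
# tokenisation and scoring are illustrative assumptions, not any real system.

corpus = {
    1: "cats are independent pets",
    2: "dogs and cats can live together",
    3: "training dogs takes patience",
}


def boolean_search(include: set[str], exclude: set[str]) -> set[int]:
    """Customer Servant: the unranked set of documents containing every
    'include' term and none of the 'exclude' terms (e.g. cats NOT dogs)."""
    results = set()
    for doc_id, text in corpus.items():
        words = set(text.split())
        if include <= words and not (exclude & words):
            results.add(doc_id)
    return results


def ranked_search(query: set[str]) -> list[int]:
    """Librarian (very roughly): score documents by term overlap with the
    query and return them best-first, rather than as an unranked set."""
    scores = {doc_id: len(query & set(text.split()))
              for doc_id, text in corpus.items()}
    return sorted((d for d, s in scores.items() if s > 0),
                  key=lambda d: scores[d], reverse=True)


print(boolean_search({"cats"}, {"dogs"}))  # {1}
print(ranked_search({"cats", "dogs"}))     # [2, 1, 3]
```

Even in this toy version, the Librarian’s ordering depends on a scoring choice, which is exactly where the value judgements discussed below can creep in.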
No search model is best
We argue each of these search models has strengths and drawbacks.

The Customer Servant is highly explainable: every result can be directly tied to keywords in your query. But this precision also limits the system, as it can’t grasp broader or deeper information needs beyond the exact terms used.

The Librarian model uses additional signals, such as data about clicks, to return content more aligned with what users are really looking for. The catch is that these systems may introduce bias. Even with the best intentions, choices about relevance and data sources can reflect underlying value judgements.

The Journalist model shifts the focus toward helping users understand topics, from science to world events, more fully. It aims to present factual information and various perspectives in balanced ways. This approach is especially useful in moments of crisis – like a global pandemic – where countering misinformation is critical. But there’s a trade-off: tweaking search results for social good raises concerns about user autonomy. It may feel paternalistic, and could open the door to broader content interventions.

The Teacher model is even more interventionist. It guides users towards what it “judges” to be good information, while criticising or discouraging access to content it deems harmful or false. This can promote learning and critical thinking. But filtering or downranking content can also limit choice, and it raises red flags if the “teacher” – whether algorithm or AI – is biased or simply wrong. (A toy illustration of this kind of downranking appears at the end of this section.)

Current language models often have built-in “guardrails” designed to align them with human values, but these are imperfect. LLMs can also hallucinate plausible-sounding nonsense, or avoid offering perspectives we might actually want to hear.

“New research shows that AI chatbots often fail to admit when they couldn't answer a question accurately, instead providing incorrect or speculative responses. Premium chatbots were even more prone to confidently giving wrong answers compared to their free versions. www.cjr.org/tow_center/w…”
— Matthew Facciani (@matthewfacciani.bsky.social) 15 March 2025 at 01:29
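For readers who want to see the Teacher model’s interventions in schematic form, here is a deliberately simplified sketch. The Result type, the flagged field and the penalty value are invented for this illustration; real systems rely on far more elaborate, and more contested, signals.

```python
# Illustrative only: a crude "Teacher"-style re-ranker that downranks results
# flagged as low credibility. The flag and penalty are invented for this sketch.

from typing import NamedTuple


class Result(NamedTuple):
    url: str
    relevance: float  # score from an upstream ranker (assumed given)
    flagged: bool     # e.g. marked by some misinformation classifier


def teacher_rerank(results: list[Result], penalty: float = 0.5) -> list[Result]:
    """Multiply the score of flagged results by `penalty` (< 1), then re-sort.
    Setting penalty = 0 filters flagged content out entirely."""
    def adjusted(r: Result) -> float:
        return r.relevance * (penalty if r.flagged else 1.0)

    kept = [r for r in results if adjusted(r) > 0]
    return sorted(kept, key=adjusted, reverse=True)


results = [
    Result("https://example.org/debunked-claim", relevance=0.9, flagged=True),
    Result("https://example.org/expert-overview", relevance=0.7, flagged=False),
]
print(teacher_rerank(results))                # expert-overview now ranks first
print(teacher_rerank(results, penalty=0.0))   # flagged result removed entirely
```

The design choice is visible in miniature: whoever sets the flag and the penalty decides what gets demoted or removed, which is precisely the autonomy and bias concern raised above.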