For most of the last two decades, SEO mostly meant one thing: where you ranked on Google (and occasionally Bing). The customer journey was familiar: someone searched, scanned a list of links, clicked, and explored. Now the journey is increasingly 'ask, get an answer, take action.' And the platforms shaping that journey include ChatGPT, Claude, Gemini, Perplexity, and Google itself, which is inserting AI summaries, what Google calls AI Overviews, into search results.
When AI tools started taking off, Google faced a serious problem: the risk of its search results being flooded with AI-generated spam. Left unchecked, the world's most-used search engine would lose trust, and with it, revenue. Search drives almost 57% of Alphabet's income, totaling over $198bn annually, and that revenue was at risk. AI spam isn't like old-school SEO spam: it's better written, harder to detect, and convincing enough to fool algorithms.
Several weeks after rolling out support for Preferred Sources globally, Google added official help documentation to help site owners understand what the feature is and how to encourage readers to subscribe to their sites as a preferred source. Google rolled out Preferred Sources globally in December, after launching it in the US and India in August and beta testing it in June. The new help documentation is available here if you need it.
What happens when the AI companies (inevitably) encounter spam and SEO/GEO manipulation attempts in the markdown files targeted at bots? What happens when the .md files no longer provide an experience equivalent to what users are seeing? What happens if the bots continue crawling those pages but discard the content before using it to form a response? And aren't we conflating "bot crawling activity" with "the bots are using/liking my markdown content"? How will we know whether they're actually using the .md files or not?
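One thing you *can* measure today is whether AI crawlers are even fetching your .md files. As a minimal sketch, assuming a common-log-format access log and a handful of well-known AI crawler user-agent tokens (the exact strings and log layout on your server may differ, and a fetch still doesn't prove the content was used in any answer):

```python
import re
from collections import Counter

# Illustrative list of AI crawler user-agent substrings; real strings
# vary by vendor and over time, so treat these as placeholders.
AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

# Matches the request, status, size, referrer, and user-agent fields of
# a typical combined-log-format line.
LOG_LINE = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) [^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def md_fetches_by_bot(log_lines):
    """Count requests for .md paths, grouped by AI bot user-agent token."""
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if not m or not m.group("path").endswith(".md"):
            continue
        for bot in AI_BOTS:
            if bot in m.group("ua"):
                counts[bot] += 1
    return counts

sample = [
    '1.2.3.4 - - [01/Jan/2025:00:00:00 +0000] "GET /docs/page.md HTTP/1.1" '
    '200 512 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '1.2.3.5 - - [01/Jan/2025:00:00:01 +0000] "GET /docs/page HTTP/1.1" '
    '200 2048 "-" "Mozilla/5.0"',
]
print(md_fetches_by_bot(sample))  # Counter({'GPTBot': 1})
```

Even this only answers the crawling half of the question; whether the fetched markdown actually shapes a model's responses is invisible from server logs, which is exactly the conflation the paragraph above warns about.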