Wikipedia in the Age of AI: Why the World’s Largest Knowledge Platform Needs Our Attention More Than Ever

From the outside, all seems well: Wikipedia continues to be used by millions around the world, editorial activity remains strong, and accessing the world’s most comprehensive free encyclopaedia is as natural in 2025 as it ever was. But under the surface, something is shifting. A new study from King’s College London casts a clear, sober light on the subtle power shift taking place in the digital knowledge space – one driven by artificial intelligence.

At first glance, the researchers’ findings are reassuring – but they also sound a quiet alarm.

Wikipedia is alive – for now

Published in the Collective Intelligence journal of the Association for Computing Machinery (ACM), the study analysed user behaviour across twelve language editions of Wikipedia over a three-year period – six from regions where ChatGPT is available and six where it is not. The result? No drop in views or edits, even where ChatGPT is widely accessible – in fact, most editions saw a slight rise in page visits.

This challenges the widely held assumption that tools like ChatGPT are replacing Wikipedia. People still seem to trust the platform – at least as a silent benchmark or background reference. But that’s only half the story.

The invisible erosion: AI takes, but rarely gives back

At the same time, the researchers highlight how AI systems have reshaped their relationship with Wikipedia. Through large-scale scraping, they harvest Wikipedia’s content en masse to train language models – a practice that, while legal, raises important questions.

Wikipedia itself gains very little from this. Most generative AI tools do not cite the platform, don’t link back to the original articles, and don’t drive traffic to the site. In essence: the content is used, but the platform loses visibility – and, over time, relevance.

“AI developers are sending their scrapers to Wikipedia, driving up server loads, then delivering answers that appear to come out of nowhere,” says Professor Elena Simperl, computer scientist at King’s College London and co-author of the study. This isn’t just a technical or legal issue – it’s a matter of digital fairness.

A question of responsibility – and the future

Study lead Neal Reeves goes further, calling for a “new social contract” between AI firms and Wikipedia. The basic idea: if you use Wikipedia’s content to train commercial systems, you should also contribute back – whether in the form of infrastructure support, transparency, or at the very least, proper attribution.

The urgency of this call is reinforced by recent developments: on the same day the study was published, Wikimedia Germany announced the launch of the Wikidata Embedding Project – a new system designed to make it easier for AI models to access curated, verified Wikipedia content. The aim is to provide artificial intelligence with traceable, reliable sources – and to ensure that Wikipedia remains visible as one of them.
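The core mechanism behind embedding-based access is semantic similarity search: curated statements are stored alongside numeric vectors, and a query is answered by returning the statement whose vector lies closest to the query’s – together with the source, so the answer stays traceable. The following toy sketch illustrates that principle only; it is a generic illustration with made-up three-dimensional vectors, not the Wikidata Embedding Project’s actual API or data.

```python
import math

# Hypothetical store: each curated fact is paired with an embedding vector.
# Real systems use learned embeddings with hundreds of dimensions; these
# 3-D vectors are purely illustrative.
facts = {
    "Berlin is the capital of Germany":        [0.9, 0.1, 0.0],
    "The Rhine flows through western Germany": [0.7, 0.3, 0.1],
    "Python was created by Guido van Rossum":  [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def best_match(query_vec):
    # Return the stored fact most similar to the query vector, plus its
    # score -- keeping the answer tied to a named, checkable source.
    fact, vec = max(facts.items(), key=lambda kv: cosine(query_vec, kv[1]))
    return fact, cosine(query_vec, vec)

fact, score = best_match([0.85, 0.15, 0.05])  # a query "about German geography"
```

The design point is that, unlike an opaque generated answer, the retrieved statement carries its provenance with it – exactly the visibility the study argues Wikipedia is currently losing.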

Because the real threat isn’t data theft. It’s the quiet disappearance of Wikipedia from public awareness – even as other platforms build lucrative products on its foundations.

Why this matters to all of us

Wikipedia is not just another website. It’s one of the few cultural pillars of the free internet – non-profit, ad-free, and not driven by opaque algorithms. Its content is created by people who write, edit and review voluntarily. This system is radical – and fragile.

If AI models use Wikipedia’s content without honouring that social contract, we risk losing more than a website. We lose transparency, accountability, and collective control over our digital knowledge base.

The question isn’t whether Wikipedia will “survive”. The question is: in what form, under what conditions, and with whose support?

Conclusion: Wikipedia needs more than clicks – it needs awareness

Wikipedia is alive. Still. It is used, quoted, copied – by humans and machines alike. But it stands at a crossroads. Not because of a drop in traffic, but because the rules of engagement are changing. If AI is to shape the future of knowledge, then Wikipedia must be part of that future – not as a silent data source, but as an active and acknowledged partner.

That means tech companies must take responsibility. Governments and institutions must set fair standards. And we – whether as readers, developers or decision-makers – must understand that open knowledge is not guaranteed.

Because knowledge is power. And if we forget the foundation it rests on, we’ll eventually lose our footing altogether.

Alexander Pinker
https://www.medialist.info
Alexander Pinker is an innovation profiler, future strategist and media expert who helps companies understand the opportunities behind technologies such as artificial intelligence for the next five to ten years. He is the founder of the consulting firm "Alexander Pinker - Innovation Profiling", the innovation marketing agency "innovate! communication" and the news platform "Medialist Innovation". He is also the author of three books and a lecturer at the Technical University of Würzburg-Schweinfurt.
