this post was submitted on 04 Jun 2025
332 points (98.0% liked)

Technology


I don't know if this is an acceptable format for a submission here, but here it goes anyway:

The Wikimedia Foundation has been developing an LLM feature that would produce simplified summaries of Wikipedia articles, as described here: https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries

We would like to provide article summaries, which would simplify the content of the articles. This will make content more readable and accessible, and thus easier to discover and learn from. This part of the project focuses only on displaying the summaries. A future experiment will study ways of editing and adjusting this content.

Currently, much of the encyclopedic quality content is long-form and thus difficult to parse quickly. In addition, it is written at a reading level much higher than that of the average adult. Projects that simplify content, such as Simple English Wikipedia or Basque Txikipedia, are designed to address some of these issues. They do this by having editors manually create simpler versions of articles. However, these projects have so far had very limited success - they are only available in a few languages and have been difficult to scale. In addition, they ask editors to rewrite content that they have already written. This can feel very repetitive.

In our previous research (Content Simplification), we have identified two needs:

  • The need for readers to quickly get an overview of a given article or page
  • The need for this overview to be written in language the reader can understand

Etc., you should check the full text yourself. There's a brief video showing how it might look: https://www.youtube.com/watch?v=DC8JB7q7SZc

This hasn't been met with warm reactions. The comments on the respective talk page have questioned the purposefulness of the tool (shouldn't the introductory paragraphs do the same job already?), and other complaints have been raised as well:

Taking a quote from the page for the usability study:

"Most readers in the US can comfortably read at a grade 5 level,[CN] yet most Wikipedia articles are written in language that requires a grade 9 or higher reading level."

Also stated on the same page, the study only had 8 participants, most of which did not speak English as their first language. AI skepticism was low among them, with one even mentioning they 'use AI for everything'. I sincerely doubt this is a representative sample and the fact this project is still going while being based on such shoddy data is shocking to me. Especially considering that the current Qualtrics survey seems to be more about how to best implement such a feature as opposed to the question of whether or not it should be implemented in the first place. I don't think AI-generated content has a place on Wikipedia. The Morrison Man (talk) 23:19, 3 June 2025 (UTC)

The survey the user mentions is this one: https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq and, true enough, it pretty much takes for granted that the summaries will be added: there is no judgment of their actual quality, only questions about how they should be presented. I filled it out and couldn't even find space to say that, for example, the summary they show is written almost insultingly, as if meant for very dumb children, and I couldn't even tell whether it is accurate, because they just scroll around in the video.

Very extensive discussion is going on at the Village Pump (en.wiki).

The comments are also overwhelmingly negative, some of them pointing out that the summary doesn't summarise the article properly ("Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread llm. What it definitely doesn't seem to be doing is taking existing article text and simplifying it." - user CMD). A few comments acknowledge potential benefits of the summaries, though with a significantly different approach to using them:

I'm glad that WMF is thinking about a solution of a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that it is useful, but too often produces misinformation not present in the article it "summarises". Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. β€”Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)

One of the reasons many prefer chatGPT to Wikipedia is that too large a share of our technical articles are way way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. β€”Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)

Finally, some comments problematise the whole situation, with the WMF working behind the actual wikis' backs:

This is a prime reason I tried to formulate my statement on WP:VPWMF#Statement proposed by berchanhimez requesting that we be informed "early and often" of new developments. We shouldn't be finding out about this a week or two before a test, and we should have the opportunity to inform the WMF if we would approve such a test before they put their effort into making one happen. I think this is a clear example of needing to make a statement like that to the WMF that we do not approve of things being developed in virtual secret (having to go to Meta or MediaWikiWiki to find out about them) and we want to be informed sooner rather than later. I invite anyone who shares concerns over the timeline of this to review my (and others') statements there and contribute to them if they feel so inclined. I know the wording of mine is quite long and probably less than ideal - I have no problem if others make edits to the wording or flow of it to improve it.

Oh, and to be blunt, I do not support testing this publicly without significantly more editor input from the local wikis involved - whether that's an opt-in logged-in test for people who want it, or what. Regards, -bɜ:Κ³kΙ™nhΙͺmez | me | talk to me! 22:55, 3 June 2025 (UTC)

Again, I recommend reading the whole discussion yourself.

EDIT: WMF has announced they're putting this on hold after the negative reaction from the editors' community. ("we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together")

[–] [email protected] 6 points 12 hours ago

Who exactly asked for this? Wikipedia isn't publicly traded, they aren't a for profit company, why are they trying to shove Ai into people's faces?

For those few who wanted it, there are dozens of bots that can summarize the (already kinda short) Wikipedia articles.

[–] [email protected] 5 points 12 hours ago

My immediate thought is that the purpose of an encyclopaedia is to have a more-or-less comprehensive overview of some topic of interest. The reader should be able to look through the page index to find the section they care about and read that section.

Its purpose is not to rapidly teach anyone anything in full.

It seems like a poor fit as an application for LLMs.

[–] [email protected] 4 points 13 hours ago

fucking disgusting. no place should have ai but especially not an encyclopedia.

[–] [email protected] 38 points 1 day ago

Wikipedia articles are already quite simplified down overviews for most topics. I really don't like the direction of the world where people are reading summaries of summaries and mistaking that for knowledge. The only time I have ever found AI summaries useful is for complex legal documents and low-importance articles where it is clear the author's main goal was SEO rather than concise and clear information transfer.

[–] [email protected] 29 points 1 day ago* (last edited 1 day ago) (3 children)

TIL: Wikipedia uses complex language.

It might just be me, but I find articles on Wikipedia much easier to read than the shit people sometimes write or say to me, which is often incomprehensible garbage or just doesn't make much sense.

[–] [email protected] 4 points 20 hours ago (1 children)

You've clearly never tried to use Wikipedia to help with your math homework

[–] [email protected] 2 points 13 hours ago

I never did any homework unless absolutely necessary.

Now I understand that I should have done it, because I am not good at learning shit in classrooms where there's a bunch of people distracting me, and I don't learn anything that way. Only many years later did I find out that for most things it's best for me to study alone.

That said, you are most probably right, because I have opened some math-related Wikipedia articles at some point, and they were pretty incomprehensible to me.

[–] [email protected] 17 points 1 day ago* (last edited 1 day ago) (1 children)

It really depends on what you're looking at. The history section of some random town? Absolutely bog-standard prose. I'm probably missing lots of implications as I'm no historian, but at least I understand what's going on. The article on asymmetric relations? Good luck getting your mathematical literacy from Wikipedia: all the maths articles require you to already have it, and that's one of the easier ones. It's a fucking trivial concept with a glaringly obvious example... which is mentioned, even as the first example, but by that time most people's eyes have glazed over. "Asymmetric relations are a generalisation of the idea that if a < b, then it is necessarily false that a > b: If it is true that Bob is taller than Tom, then it is false that Tom is taller than Bob." Put that in the header.
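For what it's worth, the header sentence the commenter proposes matches the standard textbook definition; as an aside (this notation is mine, not from the Wikipedia article under discussion), a relation R on a set X is asymmetric when:

```latex
% Standard definition of an asymmetric relation (textbook notation,
% not taken from the Wikipedia article being criticised):
\forall a, b \in X :\quad a \mathrel{R} b \implies \neg\,(b \mathrel{R} a)
% Example: R = "is taller than". If Bob R Tom holds, then Tom R Bob cannot.
```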

Or let's take Big O notation. Short overview, formal definition, examples... not practical, but theoretical, then infinitesimal asymptotics, which is deep into the weeds. You know what that article actually needs? After the short overview, have an intuitive/hand-wavy definition, then two well-explained "find an entry in a telephone book" examples, using two different algorithms: O(n) (naive) and O(log n) (divide and conquer), to demonstrate the kind of differences the notation is supposed to highlight. Then, with the basics out of the way, one to demonstrate what the notation (deliberately) sweeps under the rug: that it doesn't care about multiplicative factors. Short blurb about why that's warranted in practice. Then, directly afterwards, the "orders of common functions" table, but make sure to have examples that people actually might be acquainted with. Then talk about amortisation, and how you don't always use hash tables "because they're O(1) and trees are not". Then get into the formal stuff, that is, the current article.
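The telephone-book comparison the commenter proposes can be sketched in a few lines; this is an illustrative toy of my own (the names, list size, and comparison counter are assumptions, not from the comment):

```python
# Contrast O(n) and O(log n) lookup in a sorted "telephone book",
# counting comparisons to make the asymptotic difference concrete.

def linear_search(names, target):
    """Naive O(n) scan: check every entry until we hit the target."""
    comparisons = 0
    for i, name in enumerate(names):
        comparisons += 1
        if name == target:
            return i, comparisons
    return -1, comparisons

def binary_search(names, target):
    """Divide and conquer, O(log n): halve the search range each step."""
    lo, hi, comparisons = 0, len(names) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if names[mid] == target:
            return mid, comparisons
        if names[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

book = sorted(f"person{i:06d}" for i in range(100_000))
target = book[-1]  # last entry: worst case for the linear scan
_, lin = linear_search(book, target)
_, log = binary_search(book, target)
print(lin, log)  # 100000 comparisons vs. 17
```

The raw counts (100,000 vs. 17 on the same input) are exactly the kind of difference the notation is meant to highlight, while the per-step cost it ignores stays hidden.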

And, no, LLMs will be of absolutely no help doing that. What wikipedia needs is a didactics task force giving specialist editors a slap on the wrist because xkcd 2501.

[–] [email protected] 5 points 1 day ago

As I said in an another comment, I find that traditional encyclopedias fare better than Wikipedia in this respect. Wikipedians can muddle even comparatively simple topics, e.g. linguistic purism is described like this:

Linguistic purism or linguistic protectionism is a concept with two common meanings: one with respect to foreign languages and the other with respect to the internal variants of a language (dialects). The first meaning is the historical trend of the users of a language desiring to conserve intact the language's lexical structure of word families, in opposition to foreign influence which are considered 'impure'. The second meaning is the prescriptive[1] practice of determining and recognizing one linguistic variety (dialect) as being purer or of intrinsically higher quality than other related varieties.

This is so hopelessly awkward, confusing and inconsistent. (I hope I'll get around to fixing it, btw.) Compare it with how the linguist RL Trask defines it in his Language and Linguistics: The Key Concepts:

[Purism] The belief that words (and other linguistic features) of foreign origin are a kind of contamination sullying the purity of a language.

Bam! No LLMs were needed for this definition.

So here's my explanation for this problem: Wikipedians, specialist or non-specialist, like to collect and pile up a lot of cool info they've found in literature and online. When you have several such people working simultaneously, you easily end up with chaotic texts with no head or tails, which can always be expanded further and further with new stuff you've found because it's just a webpage with no technical limits. When scholars write traditional encyclopedic texts, the limited space and singular viewpoint force them to write something much more coherent and readable.

[–] [email protected] 4 points 1 day ago (1 children)

I'm from a country where English isn't the primary language, people tend to find many aspects of English complex

[–] [email protected] 3 points 1 day ago

I am also from a country where English is not widely spoken; in fact most people are not able to hold a simple conversation (though they will tell you they know ""basic English"").

I still find it easier to read Wikipedia articles in English than to understand some relatives, because they never precisely say what the fuck they want from me. One person even says such incomprehensible shit that I think their brain is barely functional.

[–] [email protected] 18 points 1 day ago (1 children)

If you can't make people smarter, make text dumber.

[–] [email protected] 7 points 1 day ago

Problem: Most people only process text at the 6th grade level

Proposal: Require mainstream periodicals to only generate articles accessible to people at the 6th grade reading level

Consequence: Everyone accepts the 6th grade reading level as normal

But... New Problem: We're injecting so many pop-ups and ad-inserts into the body of text that nobody ever bothers to read the whole thing.

Proposal: Insert summaries of 6th grade material, which we will necessarily have to reduce and simplify.

Consequence: Everyone accepts the 3rd grade reading level as normal.

But... New Problem: This isn't good for generating revenue. Time to start filling those summaries with ad-injects and occluding them with pop ups.

[–] [email protected] 8 points 1 day ago

Thanks, I hate it.

[–] [email protected] 72 points 2 days ago* (last edited 1 day ago) (6 children)

Et tu, Wikipedia?

My god, why does every damn piece of text suddenly need to be summarized by AI? It's completely insane to me. I want to read articles, not their summaries in 3 bullet points. I want to read books, not cliff notes, I want to read what people write to me in their emails instead of AI slop. Not everything needs to be a fucking summary!

It seriously feels like the whole damn world is going crazy, which means it's probably me... :(

[–] [email protected] 3 points 1 day ago (1 children)
[–] [email protected] 1 points 13 hours ago

For those of us who do skip the AI summaries it's the equivalent of adding an extra click to everything.

I would support optional AI, but having to physically scroll past random LLM nonsense all the time feels like the internet is being infested by something equally annoying/useless as ads, and we don't even have a blocker for it.

[–] [email protected] 12 points 1 day ago

It's not you.

"It is no measure of health to be well-adjusted to a profoundly sick society." Krishnamurti

[–] [email protected] 18 points 1 day ago

This is not the medicine for curing what ails Wikipedia, but when all anyone is selling is a hammer....

[–] [email protected] 165 points 2 days ago (5 children)

If they add AI they better not ask me for any money ever again.

[–] [email protected] 59 points 2 days ago (3 children)

Or moderators. Why would they need those people when the AI can fix everything for free and even improve articles?

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago) (1 children)

Honestly, I think it's a good idea. As long as it's clearly highlighted that "this is an AI-generated summary", it could be very useful. I feel like a lot of people here have never tried to, e.g., read a maths article without having a PhD in mathematics. I would often find myself trying to remember what a term means or how it works in practice, only to be met by a giant article going into extreme technical detail that I for the life of me cannot understand, but if I were to ask ChatGPT to explain it I would immediately get it.

[–] [email protected] 11 points 1 day ago

People will believe the AI summary without reading the article, and AI hallucinates constantly. Never trust an output from an LLM.

[–] [email protected] 117 points 2 days ago (8 children)

Wikipedia articles already have lead in summaries.

Fuck right off with this

A future experiment will study ways of editing and adjusting this content.

[–] [email protected] 35 points 2 days ago

These LLM-page-summaries need to be contained and linked, completely separately, in something like llm.wikipedia.org or ai.wikipedia.org.

If even a few LLM hallucinations were uncovered in these summaries, it would cast doubt on the accuracy of all page content in the project.

Keep the generated-summaries visibly distinct from user created content.

[–] [email protected] 6 points 1 day ago

πŸͺ¦πŸͺ¦πŸͺ¦πŸͺ¦

RIP Wikipedia, we will miss you

[–] [email protected] 7 points 1 day ago (6 children)

It's kind of indirectly related, but adding a query parameter udm=14 to the url of your Google searches removes the AI summary at the top, and there are plugins for Firefox that do this for you. My hopes for this WM project are that similar plugins will be possible for Wikipedia.
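For the curious, appending that parameter can also be done programmatically; here is a minimal sketch (the function name `add_udm14` is my own, and it assumes Google keeps honouring `udm=14`):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def add_udm14(url: str) -> str:
    """Return the URL with udm=14 added (or overwritten) in the query string."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["udm"] = "14"  # 14 = Google's "Web" results tab, no AI summary
    return urlunsplit(parts._replace(query=urlencode(query)))

print(add_udm14("https://www.google.com/search?q=linguistic+purism"))
# -> https://www.google.com/search?q=linguistic+purism&udm=14
```

This is essentially what the Firefox plugins mentioned above do on every outgoing search.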

The annoying thing about these summaries is that even for someone who cares about the truth, and gathering actual information, rather than the fancy autocomplete word salad that LLMs generate, it is easy to "fall for it" and end up reading the LLM summary. Usually I catch myself, but I often end up wasting some time reading the summary. Recently the non-information was so egregiously wrong (it called a certain city in Israel non-apartheid), that I ended up installing the udm 14 plugin.

In general, I think the only use cases for fancy autocomplete are where you have a way to verify the answer. For example, if you need to write an email and can't quite find the words, if an LLM generates something, you will be able to tell whether it conveys what you're trying to say by reading it. Or in case of writing code, if you've written a bunch of tests beforehand expressing what the code needs to do, you can run those on the code the LLM generates and see if it works (if there's a Dijkstra quote that comes to your mind reading this: high five, I'm thinking the same thing).

I think it can be argued that Wikipedia articles satisfy this criterion. All you need to do to verify the summary is read the article. Will people do this? I can only speak for myself, and I know that, despite my best intentions, sometimes I won't. If that's anything to go by, I think these summaries will make the world a worse place.

[–] [email protected] 1 points 13 hours ago

Thank you so much for this!!!

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago)

There is this fucking functionality of the browser called ctrl+f where you can find anything in the text. But fucking no, people can't use it easily on mobile, so instead of fucking teaching users how they can find fucking content we will get generated slop... Also fucking websites started implementing stupid shit like loading content dynamically or overriding ctrl+f with stupid javascript popups, so ctrl+f gets broken all the time. And now ctrl+f will be fucking broken because the first thing it finds will be fucking AI bullshit. Fuck you. I just hope I will be able to hide the AI with an extension.
