this post was submitted on 07 Jun 2023
11 points (100.0% liked)

World News

[–] [email protected] 5 points 2 years ago (1 children)

So, there's a fundamental issue here. A lot of the systems Amanda is talking about aren't actually AI.

ChatGPT, contrary to the blogosphere, is not actually AI. It does not have the capability for thought. It doesn't have the capacity to understand truth or fiction as concepts, let alone to tell them apart.

ChatGPT and similar systems are probabilistic language models. Essentially, you start one off with a sentence (a list of tokens, if you want to get technical). It then responds by looking at the training data it has been supplied with and picking out the sequence of tokens that is most likely to be the answer **the user is expecting**, given the input. Notice that bolded text? The user is expecting. Not anything else. These language models are trained to produce what users expect, nothing more, nothing less. If a user doesn't like a response, they give a thumbs down and the model recalibrates, introducing more noise and randomness into the result.
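To make the "pick the most likely next tokens" idea concrete, here is a minimal sketch of that principle, not ChatGPT's actual architecture: a toy bigram model that learns next-token frequencies from training text and then emits whichever token was most common after the previous one. All names here (`train_bigram`, `most_likely_next`, the sample corpus) are illustrative, not from the source.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count, for each token, which tokens follow it in the training text."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, token: str) -> str:
    # "Most likely" means "most frequent in the training data" --
    # the model has no concept of whether the continuation is true.
    return counts[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # "cat" follows "the" most often
```

Real systems use neural networks over far larger contexts and sample from the probability distribution rather than always taking the top choice, but the objective is the same: predict a plausible continuation, not a true one.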

These language models are actually really great at reducing manual labor on certain tasks (writing cover letters, churning out predictable essays; I've personally used ChatGPT for Shadowrun world-building), but they need a knowledgeable person using them, because they absolutely will not reliably say true things. They will say whatever their training data suggests is the most likely thing the user is asking for.

[–] [email protected] 3 points 2 years ago

A lot of the talk around AI has made this mistake. I think it's partly because ChatGPT sounds very smart and human, especially to average people.