this post was submitted on 04 Dec 2023
55 points (100.0% liked)

Tech News

[–] [email protected] 11 points 1 year ago (1 children)

Honestly, I'm increasingly feeling that things like this are a decent use case for a technology like ChatGPT. People suck and definitely have ulterior motives to advance their own group's interests. With AI, there's at least some degree of impartiality. We definitely need to regulate the shit out of it and set clear expectations for transparency in its use, but we're not necessarily doomed. (At least in this specific case.)

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

There's no impartiality in the training data an LLM derives its answers from. This is no better than anyone who owns a media consortium or lobbying group writing a bill for a politician. An LLM can easily be directed to reflect or mirror the prompts it is given. A prime example is the exploit prompts that have been found to get ChatGPT to reveal its training data.

https://www.businessinsider.com/google-researchers-openai-chatgpt-to-reveal-its-training-data-study-2023-12?op=1

https://news.mit.edu/2023/large-language-models-are-biased-can-logic-help-save-them-0303

https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/

https://arxiv.org/pdf/2304.00612.pdf

[–] [email protected] 2 points 1 year ago (1 children)

I think that's where the transparency comes in. What prompts exactly were used? Is it at all independently repeatable?

That's where the advantage lies. With humans, the reasoning is truly a black box.

Also, I'm not arguing that LLMs are free of bias, just that they have a better shot at impartiality than any given politician.
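One way to make that transparency concrete would be to publish the exact prompts alongside a hash of each response, so anyone could re-run them and check whether the output is actually repeatable. A minimal sketch of the idea (the `stub_model` here is a hypothetical stand-in for a real LLM call, e.g. one made at temperature 0; the function names are mine, not from any existing tool):

```python
import hashlib

def audit_run(model, prompt, runs=3):
    """Call the model several times with the same prompt and log a
    hash of each response, so the run can be independently verified."""
    records = []
    for i in range(runs):
        response = model(prompt)
        records.append({
            "run": i,
            "prompt": prompt,
            "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        })
    return records

def is_repeatable(records):
    """True if every logged run produced an identical response."""
    return len({r["response_sha256"] for r in records}) == 1

# Hypothetical stand-in for a real LLM API call; a real model
# (with sampling disabled) would be swapped in here.
def stub_model(prompt):
    return "Section 1. Definitions. ..."

log = audit_run(stub_model, "Draft a definitions section for a privacy bill.")
print(is_repeatable(log))  # True for this deterministic stub
```

A real model with sampling enabled would likely fail this check, which is sort of the point: publishing the prompt log at least makes the failure visible.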

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

The issue is when bills are not written by politicians or when they skirt committee which is what lobbyists do. LLMs are just another tool for that, except they're even worse as there are fewer humans employed in the process.

As far as answering

*What prompts exactly were used? Is it at all independently repeatable?*

That's all in the provided links.