this post was submitted on 23 Feb 2024
97 points (91.5% liked)

Technology

top 22 comments
[–] [email protected] 34 points 1 year ago (3 children)

It's a pretty extreme example, but I can definitely see how an AI would fumble historical context it would need to know.

Calling it AI at all was a mistake. It has no capacity for thought; it just spits out what it was asked for, like the world's dumbest genie.

[–] [email protected] 23 points 1 year ago

Yeah, it's called generative AI because all it does is generate random crap based on ingested material and questions. It's a fancy autocomplete, and people are relying on it.

[–] [email protected] 21 points 1 year ago* (last edited 1 year ago) (1 children)

The term "AI" has a much broader meaning and use than the sci-fi "thinking machine" that people are interpreting it as. The term has been in use by scientists for many decades already, and these generative image programs and LLMs definitely fit within it.

You are likely thinking of AGI, or artificial general intelligence. We don't have those yet, but these things aren't intended to be AGI so that's to be expected.

[–] [email protected] 14 points 1 year ago (1 children)

Thanks for saying this. I'm pretty tired of people talking about how we've changed the definition of AI to include this stuff, when I remember talking about AI in terms of things like random forests and Q-learning 20 years ago.

[–] [email protected] 16 points 1 year ago

Hell, AI for years has meant "crappy script that can make a little guy run around and shoot his gun at you sometimes."

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

It’s probably not the base model. It’s probably the hidden prompt augmentation rules that everyone has been scrambling to add to these models.

Gemini, like ChatGPT, is likely appending simple prompts with more detail behind the scenes so results come out more varied. For example, since the models regurgitate the content people promote on the internet, if you searched for “attractive person” last year, you’d probably get a white person 95% of the time. When something is overrepresented in mainstream media, gen AI will reflect that back.

Now, before something goes into the gen AI black box, it is secretly given racial and photo style modifiers so users get more diverse people and image styles unless the user manually specifies what they want to see.

Generally this works well, but it can backfire hilariously if you ask it to draw a group of people who are famous for having a certain ethnicity.

[–] [email protected] 21 points 1 year ago (1 children)

The best part is that this was added in response to people complaining about it not generating diverse enough people, so they made it randomly add different races to person requests that don't specify race.

[–] [email protected] 7 points 1 year ago (1 children)

Why would anyone do that? Just tell the AI you want diverse people yourself; making it the default only adds more moving parts that could fail (and you end up with diverse Nazis, or Ryan Gosling as Black Panther).

[–] [email protected] 19 points 1 year ago

It's done because the underlying training data is heavily biased to begin with. It's been a known issue for a long time with AI/ML; for example, racist cameras have been an issue for decades: https://petapixel.com/2010/01/22/racist-camera-phenomenon-explained-almost/.

So they do this to try to correct for biases in their training data. It's a terrible idea, and shows the rocky path forward for GenAI, but it's easier than actually fixing the problem ¯\_(ツ)_/¯

[–] [email protected] 18 points 1 year ago (1 children)

The funniest thing about this is that it's like a parable for the modern paradox of tolerance.

It reminds me of that cartoon on the front page the other day where the Palestinians being bombed were talking about being a part of history because it was a brown woman who vetoed the ceasefire.

[–] [email protected] 0 points 1 year ago
[–] [email protected] 9 points 1 year ago (1 children)

It’s gone woke! Abandon ship!

[–] [email protected] 3 points 1 year ago (1 children)

If I can't get my AI Nazified image fix, then what even is the point? /s

[–] [email protected] 3 points 1 year ago

Tay did nothing wrong

[–] [email protected] 9 points 1 year ago (1 children)

OpenAI was literally adding the word diverse into their images last year as well.

I remember getting an image once where it even wrote the word “diverse” into the image, and the AI only adds random words pulled from the prompt.

Dunno why Google doing it now is suddenly big news?

[–] [email protected] 2 points 1 year ago

New event happens, people react.

[–] [email protected] 6 points 1 year ago (1 children)

They got the training data from Reddit, what did they expect?

[–] [email protected] 6 points 1 year ago (1 children)

It's not the training data that's the problem here.

[–] [email protected] 2 points 1 year ago (2 children)

Yeah it is. The training data skews white, so they added a "make some people non-white" kludge. It wouldn't be needed if there was actually racial diversity in the training data.

[–] [email protected] 3 points 1 year ago

It's the "make some people non-white" kludge that's the specific problem being discussed here.

The training data skewing white is a different problem, but IMO not as big of one. The solution is simple, as I've discovered over many months of using local image generators. Let the user specify what exactly they want.

[–] [email protected] 2 points 1 year ago

I don’t even see the problem with that. If western corps make an AI based overwhelmingly on western (aka majority-white) datasets, they get an AI that skews white in all things.

If they want more well-rounded data they would need to buy it from China and India, and probably other parts of Asia too. Only, I don’t think those countries are willing to give those datasets away, because they are aware of their actual value and/or are more interested in creating their own AI with them (which will then, of course, skew Chinese, for example).

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Why doesn't the screenshot show the prompt, but only its title? Unnecessarily suspicious, since it wouldn't even be hard to fake.