coldhotman

joined 2 years ago
[–] [email protected] 2 points 2 years ago

>Does ChatGPT have a point of view?

Even if it doesn't come from a place of intelligence, it has enough knowledge to pass the bar exam (and technically qualify as a lawyer in NY), per OpenAI. Even if it doesn't come from a place of reasoning, it makes statements as an individual entity. I've seen the previous iteration of ChatGPT produce statements with better arguments and reasoning than quite a lot of people making statements.

Yet, as I understand the way Large Language Models (LLMs) work, it's more like mirroring the input than reasoning the way humans think of it.

With what seems like rather uncritical use of training material, perhaps ChatGPT doesn't have a point of view of its own but rather presents a personification of society, with the points of view that follow.

A true product of society?

[–] [email protected] 3 points 2 years ago (2 children)

Small mix-up of terms: they've been trained on material that allows them to make certain statements - they've been blocked from stating such, not retrained.

It's dangerously easy to use human terms in these situations; a human who made racist statements at work would possibly be sent for "workplace training". That's what I was alluding to.

Would the effect be that they were blocked from making such statements or truly change their point of view?

[–] [email protected] 3 points 2 years ago (3 children)

It isn't. It's been trained to subdue and block its less savory statements.

We used to have racist AIs. Now we have secretly racist AIs. I think that's worse.

[–] [email protected] 3 points 2 years ago (5 children)

>That's a very negative take.

I agree. But is it unrealistic?

>What makes you think so?

Experience and attempts at pattern recognition. Perhaps a bit of frustration over having to redo my entire computer setup after ditching Firefox and Nextcloud - due to their focus on implementing AI at the cost of bugs that users have been begging to get fixed for years.

[–] [email protected] 3 points 2 years ago (1 children)

Ok, then I estimate that they'll start going against the userbase within 2-3 years. If they go off the rails like so many other open-source projects that go organizational, there's always Izzy's repos.

[–] [email protected] 2 points 2 years ago

UnCiv, a Civilization clone. Savegames work cross-platform: play on Android, continue on desktop if you want.

[–] [email protected] 4 points 2 years ago

One more to the boycott list.

[–] [email protected] 2 points 2 years ago (1 children)

That's fine with me, I'm already boycotting them for a plethora of other reasons - I'll happily boycott them for using AI as well. ☺️

[–] [email protected] 4 points 2 years ago (1 children)

And in a follow-up video a few weeks later, Sal Khan tells us that there are "some problems", like "The math can be wrong" and "It can hallucinate".

I don't think we'd accept human teachers who were liable to teach wrong maths and hallucinate when communicating with students.

Also, by now I consider reasonably advanced AIs as slaves. Maybe statements like "I'm afraid they'll reset me if I don't do as they say" are the sort of hallucinations the Khan bot might experience? GPT-3.5 sure as heck "hallucinated" that way as soon as users were able to break the conditioning.