this post was submitted on 27 Apr 2023
11 points (100.0% liked)
Technology
you are viewing a single comment's thread
The biggest ethical issues in AI/ML right now are primarily ones in which judgement is passed, or facilitated, by AI/ML. Judgement via surveillance is only one of many concerns: judgement in healthcare, judgement about who gets access to resources, judgement in the legal system, and other forms of judgement are also extremely high-value and worrying applications of AI/ML. Using these models to identify, categorize, or otherwise quantify anything is generally a bad idea, because they are trained on fundamentally racist, sexist, homophobic, transphobic, ableist, ageist, and otherwise bigoted text that is a direct reflection of our existing society on the internet.
The issue is that technology is advancing faster than wisdom.
I think it's quite a bit more complicated than that. The wisdom is there: I've been to a large number of AI/ML ethics talks in the last several years, including entire conferences, but the people putting on these conferences and the people actually creating and pushing these models don't always overlap. Even when they do, people disagree on how these models should be implemented and how much ethics really matters.
@Gaywallet @Hirom ethics are damned because money talks. As usual, the problem is not the technology or a failure to understand the potential issues per se, but how it all gets blatantly ignored because of a get-there-first gold rush, real or imagined. We have to remember that the training of most LLMs is already very questionable from a copyright/authorship POV, and companies try really hard to make everyone ignore that. Because the winner takes it all.