Technology
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; such posts are otherwise subject to removal.
Rules:
1: All Lemmy rules apply
2: No low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
in the case of google translate (or any translation tool for that matter) it's not even an issue with the ml algorithm itself: separate translations can be created specifically for languages that have non-gendered pronouns, to say something like "they" or "he/she" or whatever. for other non-concrete cases it's a different issue of course

i am actually against using ml wherever it even remotely makes sense. imo the entire movement has been made worse by the hype around it, which skewed its applications away from topics where its use could be very helpful and bring improvements to society (things like science and medicine), toward things which are easy to monetize, and now we have people with phds in ml trying to discover new ways to keep users on youtube longer to watch more ads
my point being that, if you could throw away all the unnecessary applications of ml where gender/race/ethnicity bias could be a problem (like automated job hiring, crime profiling, information gathering for monetization purposes), there aren't that many things left, and for the ones that are left the easy fix would be just getting more non-standard data, where [semi]supervised learning is concerned of course
but maybe i'm wrong, i'm curious what you think
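the ungendered-pronoun idea above could be sketched roughly like this; a minimal post-processing step rather than any real translation tool's API, and the language codes, pronoun table, and function names here are all illustrative assumptions:

```python
# Hypothetical sketch: for source languages whose third-person singular
# pronoun carries no gender (e.g. Turkish "o", Finnish "hän"), surface
# every plausible English rendering instead of silently guessing one
# gender. The tables below are illustrative, not exhaustive.

UNGENDERED = {
    "tr": {"o"},      # Turkish
    "fi": {"hän"},    # Finnish
}

# verb agreement for the copula, since singular "they" takes "are"
BE = {"they": "are", "she": "is", "he": "is"}

def candidate_renderings(source_lang: str, pronoun: str, predicate: str) -> list[str]:
    """Return every plausible English sentence '<pronoun> <be> <predicate>'."""
    if pronoun.lower() in UNGENDERED.get(source_lang, set()):
        options = ["they", "she", "he"]   # offer all options rather than guess
    else:
        options = [pronoun]               # pronoun is already unambiguous
    return [f"{p} {BE.get(p, 'is')} {predicate}" for p in options]

print(candidate_renderings("tr", "o", "a doctor"))
# shows: ['they are a doctor', 'she is a doctor', 'he is a doctor']
```

google translate ended up shipping something in this spirit (showing both a feminine and a masculine translation for some ambiguous inputs), which suggests the fix really does live outside the model itself.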
I'm not that knowledgeable about ML, but from what I've seen, I wholeheartedly agree. For tasks where any bias is an issue, ML shouldn't be used unless it can be developed in a way that properly deals with those biases. Failing to do so always ends up reinforcing the issues you mentioned.