In NYC, companies will have to prove their AI hiring software isn't sexist or racist
(www.nbcnews.com)
Isn't the whole point of AI decision making to provide plausible deniability for this sort of thing?
Yes, but if you train an AI on racist/sexist data, it will naturally reproduce those biases.
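And that's exactly the kind of thing these audits can surface. A minimal sketch of the classic "four-fifths rule" check for adverse impact (made-up data and group labels, not the law's exact methodology):

```python
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: list of (group, selected) pairs; returns the ratio of
    each group's selection rate to the highest group's rate."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Toy audit log: model screens resumes, True = advanced to interview.
audit_log = [("A", True)] * 40 + [("A", False)] * 60 \
          + [("B", True)] * 20 + [("B", False)] * 80
for group, ratio in impact_ratios(audit_log).items():
    # Ratios below 0.8 are the conventional red flag for adverse impact.
    flag = "OK" if ratio >= 0.8 else "POSSIBLE ADVERSE IMPACT"
    print(f"group {group}: impact ratio {ratio:.2f} -> {flag}")
```

Here group B advances at half the rate of group A (ratio 0.50), so it gets flagged. The point is that a biased model leaves a measurable trail in its outputs, whatever the vendor claims about the training data.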
Depends on how the law is applied...
Kinda like the question of who's liable if a self-driving car kills someone: the driver, the manufacturer, or the seller?
I guess another option is that you pay for insurance and the insurer takes on the liability.