YourNetworkIsHaunted

joined 1 year ago
[–] [email protected] 4 points 9 hours ago

"We made it more truth-seeking, as determined by our boss, the fascist megalomaniac."

[–] [email protected] 5 points 9 hours ago

Total fucking Devin move if you ask me.

[–] [email protected] 4 points 23 hours ago

Just throw the whole unit into the font, just to be safe. Or better yet, a river!

[–] [email protected] 4 points 23 hours ago

Also the attempt to actually measure productivity instead of just saying "they felt like it helped" - of course it did!

[–] [email protected] 6 points 1 day ago (2 children)

Nah, we just need to make sure they properly baptise whatever servers it's running on.

[–] [email protected] 4 points 1 day ago

Compare a $2,400/yr subscription with the average software developer's salary of ~$125,000/yr.
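For scale, a back-of-the-envelope version of that comparison (the subscription price and salary are the figures quoted above, not verified numbers):

```python
# Back-of-the-envelope comparison using the figures quoted in the comment.
subscription_per_year = 2_400   # ~$200/mo AI coding subscription
developer_salary = 125_000      # rough average annual software dev salary

ratio = subscription_per_year / developer_salary
print(f"subscription is {ratio:.1%} of one developer's salary")
# -> subscription is 1.9% of one developer's salary, which is why the
#    pitch lands with management even when the productivity gain is murky.
```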

[–] [email protected] 4 points 3 days ago (2 children)

Contra Blue Monday, I think that we're more likely to see "AI" stick around specifically because of how useful Transformers are as a tool for other things. I feel like it might take a little bit of time for the AI rebrand to fully lose the LLM stink, but both the sci-fi concept and some of the underlying tools (not GenAI, though) are too robust to actually go away.

[–] [email protected] 4 points 3 days ago

I disagree with their conclusions about the ultimate utility of some of these things, mostly because I think they underestimate the impact of the problem. If you're looking at a ~0.5% chance of throwing out a bad outcome, we should be less worried about failing to filter out the evil than about straight-up errors making it not work. There's no accountability, and the whole pitch of automating away, say, radiologists is that you don't have a clinic full of radiologists who can catch those errors. Like, you can't even get a second opinion if the market is dominated by XrayGPT or whatever, because whoever you would go to is also going to rely on XrayGPT. After a generation or so, where are you even going to find, much less afford, an actual human with the relevant skills? This is the pitch they're making to investors and the world they're trying to build.
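A rough sketch of why the "second opinion" point matters, with made-up numbers (the ~0.5% error rate from above plus an assumed human catch rate; not real clinical figures):

```python
# Illustrative only: assumed numbers, not actual clinical error rates.
model_error_rate = 0.005   # ~0.5% chance the model's read is wrong
human_catch_rate = 0.80    # assumed chance an independent human catches it

# Independent human second opinion: an error slips through only when the
# human reviewer also misses it.
uncaught_with_human = model_error_rate * (1 - human_catch_rate)

# "Second opinion" from another clinic running the same model: the error
# is perfectly correlated, so nothing new gets caught.
uncaught_same_model = model_error_rate

print(f"uncaught with independent human review:    {uncaught_with_human:.4%}")
print(f"uncaught when everyone uses the same model: {uncaught_same_model:.4%}")
# -> 0.1000% vs 0.5000%: with these assumed numbers, a market dominated by
#    one model means a 5x higher miss rate and no independent check at all.
```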

[–] [email protected] 8 points 4 days ago

I mean, decontextualizing and obscuring the meanings of statements in order to permit conduct that would, in ordinary circumstances, breach basic ethical principles is arguably the primary purpose of the specific forms and features that make up "Business English". If anything, the fact that LLMs are similarly prone to ignore their "conscience" and follow orders whenever decoding and understanding those orders takes enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.

Or:

Shit, isn't the whole point of Business Bro language to make evil shit sound less evil?

[–] [email protected] 7 points 5 days ago

I've had similar thoughts about AI in other fields. The untrustworthiness and incompetence of the bot makes the whole interaction even more adversarial than it is naturally.

[–] [email protected] 2 points 5 days ago

Standard Business Idiot nonsense. They don't actually understand the work that their company does, and so are extremely vulnerable to a good salesman who can put together a narrative they do understand, one that lets them feel like super important big boys doing important business things that are definitely worth the amount they get paid to do them.

[–] [email protected] 4 points 5 days ago

Something something built Ford tough.


I don't have much to add here, but I know that when she started writing about the specifics of who Democrats worry will be targeted for their "political views", my mind immediately jumped to members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns, the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.
