abucci

joined 2 years ago
[–] [email protected] 2 points 1 week ago (1 children)

@[email protected]

So let me make sure I understand your argument. Because nobody can be held liable for one hypothetical death of a child when an accident happens with a self driving car, we should ban them so that hundreds of real children can be killed instead. Is that what you are saying?

No, this strawman is obviously not my argument. It's curious you're asking whether you understand, and then opining afterwards, rather than waiting for the clarification you suggest you're seeking. When someone responds to a no-brainer suggestion, grounded in skepticism but perfectly sensible nevertheless, with a strawman seemingly crafted to discredit it, one has to wonder if that someone is writing in good faith. Are you?

For anyone who is reading in good faith: we're clearly not talking about one hypothetical death, since more than one real death involving driverless car technology has already occurred, and there is no doubt there will be more in the future given the nature of conducting a several-ton hunk of metal across public roads at speed.

It should go without saying that hypothetical auto wreck fatalities occurring prior to the deployment of technology are not the fault of everyone who delayed the deployment of that technology, meaning in particular that these hypothetical deaths do not justify hastening deployment. This is a false conflation regardless of how many times Marc Andreessen and his apostles preach variations of it.

Finally, "ban", or any other policy prescription for that matter, appeared nowhere in my post. That's the invention of this strawman's author (you can judge for yourself what the purpose of such an invention might be). What I urge is honestly attending to the serious and deadly important moral and justice questions surrounding the deployment of this class of technology before it is fully unleashed on the world, not after. Unless one is so full up with the holy fervor of techno-utopianism that one's rationality has taken leave, this should read as an anodyne and reasonable suggestion.

[–] [email protected] 3 points 1 week ago (4 children)

@[email protected] @[email protected]
to amplify the previous point, taps the sign as Joseph Weizenbaum turns over in his grave

A computer can never be held accountable

Therefore a computer must never make a management decision

tl;dr A driverless car cannot possibly be "better" at driving than a human driver. The comparison is a category error and therefore nonsensical; it's also a distraction from important questions of morality and justice. More below.

Numerically, it may some day be the case that driverless cars have fewer wrecks than cars driven by people.(1) Even so, it will never be the case that when a driverless car hits and kills a child the moral situation will be the same as when a human driver hits and kills a child. In the former case the liability for the death would be absorbed into a vast system of amoral actors with no individuals standing out as responsible. In effect we'd amortize and therefore minimize death with such a structure, making it sociopathic by nature and thereby adding another dimension of injustice to every community where it's deployed.(2) Obviously we've continually done exactly this kind of thing since the rise of modern technological life, but it's been sociopathic every time and we all suffer for it despite rampant narratives about "progress" etc.

It will also never be the case that a driverless car can exercise the judgment humans have to decide whether one risk is more acceptable than another, and then be held to account for the consequences of their choice. This matters.

Please (re-re-)read Weizenbaum's book if you don't understand why I can state these things with such unqualified confidence.

Basically, we all know damn well that whenever driverless cars show some kind of numerical superiority to human drivers (3) and become widespread, every time one kills, let alone injures, a person no one will be held to account for it. Companies are angling to indemnify themselves from such liability, and even if they accept some of it no one is going to prison on a manslaughter charge if a driverless car kills a person. At that point it's much more likely to be treated as an unavoidable act of nature no matter how hard the victim's loved ones reject that framing. How high a body count do our capitalist systems need to register before we all internalize this basic fact of how they operate and stop apologizing for it?

(1) Pop quiz! Which seedy robber baron has been loudly claiming for decades now that full self driving is only a few years away, and depends on people believing in that fantasy for at least part of his fortune? We should all read Wrong Way by Joanne McNeil to see the more likely trajectory of "driverless" or "self-driving" cars.
(2) Knowing this, it is irresponsible to put these vehicles on the road, or for people with decision-making power to allow them on the road, until this new form of risk is understood and accepted by the community. Otherwise you're forcing a community to suffer a new form of risk without consent and without even a mitigation plan, let alone a plan to compensate or otherwise make them whole for their new form of loss.
(3) Incidentally, quantifying aspects of life and then using the numbers, instead of human judgment, to make decisions was a favorite mission of eugenicists, who stridently pushed statistics as the "right" way to reason to further their eugenic causes. Long before Zuckerberg's Hot or Not experiment turned into Facebook, eugenicist Francis Galton was creeping around the neighborhoods of London with a clicker hidden in his pocket counting the "attractive" women in each, to identify "good" and "bad" breeding and inform decisions about who was "deserving" of a good life and who was not. Old habits die hard.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

@[email protected] @[email protected] I didn’t fully follow the connection between the social media post and multi-armed bandit problems. Is the idea that a user has k options about what to view, chooses one, and experiences some kind of payoff from the choice? If so I’m not sure the situation is well-modeled by bandits, since the typical social media user is presented with a smallish set of options chosen for them by an algorithm, with each user choice resulting in the algorithm presenting them with another smallish set of options that might be of a different size and comprise different options. That kind of situation might be better modeled as an extensive form game of user against “the algorithm”, with a finite but variable set of choices for the player at each ply. It’s common in a turn-taking game for both the player’s and the opponent’s choices to affect the choices available to the player at the next ply, which is why this feels like a better model to me than k-armed bandits or the POMDP-type setups usually explored in RL.
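To make the framing concrete, here's a toy sketch of the turn-taking loop I have in mind. Everything here is invented for illustration (the category names, the menu rule, the user model); the point is only that the menu at each ply depends on prior choices, which is what a fixed k-armed bandit can't express.

```python
import random

random.seed(1)

# Invented content categories; not anything a real platform uses.
CATEGORIES = ["news", "scifi", "sports", "music", "cooking"]

def algorithm_menu(history):
    # "The algorithm" presents a variable-size menu each ply. The menu's
    # size and contents depend on what the user chose before, which is
    # what makes this extensive-form rather than a fixed k-armed bandit.
    k = random.randint(2, 4)
    if history:
        # Bias the menu toward the user's most recent pick.
        rest = [c for c in CATEGORIES if c != history[-1]]
        return [history[-1]] + random.sample(rest, k - 1)
    return random.sample(CATEGORIES, k)

def user_choice(menu, true_pref="music"):
    # Toy user model: take the preferred category when offered, else pick
    # at random from whatever the algorithm served up.
    return true_pref if true_pref in menu else random.choice(menu)

history = []
for ply in range(5):
    menu = algorithm_menu(history)
    pick = user_choice(menu)
    history.append(pick)
    print(ply, menu, "->", pick)
```

Notice that both players shape the game tree: the user's pick feeds the next menu, and the menu constrains the next pick, exactly the ply-by-ply coupling the bandit formulation throws away.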

If what the algorithm does can be approximated that way (as a reward-maximizing player in a multi-ply game that chooses what category of content to show a user at each turn), then you can get partway towards understanding how it works functionally by understanding how the tradeoffs between monetization, data gathering, and maximizing surprisal (learning) in its reward function are struck. I suspect that splitting the bins/categories more and more finely sometimes makes the tradeoffs look better, which might explain why social media companies tend to do this (if you have one bin of stuff with red and blue objects, and people choose randomly from it, they’ll be less happy on average than if you have a bin of red objects and a bin of blue objects and are able to direct red-preferring and blue-preferring users to the appropriate bin better than a coin flip would).
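A back-of-the-envelope simulation of that red/blue intuition, with made-up numbers (the routing accuracy is an assumed parameter, not anything measured):

```python
import random

random.seed(0)

# Half the users prefer red, half prefer blue; a user is "happy" when
# they're served an item of their preferred color.

def mixed_bin(n_users=100_000, red_frac=0.5):
    # One mixed bin: items drawn at random regardless of who's choosing.
    happy = 0
    for _ in range(n_users):
        pref = random.choice(["red", "blue"])
        item = "red" if random.random() < red_frac else "blue"
        happy += (pref == item)
    return happy / n_users

def split_bins(n_users=100_000, routing_acc=0.8):
    # Two single-color bins plus a router that sends each user to the
    # bin matching their preference with probability routing_acc.
    happy = 0
    for _ in range(n_users):
        happy += (random.random() < routing_acc)
    return happy / n_users

print(mixed_bin())   # ~0.5: a coin flip
print(split_bins())  # ~0.8: splitting wins whenever routing beats 0.5
```

So any router even slightly better than chance makes finer bins look better on average, which is at least consistent with the tendency to split categories ever more finely.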

People are not static utility maximizers, but these types of algorithms assume we are. So I think they tend to get stuck in corners both because of how they strike tradeoffs (you get manosphere content because that’s what’s most monetizable) and because people’s preferences aren’t expressed consistently in their actions and change through time (you keep getting shown scifi content because you looked at a few scifi videos in a row a while ago when you were feeling nostalgic, but you don’t usually prefer it).

That’s what I have for now. Sorry for length.

[–] [email protected] 5 points 4 weeks ago (3 children)

@[email protected] @[email protected] I have a lot to say on this subject but unfortunately do not have the time right now to write out anything worth reading! I will return perhaps tomorrow.

[–] [email protected] 1 points 4 weeks ago

@[email protected] Thank you. It sounds like a reasonable way to go. In my case I'm using Sublime Text, and it picks up which JVM to use from the user environment.

[–] [email protected] 2 points 4 weeks ago

@[email protected] The purpose of my post is to be a post.

[–] [email protected] 2 points 4 weeks ago

@[email protected] Any sufficiently complex linked list is indistinguishable from a toolchain.


Nowadays programming in a programming language I don't use daily seems to always require an upgrade cascade of editors, tools, plugins, dependencies, libraries, my DNA, ??? I put some effort into keeping my environment static but all it takes is one autoupgrading thing I missed to kick off one of these cascades, and it feels like whack-a-mole trying to find and lock down every possible cause. This time it looks like a newer version of Scala Metals might have stopped supporting Java 11 and somehow got updated without my knowledge (maybe? I'm guessing).

P.S. This is not an invitation to post critiques about any of these technologies or recommendations about what I should be doing instead.

#scala #dev #tech #SoftwareDevelopment #coding #programming

[–] [email protected] 4 points 4 months ago (1 children)

@[email protected] @[email protected] @[email protected] @[email protected] This one generated a mention in my fediverse server, for what it's worth.

[–] [email protected] 3 points 5 months ago (1 children)

@[email protected] @[email protected] Though I'm probably a bit older than you both, Occupy was also the moment where I first engaged in a protest for a sustained period of time and then continued to do so after. There was a lot of incoherence around Occupy that took me years to get my head around. But I've come to believe a totally horizontal, leaderless movement organized through social media platforms is dead on arrival. I thought I'd throw a few observations into the mix if that's OK.

It was pointed out above that such a thing is like shouting "NO!" at the government; I fully agree with that. Bevins argues (at least in interviews; haven't had a chance to read his book yet) that these spontaneous NOs can be dangerous: if they go far enough they can create a power vacuum that the most prepared (read: organized and ruthless) forces quickly move to fill. This is the real story of what happened in several countries during the Arab Spring, by Bevins's read (I take it). So while folks are excitedly believing they're participating in the birth of a new form of democracy, what they're really doing is inflicting a dark Shock Doctrine on themselves. I have to confess that I, too, did not see this at the time.

There must be some kind of theory of change, pre-organizing to build power, and a clear-eyed recognition of the situation to avoid these DOA movements and have some hope of bringing lasting, meaningful change for a lot of people. Much of the US left (such as it is) seems allergic to looking reality squarely in the face. I'd almost go so far as to say there should be no attempts at lefty mass protest until such power is built, such theory is developed, and a widespread, reality-grounded recognition of our situation exists, exactly because actors with very different goals from ours are better positioned to take advantage of the chaos mass protests generate.

Personally I'd refer to (what used to be) social media as "surveillance media". The form the modern US state takes is public-private partnership, with many state functions dispatched by private corporations and actors. Though Musk clearly has his own aims, he is almost surely playing a state role with Twitter not too different from the one he plays through SpaceX. So, though social media's always been corporate mediated, I'd add that recognizing the role of public-private partnerships in the modern US context leads to the probability that Twitter has become something else. In that view, the finances are almost irrelevant, and LOLing about this or that number going down or this or that many advertisers leaving the platform amounts to copium. If Twitter really is performing useful functions for the state then it will continue to exist no matter how much money it "loses"; failing to perform those functions is what would put it in jeopardy, not revenue figures.

[–] [email protected] 2 points 7 months ago

@[email protected] That'd be such a great thing to see in data. I was alluding more to the theory of voting systems, like rational choice theory. The setup in those is something like you have a set of people, and there's a choice they need to make collectively. Each person can have a different preference about what the choice should be. Arrow's impossibility theorem states, roughly, that with three or more options, no ranked voting system can aggregate everyone's preferences into a final choice while satisfying a basic set of fairness criteria all at once; informally, no matter what system you use, some people's preferences will end up violated (they won't like the choice).

What I was imagining was, in the same setup, everybody modifies their preferences based on what they think the other people's preferences are. So now the choice isn't being made based on their preferences, it's being made on the modification of their preferences. Arrow's impossibility theorem still holds, so no matter how the final choice is made some people will still be unhappy with it. But, I think it's possible that even more people will be unhappy than if they'd just stuck with their original preferences. Or, maybe the people who'd already have been unhappy are even more unhappy. I'd have to actually sit down and work it out though, which I haven't.
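Here's the kind of toy case I mean, with entirely made-up numbers: five voters, three candidates, plain plurality voting, and voters who modify their ballots based on a belief about how everyone else will vote.

```python
from collections import Counter

# Hypothetical 5-voter plurality election with candidates A, B, C.
# True first choices: three voters prefer B, two prefer A.
true_prefs = ["B", "B", "B", "A", "A"]

# Sincere voting: everyone votes their true first choice.
sincere = Counter(true_prefs)
winner_sincere = sincere.most_common(1)[0][0]  # B wins 3-2

# "Electoral math" voting: the B-preferrers believe B can't win and
# defect to their (assumed) second choices, here A, A, and C.
strategic_ballots = ["A", "A", "C", "A", "A"]
strategic = Counter(strategic_ballots)
winner_strategic = strategic.most_common(1)[0][0]  # A wins 4-1

def unhappy(winner):
    # A voter is "unhappy" if the winner isn't their true first choice.
    return sum(1 for pref in true_prefs if pref != winner)

print(unhappy(winner_sincere))    # 2 voters unhappy
print(unhappy(winner_strategic))  # 3 voters unhappy
```

In this contrived case the modified preferences elect A even though a strict majority sincerely preferred B, and the number of unhappy voters goes up from two to three. Whether that generalizes is exactly the math I'd have to sit down and work out.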

The example of your dad talking himself out of voting for Buttigieg because he thinks other people won't vote for Buttigieg is exactly the kind of case I was thinking of! Except I was thinking more theoretically than data-wise. It'd be great to see data on it too, for sure.

[–] [email protected] 6 points 7 months ago (3 children)

@[email protected] @[email protected] I swear one day I'm going to sit down and do the actual math to prove that voting systems break when a majority of voters factor their perception of "electoral math" into their choices, even when those perceptions are accurate. Arrow's impossibility theorem is already pretty discouraging without all this meta stuff.

[–] [email protected] 16 points 11 months ago (1 children)

@[email protected] @[email protected] Since most people spend most of their best hours at the workplace, what this person is really saying is that there shouldn't be any politics at all. I.e., this is a confession: "I am an authoritarian".
