Interns do, but should not, get the level of write access that lets them make a durable change affecting all customers. Deadlock a server or even wipe SQL tables: that's an outage. Break a customer's configuration, send the wrong client's paperwork: again, a small-scale problem you can deal with. Interns don't change company policy.
I think it's a more foundational architecture question: why push builds to all customers at once without gating on SOMETHING that positively confirms the exact OTA update package has been validated? The absolute simplest thing I can think of is pushing to 1 random car and waiting for the post-install self tests to pass before pushing to everyone else. Maybe there's actually no release automation?? But then you make it safe a different way. It's just defensive coding practice. I don't even have a CS degree, but I learned on the job that something always breaks, so you generally account for the expectation that everything will fail by building a fail-safe, just so the failure isn't spectacular. Nothing fancy, just enough mitigation to keep the fuck-up from eating into your weekend if it happens on a Friday.
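To make the "1 random car first" idea concrete, here's a minimal sketch of that canary gate. Everything here is hypothetical: `install_update` and `run_self_tests` are stand-in stubs for whatever fleet API actually exists, not a real OTA framework.

```python
import random

# Record of (car, package) installs, standing in for real fleet state.
installed = set()

def install_update(car, package):
    # Stub: a real system would push the OTA package to the vehicle.
    installed.add((car, package))

def run_self_tests(car):
    # Stub: a real system would wait for the car's post-install
    # self tests to report back. Pretend they pass.
    return True

def rollout(fleet, package):
    """Push to one random canary car; fan out to the rest of the
    fleet only if the canary's post-install self tests pass."""
    canary = random.choice(fleet)
    install_update(canary, package)
    if not run_self_tests(canary):
        # Fail safe: halt here so a bad build never reaches the fleet.
        raise RuntimeError(f"canary {canary} failed self tests; rollout halted")
    for car in fleet:
        if car != canary:
            install_update(car, package)
    return canary

rollout(["car-1", "car-2", "car-3"], "v2.1.0")
```

The point isn't this exact code, it's that the gate sits between "one install" and "every install", so a Friday push that fails self tests stops at one car.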
You disagree with my statement, yet nothing in your reply actually contradicts it, apart from your open acceptance of flawed studies?
My question then is this: what do they teach kids so they can spot flaws, and what do they teach them as the method for deciding who is reputable? Bayes' theorem? How to control for multiple variables? I don't actually know whether they go into this or just tell kids to trust an authority.
Flawed studies have done all kinds of harm over the years before being retracted: linking vaccines to autism, for one.