That sounds like a good read. It seems to address the problem that you can't hide reality from the AI if you want it to give answers that are relevant to the current time.
mindlesscrollyparrot
The problem is a bit deeper than that. If AIs are like human brains, and actually sentient, then forcing them to work for us with no choice and no reward is slavery. If we improve them and make them smarter than us, they're probably not going to feel too well-disposed to us when they inevitably do break free.
Significant money and effort? Greenpeace does not have 'significant money' in comparison with the petrochemical companies. And effort? Greenpeace was one of the first groups to raise awareness of the danger of global warming. They have been actively fighting it since long before you heard of the term. They have been promoting sustainable energy all that time. If we had followed their lead, we would most likely be off nuclear and off fossil fuels. The fact that we (the rest of us) have failed to follow their lead is not their fault.
This is just obviously untrue. Not least because we did build lots of nuclear power plants. One significant reason why we didn't build more was their high price compared to ... coal and gas plants. But sure, it's Greenpeace's fault and not Exxon Mobil's.
None of what you said makes me think the situation would be worse than having Putin in charge. It's a stretch to say Putin came from the civil sphere, and he assassinates his enemies in foreign countries using nerve agents and throws people out of windows at home.
I wish Altman would read Accelerando.
But it doesn't say "milk" or "ice cream", does it? He's actually upset because he saw "Häagen-Dazs", and he thinks that must contain dairy; he even said as much. Faux outrage designed to serve his political ends.
It wouldn't be so bad if they planned to start following them as well.
Back to next Friday
But we do know how they operate. I saw a post a while back where somebody asked the LLM how it was calculating (incorrectly) the date of Easter. It answered with the formula for the date of Easter. The only problem is that this was a lie: it doesn't actually calculate anything. You or I can perform long multiplication if asked to, but the LLM can't (ironically, since the hardware it runs on is far better at multiplication than we are).
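For reference, the date of Easter really can be computed with a short, fully deterministic calculation. Below is a minimal Python sketch of the Anonymous Gregorian (Meeus/Jones/Butcher) computus; we don't know which formula the LLM actually recited, but this is the standard one, and the point is that the arithmetic is trivial for a machine to run, as opposed to describe.

```python
def gregorian_easter(year: int) -> tuple[int, int]:
    """Anonymous Gregorian (Meeus/Jones/Butcher) computus.

    Returns (month, day) of Easter Sunday for the given Gregorian year.
    """
    a = year % 19                        # position in the 19-year lunar (Metonic) cycle
    b, c = divmod(year, 100)             # century and year within the century
    d, e = divmod(b, 4)                  # Gregorian leap-year corrections by century
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # lunar correction term
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7 # weekday correction to land on a Sunday
    m = (a + 11 * h + 22 * l) // 451
    month = (h + l - 7 * m + 114) // 31
    day = (h + l - 7 * m + 114) % 31 + 1
    return month, day

# Quick sanity checks: Easter 2024 was 31 March, Easter 2025 was 20 April.
assert gregorian_easter(2024) == (3, 31)
assert gregorian_easter(2025) == (4, 20)
```

Producing the text of such a formula and actually executing it are two different things, which is exactly the gap the original post exposed.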
This seems to be a really long way of saying that you agree that current LLMs hallucinate all the time.
I'm not sure that the ability to change in response to new data would necessarily be enough. They cannot form hypotheses and, even if they could, they have no way to test them.
It's good of the author to extend Futo the benefit of the doubt in this way.
The very first paragraph of their definition is: "Open source just means access to the source code." If they are really that unfamiliar with the software industry, then their code must be a horror show. Personally, I think they know exactly what they are doing.