The underlying research story is interesting, but the way it's written up actively undersells it.
The researchers based s1 on Qwen2.5, an open-source model from Alibaba Cloud.
Watch me create a racing car for less than $50. Step 1: start with a Mercedes F1 racer...
Aww come on. There's plenty to be mad at Zuckerberg about, but releasing Llama under a semi-permissive license was a massive gift to the world. It gave independent researchers access to a working LLM for the first time. For example, Deepseek got their start messing around with Llama derivatives back in the day (though, to be clear, their MIT-licensed V3 and R1 models are not Llama derivatives).
As for open training data, it's a good ideal, but I don't think it's a realistic possibility for any organization that wants to build a workable LLM. These things train on trillions of tokens spanning billions of documents, and no matter how hard you try to clean the data, there's definitely going to be something lawyers can find to sue you over. No organization is going to open themselves up to that liability. And if you gimp your data set, you get a dumb AI that nobody wants to use.