All repositories related to emulation and Nintendo, some of which I backed up on a self-hosted Forgejo instance.
Also, everything you use that doesn't have more than 2 or 3 maintainers.
Afaik, LGPL means that the library has to remain dynamically linked. That's it. No static linking and no embedding (i.e. hardcoding) is allowed, unless the outer project is also under an LGPL-compatible license. (Strictly speaking, static linking is possible even then if you also give users what they need to relink against a modified version of the library, but dynamic linking is the simple case.)
So, no, they wouldn't be legally allowed to steal your source by hardcoding it, if that's what you are worried about.
The issue is with code and resources that cannot be dynamically linked. I called them "glue code": the stuff developers need in order to use your library. It's not directly your library, but you'll most likely be shipping it alongside it. You'll need a different license for those resources, maybe MIT or even a public-domain dedication such as CC0.
EDIT: I noticed you mentioned the Steamworks SDK in another comment. I know Steam provides an optional DRM solution that wraps games in their own proprietary system. That might be forbidden by the LGPL, I'm not sure. But linking an LGPL library into the same game that links to the proprietary Steamworks SDK shouldn't be a problem, as long as the linking is dynamic and not static.
Why not LGPL the Rust code, and CC0 the glue code?
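For the Rust side, keeping it dynamically linked is mostly a matter of the crate type. A minimal sketch (the crate name and metadata are placeholders, and whether a C-ABI `cdylib` is the right shape depends on your API):

```toml
# Hypothetical Cargo.toml for the LGPL'd Rust library ("mylib" is a placeholder)
[package]
name = "mylib"
version = "0.1.0"
edition = "2021"

[lib]
# "cdylib" produces a shared library (.so / .dll / .dylib) that games can
# link against dynamically, which is the LGPL-friendly setup described above.
# "staticlib" would instead produce an archive that gets embedded into the
# consumer's binary, i.e. the static-linking case.
crate-type = ["cdylib"]
```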
If your question is about the legal difference: this fork is licensed under GPLv2 (free/libre open-source software), while the OG is proprietary (albeit source-available).
This means that everyone is free to use, modify, and redistribute this version, and nobody can ever take those freedoms away, while the OG doesn't grant them.
The original authors might one day decide to halt development and pull the source code, and/or start "enshittifying" Aseprite, but LibreSprite will forever remain free and available to everyone.
For me it was the same drive. I remember I had to generate a special file to convince VirtualBox to use the physical partition as if it were part of a different drive. I don't remember the details; quite hacky perhaps, but it worked.
Iirc I had a Windows 7 (maybe 8 or 10) Home OEM license, original (not cracked), but it still worked. Perhaps if I had kept using it for long periods in the VM it would have started complaining? Anyway, I booted it on bare metal from time to time, so maybe that's why it kept working.
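If anyone wants to try the same thing: as far as I can reconstruct it, the "special file" is a raw-disk VMDK created with VirtualBox's own tool. Something along these lines, assuming the Windows partition is partition 2 on /dev/sda (the device, partition number, and file path are just examples, and you need permission to access the raw disk):

```
# Create a VMDK descriptor that maps the physical disk into VirtualBox,
# exposing only partition 2 to the VM; then attach win-raw.vmdk to the VM.
VBoxManage internalcommands createrawvmdk -filename ~/win-raw.vmdk -rawdisk /dev/sda -partitions 2
```

After that, the VM boots the existing Windows partition instead of a virtual disk image.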
That would definitely be a technical challenge, but it's absolutely possible.
I used to do dual-boot Windows + Linux and I could run the Linux installation from a VM in Windows as well as the Windows installation from a VM in Linux.
When rebooting between bare metal and the VM, Windows would always spend a few minutes "doing things" before continuing to boot, but it worked.
Linux would not even fret. It would just boot normally without any complaints.
I don't remember exactly which distro I had at the time, but probably it was Linux Mint.
If you don't want proprietary drivers, the choice is quite straightforward: AMD. The official drivers are open source.
As for my experience, I've had absolutely no problems with AMD in the last few years, though I have to admit I've only ever used an iGPU, which has always been good enough for my needs.
I used to have problems with Nvidia's proprietary drivers, but that was at least a couple of years ago; things might have changed. I've never had issues with the free unofficial drivers, aside from worse performance.
I hope this doesn't ruin a magic moment, but... it seems this image might actually be incorrect. There's a more recent paper linked in the description of this video that analyses the data with a different approach and arrives at something that looks quite different.
https://www.youtube.com/watch?v=cdeee7tZ8QY
In any case, one way or the other, we now have a correct image of that black hole. It's just not this one.
Understood, thanks 👍
Mmh, okay, that makes sense. The multilingual aspect especially would be pretty important. As for the legality, we'll see how it goes. Do we even know if it's really possible to build a good model with only legally acquired data?
As for the censorship, as far as I know, with DeepSeek's models it's injected into the prompt after training is completed, so it shouldn't really be censored if you run it locally.
But yeah, you have raised good points. Thanks.
Never happened in my life. Personally, if something breaks, I just wait for it to get fixed.