
ell1e
How would such limited use fix the plagiarism? Here's a lawyer demo'ing the issue: https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567
This isn't legal advice. Check out the link and form your own opinion.
Some highlights from this talk: https://github.com/LemmyNet/lemmy-docs/issues/413#issuecomment-4105667974 Quote: "Obvious, this is a copyright infringement."
Sadly, it seems to be fairly common to have at least some AI slop code now. E.g. Lemmy itself appears to be planning to allow it too.
It's as if having slop won you some prize.
Ableism in regard to immigration is sadly very common.
Kate is a great minimal VS Code alternative. Sure, it has fewer features, but it covers the basics.
Relevant article: https://www.gnu.org/philosophy/you-the-problem-tpm2-solves.en.html
And if anybody thought TPM provides security: https://www.elevenforum.com/t/tpm-2-0-is-a-must-they-said-it-will-improve-windows-security-they-said.13222/ https://gist.github.com/osy/45e612345376a65c56d0678834535166 https://www.sophos.com/en-us/blog/serious-security-tpm-2-0-vulns-is-your-super-secure-data-at-risk https://www.covertswarm.com/post/how-secure-are-tpm-chips
Reader, you know what's likely most secure? FOSS code, peer-reviewed and regularly patched.
I don't get why one would trust security theater, a.k.a. TPM and Secure Boot.
That doesn't take into account the extensively researched plagiarism concerns. It's not just that LLMs produce low-quality slop; some of us think the GPL won't work if you can train LLMs on GPL code and then have them spit out GPL snippets un-GPL'ed.
Some people literally un-GPL entire projects via AI in one go. While that's the egregious version, any LLM use seems to risk a similar effect at a smaller scale.
This isn't only a legal question. At least if you think the GPL has societal and moral value.
Problem is, LLM code prediction will likely plagiarize too. Some argue "it's too short, I can't get sued", but even if that were universally true (I don't know, IANAL), that still leaves the ethics and morals of seemingly stealing some lines hook, line, and sinker, with every punctuation mark and intricacy, from GPL code bases, without attribution.
Some simply think that's bad for FOSS, notwithstanding the other ways LLMs seem to harm FOSS.
(And old-school "IntelliSense" is semantics-based and doesn't do that.)
There is a growing list of projects to collaborate with that reject LLM code: Asahi Linux, elementaryOS, Gentoo, GIMP, GoToSocial, Löve2D, Loupe, NetBSD, postmarketOS, Qemu, RedoxOS, Servo, stb libraries, Zig.
My opinion is that the data disagrees with you: 1. https://www.psu.edu/news/research/story/beyond-memorization-text-generators-may-plagiarize-beyond-copy-and-paste 2. https://dl.acm.org/doi/10.1145/3543507.3583199 3. https://www.sciencedirect.com/science/article/pii/S2949719123000213#b7 4. https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/ 5. Related high profile incident that is very telling: https://www.pcgamer.com/software/ai/microsoft-uses-plagiarized-ai-slop-flowchart-to-explain-how-github-works-removes-it-after-original-creator-calls-it-out-careless-blatantly-amateuristic-and-lacking-any-ambition-to-put-it-gently/
In the US at least, there’s clear legal precedent that LLM fabrications are not copyrightable.
I see many people doubt this says anything about the copyright status of the training data, as opposed to the copyright claims of the AI's user.
This isn't legal advice, I'm not a lawyer.
So is "function isEven()" a prompt with exact wording from an example, too?
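For context, this is roughly the kind of snippet the "it's too short" debate revolves around (a hypothetical illustration, not taken from any cited codebase). A function this short has essentially one idiomatic form, which is exactly why people disagree over whether a model emitting it verbatim from GPL training data still raises the attribution questions discussed above:

```javascript
// The "obvious" one-liner that countless codebases converge on
// independently — and that an LLM may also reproduce verbatim.
function isEven(n) {
  return n % 2 === 0;
}

console.log(isEven(4)); // true
console.log(isEven(7)); // false
```

Whether identical output here reflects copying or independent convergence is precisely what can't be told from the snippet alone.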