funny how they never make errors in favour of anyone but themselves
i don't believe it's possible either. For example, the tree walker of the ast module takes the node passed to it, checks its type, gets its name, then looks for a method with that dynamically looked-up name in your implementation of the tree walker. If it finds one (the user might not have implemented a visit method for that type of node), it calls it and passes the node to it. All of this happens at runtime.
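for the curious, the dispatch in CPython's ast.NodeVisitor is essentially three lines (paraphrased from Lib/ast.py), and a minimal visitor looks like this:

```python
import ast

# ast.NodeVisitor.visit is essentially:
#     method = 'visit_' + node.__class__.__name__
#     visitor = getattr(self, method, self.generic_visit)
#     return visitor(node)
# i.e. the method name is built and resolved at runtime.

class FuncLister(ast.NodeVisitor):
    def visit_FunctionDef(self, node):
        # never called by name anywhere; found via getattr above
        print("function:", node.name)
        self.generic_visit(node)

FuncLister().visit(ast.parse("def foo(): pass\ndef bar(): pass"))
```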
so? it won't have any effect on china, because last i checked, us laws apply only in the us
sure, if you have enough memory to store a list of all guids.
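back-of-envelope, assuming standard 128-bit guids stored raw at 16 bytes each:

```python
guid_count = 2 ** 128              # every possible 128-bit guid
total_bytes = guid_count * 16      # 16 bytes each, zero overhead
print(f"{total_bytes:.2e} bytes")  # ~5.4e39 bytes, ~5e18 zettabytes
```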
an infinite loop detector detects when you're going round in circles. It can't detect when you're going down an infinitely deep acyclic graph, because that, by definition, has no loops to detect. The best it can do is have a threshold after which it gives up.
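a minimal sketch of the difference (node and .children are assumed names, not from any particular library):

```python
def walk(node, path=None, depth=0, max_depth=10_000):
    path = set() if path is None else path
    if id(node) in path:
        raise RuntimeError("cycle detected")       # provably going round in circles
    if depth > max_depth:
        raise RuntimeError("depth limit reached")  # can't prove it's infinite, just give up
    path.add(id(node))
    for child in node.children:
        walk(child, path, depth + 1, max_depth)
    path.discard(id(node))  # leaving this branch; a DAG may share nodes without looping
```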
I'm not seeing any reasoning; that was the point of my comment. That's why I said "supposed"
they also refer to "outputs that fit the learned probability distribution, but that I personally don't like/agree with" as "hallucinations". They also call "showing your working" reasoning. The llm space has redefined a lot of words; I see no problem with defining one myself. It's nondeterministic, true, but its purpose is to take input and compile it into weights that are then executed in some sort of runtime. I don't see myself as redefining the word. I'm just calling it what it actually is, imo, not what the ai companies want me to believe it is (edit: so that they can then, in turn, redefine what "open source" means)
it's just a different paradigm. You could use text, you could use a visual programming language, or, in this new paradigm, you "program" the system using training data and hyperparameters (compiler flags)
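a toy version of that paradigm, just for illustration: the "source" is the data plus the hyperparameters, and what comes out is only weights:

```python
data = [(x, 2 * x + 1) for x in range(10)]  # the training data: the "source code"
lr, epochs = 0.01, 500                      # the hyperparameters: the "compiler flags"

w, b = 0.0, 0.0                             # "compile" via gradient descent
for _ in range(epochs):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(w, b)  # ≈ 2 and 1: the "compiled binary" is just these two numbers
```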
so... with all the supposed reasoning stuff they can do, and the supposed "extrapolation of knowledge", they cannot figure out that a tail is part of a cat, and which part it is.
no, it's not. It's equivalent to me releasing obfuscated java bytecode, which, by this definition, is just data because it needs a runtime to execute, while keeping the java source code itself to myself.
Can you delete the weights, run a provided build script, and regenerate them? No? Then it's not open source.
that's why gen ai models are not "open source", ever. If they were, this group wouldn't have to "try", they could just run the build script.
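i.e. something like this hypothetical check (train.py, data/ and hyperparams.json are made-up names; no real release provides them):

```python
import os
import subprocess

os.remove("model.weights")              # delete the "binary"
subprocess.run(                         # re-run the "build" from the real source
    ["python", "train.py", "--data", "data/", "--config", "hyperparams.json"],
    check=True,
)
assert os.path.exists("model.weights")  # open source would mean this passes
```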
Of course, the training data and software are not available. The weights are just a binary blob. They're not the source, merely the "compiled binary"
what part of "they do not repeat" do you still not get? You can put them in a list, but you won't ever get a hit; it'd just be wasting memory