What local models are you using that are better? Not trying to argue, honest interest
Zeth0s
I find that recently the effort needed to get the "right" answer from GPT-4 is much greater than it used to be. That's my impression. In the end I find myself more often going back to Google, Stack Overflow, manuals, Medium...
I believe they distilled the model too much for performance, or the RLHF is really degrading the model's capabilities
Thanks, I'll try
How does one post from Mastodon to a Lemmy community?
Buy better pasta! I'd suggest Rummo or De Cecco; they are good and easy to find outside Italy
I solved the lag by changing instances; moving off lemmy.world helps a lot
What's the other half?
Because everything needs supervision, even the president of the United States is supervised by judges and voters.
AI will still need supervision in 5 years. Even a completely autonomous agent will need a supervisor.
Are they renaming the role? LLM supervisor?
Here it is, all for you https://open-assistant.io/
You also get a useless leader board to replace reddit karma!
It is anyway a legit initiative to help open-source LLMs
Because one loses their subscriptions by moving to a new server
Thanks! I tried Vicuna, but I didn't find it very good for programming. I will keep searching :)