I don't know about you, but when I watch a video I'm not there to watch an ad.
Also don't forget about the bad companies and scams (example: Established Titles).
Yep, hoping to fire it up on Jerboa soon (I really only use Lemmy on mobile).
Really unfortunate that there are people who would protest open source. I need to read some wholesome cat stories or something to restore my faith in humanity.
Knowledge level: Enthusiastic spectator. I don't make or finetune LLMs, but I do follow AI news, try out local LLMs, and use things like GitHub Copilot and ChatGPT.
Question: Is it better to use Code Llama 34B or Llama 2 13B for a non-coding-related task?
Context: I'm able to run either model locally, but I can't run the larger 70B model. So I was wondering if running the 34B Code Llama would be better since it's larger. I heard that models with better coding abilities are also better at other types of tasks, and that they're better with logic (I don't know if this is true, I just heard it somewhere).
Are the Llama 2 models Apache 2.0 compatible? I think they use a custom license with some restrictions, could be totally wrong though.
I appreciate your sacrifice!
Anything based on Llama 2, tbh. It's fast enough and logical enough to handle the kinds of programming-related tasks I want an LLM for (writing boilerplate code, generating placeholder data, simple refactoring). With the release of the Vicuna and Code Llama models, things are getting even better.
It's a cool thing you've made, but where's the joke?
That applies to a lot of open source apps.
Even though I don't agree with some of the points, I would still hope to see discussion rather than unexplained downvotes.
Nice summary
Here are some ideas:
Comparisons/reviews of different AI models. (Example: Llama 2 is better than Llama 1 because x)
Tutorials on how to apply AI. (Example: Making a song-shuffling system with an LLM and music metadata)
I've seen you doing a lot of stuff on here, so I also wanted to say: Don't overwork yourself, and I appreciate the effort you already put in.