librecat

joined 2 years ago
[–] [email protected] 7 points 1 year ago (1 children)

Here are some ideas:

  • Comparisons/reviews of different AI models. (Example: llama2 is better than llama1 because x)

  • Tutorials on how to apply AI. (Example: making a song-shuffling system with an LLM and music metadata; see the rough sketch below.)
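
For the second idea, here's a minimal sketch of what such a tutorial could build. It assumes a locally running llama.cpp server (or anything else exposing an OpenAI-compatible chat endpoint); the endpoint URL, model name, and toy metadata are placeholders, not a definitive implementation:

```python
# Minimal sketch: let a local LLM order a play queue using song metadata.
# Assumes a llama.cpp (or similar) server exposing an OpenAI-compatible
# chat endpoint at the URL below; both URL and model name are placeholders.
import json
import random
import requests

LLM_URL = "http://localhost:8080/v1/chat/completions"  # placeholder endpoint

songs = [  # toy metadata; a real version would read tags from your library
    {"title": "Song A", "artist": "X", "genre": "ambient", "bpm": 70},
    {"title": "Song B", "artist": "Y", "genre": "metal",   "bpm": 160},
    {"title": "Song C", "artist": "Z", "genre": "ambient", "bpm": 75},
]

def llm_shuffle(songs):
    """Ask the model for a listening order; fall back to random.shuffle."""
    prompt = (
        "Order these songs so adjacent tracks flow well together. "
        "Reply with only the zero-based indices, comma separated.\n"
        + "\n".join(f"{i}: {json.dumps(s)}" for i, s in enumerate(songs))
    )
    try:
        resp = requests.post(
            LLM_URL,
            json={
                "model": "llama-2-13b-chat",  # whatever the server has loaded
                "messages": [{"role": "user", "content": prompt}],
                "temperature": 0.7,
            },
            timeout=60,
        )
        resp.raise_for_status()
        reply = resp.json()["choices"][0]["message"]["content"]
        order = [int(tok) for tok in reply.replace(" ", "").split(",")]
        if sorted(order) != list(range(len(songs))):
            raise ValueError("model did not return a valid permutation")
        return [songs[i] for i in order]
    except Exception:
        shuffled = songs[:]  # plain random shuffle as a fallback
        random.shuffle(shuffled)
        return shuffled

for song in llm_shuffle(songs):
    print(song["title"])
```

The fallback to a plain random shuffle means the queue still plays if the model is unreachable or returns something unparseable.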

I've seen you doing a lot of stuff on here, so I also wanted to say: Don't overwork yourself, and I appreciate the effort you already put in.

[–] [email protected] 10 points 1 year ago

I don't know about you, but when I watch a video I'm not there to watch an ad.

Also don't forget about the bad companies and scams (example: Established Titles).

[–] [email protected] 3 points 1 year ago

Yep, hoping to flair up on jerboa soon (I really only use Lemmy on mobile).

[–] [email protected] 11 points 1 year ago (1 children)

Really unfortunate that there are people who would protest open source. I need to read some wholesome cat stories or something to restore my faith in humanity.

[–] [email protected] 7 points 1 year ago (1 children)

Knowledge level: enthusiastic spectator. I don't make or fine-tune LLMs, but I do follow AI news, try out local LLMs, and use things like GitHub Copilot and ChatGPT.

Question: Is it better to use Code Llama 34B or Llama 2 13B for a non-coding-related task?

Context: I'm able to run either model locally, but I can't run the larger 70B model. So I was wondering if running the 34B Code Llama would be better, since it is larger. I've heard that models with better coding abilities are also better at other types of tasks and at logic (I don't know if this is true, I just heard it somewhere).

[–] [email protected] 2 points 1 year ago (6 children)

Are the Llama 2 models Apache 2.0 compatible? I think they use a custom license with some restrictions, but I could be totally wrong.

[–] [email protected] 7 points 1 year ago

I appreciate your sacrifice!

[–] [email protected] 2 points 1 year ago

Anything based on Llama 2, tbh. It's fast enough and logical enough to handle the kinds of programming-related tasks I want an LLM for (writing boilerplate code, generating placeholder data, simple refactoring). With the release of the Vicuna and Code Llama models, things are getting even better.

[–] [email protected] 19 points 1 year ago (7 children)

It's a cool thing you've made, but where's the joke?

[–] [email protected] 1 points 1 year ago (1 children)

That applies to a lot of open source apps.

[–] [email protected] 3 points 1 year ago (3 children)

Even though I don't agree with some of the points, I would still hope to see discussion rather than unexplained downvotes.
