this post was submitted on 22 Jan 2025
65 points (98.5% liked)

AI

4302 readers
27 users here now

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

founded 3 years ago
 
[–] [email protected] 16 points 1 week ago (5 children)

Meta, OpenAI, Claude, and Google Gemini have a lot of explaining to do to justify overpaying for their models now that there's a literal open-source model that can do the basics.

[–] [email protected] 4 points 1 week ago

I'm testing vscode+continue+ollama+qwen2.5-coder right now. With a modest GPU it's already OK.

[–] [email protected] 4 points 1 week ago (2 children)

You still need expensive hardware to run it. Unless the myceliumwebserver project gets off the ground.

[–] [email protected] 5 points 1 week ago* (last edited 1 week ago) (1 children)

I'm testing the 14B Qwen DeepSeek R1 distill through ollama and it's impressive. I think I could switch most of my current ChatGPT usage over to it (not a lot, I should admit). Hardware is an AMD 7950X3D with an Nvidia 3070 Ti. Not the cheapest hardware, but not the most expensive either. It's of course not as good as the full model on deepseek.com, but I can run it truly locally, right now.
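For anyone wanting to script against a setup like this, here's a minimal sketch that talks to Ollama's default local HTTP API on port 11434. It assumes the `deepseek-r1:14b` tag has already been pulled with `ollama pull`; the prompt and function names are illustrative, not from the thread:

```python
import json


def build_generate_request(prompt: str, model: str = "deepseek-r1:14b") -> dict:
    """Build a payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }


def generate(prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return the reply."""
    # Imported lazily so the payload builder stays testable without a server.
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_generate_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With the server running, `generate("Why is the sky blue?")` returns the model's completion as a string.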

[–] [email protected] 2 points 1 week ago (2 children)

How much VRAM does your Ti pack? Is that the standard 8 GB GDDR6?

I ask because I'm surprised and impressed that a 14B model runs smoothly.

Thanks for the insights!

[–] [email protected] 2 points 1 day ago

I don't even have a GPU and the 14B model runs at an acceptable speed. But yes, faster and bigger would be nice... or knowing how to distill the biggest one, since I only use it for something very specific.

[–] [email protected] 2 points 6 days ago (1 children)

Sorry, it should have said 3080 Ti, which has 12 GB of VRAM. Also, I believe the model is Q4.
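The numbers roughly check out. A back-of-the-envelope sketch (my own arithmetic, treating Q4 as a flat 4 bits per weight and ignoring quantization overhead, KV cache, and activations) of why a Q4 14B model fits in 12 GB:

```python
def quantized_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a quantized model.

    Weights only; the KV cache and activations add more on top.
    """
    return n_params * bits_per_weight / 8 / 1e9


# 14B parameters at 4 bits/weight -> about 7 GB of weights,
# leaving headroom in 12 GB of VRAM; at fp16 it would not fit.
q4_gb = quantized_weight_gb(14e9, 4)     # ≈ 7.0 GB
fp16_gb = quantized_weight_gb(14e9, 16)  # ≈ 28.0 GB
```

In practice GGUF Q4 variants average slightly more than 4 bits per weight, so real files land a bit above this estimate, but the conclusion holds.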

[–] [email protected] 1 points 6 days ago

No worries, thank you!

[–] [email protected] 2 points 1 week ago

Correct. But which is more expensive: a single local compute instance, or a cloud-based, credit-eating SaaS AI that doesn't produce significantly better results?

[–] [email protected] 3 points 1 week ago

Yes, try GPT4All if you want to test it for yourself without coding know-how.

[–] howrar 1 points 1 week ago

The same could be said for when Meta "open sourced" their models. Someone has to do the training, or else these models wouldn't exist in the first place.

[–] [email protected] 1 points 1 week ago

The cost is a function of running an LLM at scale. You can run small models on consumer hardware, but the real contenders are using massive amounts of memory and compute on GPU arrays (plus electricity and water for cooling).

OpenAI is reportedly losing money on its $200/mo ChatGPT Pro subscription plan.