jacksilver

joined 2 years ago
[–] [email protected] 2 points 1 day ago

I was going to mention this. I started watching the old Night Court when the new one started airing and was blown away by how well they handled that episode given the time period.

[–] [email protected] 1 points 2 days ago (1 children)

True but if I own the .exe or physical disk, it's going to be a lot harder to stop me playing the game than if I'm accessing it through a platform.

[–] [email protected] 1 points 2 days ago (3 children)

Yeah, that's the point the person above and I were making.

[–] [email protected] 3 points 2 days ago (3 children)

The UK went through industrialization leading to its empire, and the US was the industrial power during its ascent. Same thing with Japan before WWII.

Many imperialistic powers seem to go through big industrial growth before expansion.

[–] [email protected] 6 points 3 days ago

Yeah, I don't get the downvotes, this is literally showing an example where the model is falling flat on its face.

I think much like the model, no one read the actual prompt and response.

[–] [email protected] 2 points 3 days ago (1 children)

I'll have to check this out.

Also, can't help but callout another wholesome Gator game - https://store.steampowered.com/app/1586800/Lil_Gator_Game/

[–] [email protected] 3 points 3 days ago* (last edited 3 days ago)

I'm not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago - https://en.m.wikipedia.org/wiki/GPT-4.

That being said, the benchmarks are comparing it against gpt4o as an LLM, so maybe it's a valid argument for the LLM capabilities.

However, I think a lot of the more recent models are pursuing architectures with the ability to act on their own, like Claude's computer use - https://docs.anthropic.com/en/docs/build-with-claude/computer-use, which DeepSeek R1 is not attempting.

Edit: and I think the real money will be in the more complex models focused on workflow automation.

[–] [email protected] 2 points 3 days ago (4 children)

My main point is that gpt4o and the other models it's being compared to are multimodal; R1 is only an LLM from what I can find.

Something trained on audio/pictures/videos/text is probably going to cost more than just text.

But maybe I'm missing something.

[–] [email protected] 2 points 3 days ago (1 children)

Everything I've seen from looking into it seems to imply it's on par in training cost and performance with other (LLM-only) models.

I feel like I'm missing something here or that the market is "correcting" for other reasons.

[–] [email protected] 26 points 3 days ago (7 children)

My understanding is it's just an LLM (not multimodal) and the train time/cost looks the same for most of these.

I feel like the world's gone crazy, but OpenAI (and others) are pursuing more complex model designs with multimodality. Those are going to be more expensive due to image/video/audio processing, and unless I'm missing something, that would probably account for the cost difference between current and previous iterations.

[–] [email protected] 1 points 3 days ago

My point was that a Mixture of Experts model could suffer from generalization issues. Although, reading more, I'm not sure whether it's the newer R1 model that has the MoE element.

[–] [email protected] 5 points 3 days ago

You are right - https://www.pcgamingwiki.com/wiki/The_Big_List_of_DRM-Free_Games_on_Steam.

My main argument, though, was that it's not like your Steam library is yours without restrictions. You're agreeing to Steam's terms of service, and there are lots of ways they can prevent you from playing (most) games you "own".

 

Not sure if it's a perfect fit, but it's a comedy western RPG with a bit of a supernatural element. If you haven't already, you should check it out!
