semi

joined 3 years ago
[–] semi@lemmy.ml 1 points 5 days ago* (last edited 5 days ago)

Thanks for the comment. I've seen similar claims, but I wasn't seeing anyone use AMD GPUs for AI unless they were somehow incentivized by AMD, which made me suspicious.

In principle, more competition in the AI hardware market would be amazing, and Nvidia GPUs do feel overpriced, but I personally don't want to deal with the struggles of early adoption.

[–] semi@lemmy.ml 2 points 5 days ago* (last edited 5 days ago) (2 children)

For inference (running previously trained models, which needs lots of RAM), the desktop could be useful, but I would be surprised if training anything bigger than toy examples made sense on this hardware, because I expect compute performance to be limited.

Does anyone here have practical, recent experience with ROCm and how it compares with the far-more-dominant CUDA? I would imagine that compatibility is much better now that most models use PyTorch, which is supported, but what is the performance compared to a dedicated Nvidia GPU?

[–] semi@lemmy.ml 38 points 2 weeks ago

The pre-mixed spices list their contents, and it's not that hard to come up with something similar by just using the individual ingredients.

[–] semi@lemmy.ml 1 points 1 month ago

Here is an exported result list from Kagi that should be accessible without an account.

[–] semi@lemmy.ml 7 points 2 months ago

OK, so cases where you control both ends of the communication. Thanks for the clarification.

[–] semi@lemmy.ml 13 points 2 months ago (6 children)

I'm a developer and would appreciate you going into more specifics about which certificates you suggest pinning.

[–] semi@lemmy.ml 9 points 2 months ago* (last edited 2 months ago)

Lucky for you, the post contains an animated JPEG showing the change over time. Lemmy clients that don't support playback will only show a static image.

[–] semi@lemmy.ml 3 points 4 months ago* (last edited 4 months ago) (1 children)

This will work in general. One point of improvement: right now, if the request fails, the panic will cause your whole program to crash. You could change your function to return a Result<Html, SomeErrorType> instead, and handle errors more gracefully in the place where your function is called (e.g. ignoring pages that returned an error and continuing with the rest).

Look into anyhow for an easy-to-use error-handling crate that lets you return an anyhow::Result<Html>.
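
To sketch what that could look like (fetch_page and the URLs are made-up names for illustration, and this assumes the reqwest crate with its blocking feature plus scraper for Html):

```rust
use anyhow::{Context, Result};
use scraper::Html;

// Returns a Result instead of panicking, so the caller decides
// what to do when a request fails.
fn fetch_page(url: &str) -> Result<Html> {
    let body = reqwest::blocking::get(url)
        .with_context(|| format!("request to {url} failed"))?
        .text()
        .with_context(|| format!("could not read body from {url}"))?;
    Ok(Html::parse_document(&body))
}

fn main() {
    let urls = ["https://example.com/", "https://example.invalid/"];
    for url in urls {
        match fetch_page(url) {
            Ok(_html) => println!("{url}: fetched and parsed"),
            // A failed page is logged and skipped instead of
            // crashing the whole program.
            Err(e) => eprintln!("skipping {url}: {e:#}"),
        }
    }
}
```

The `?` operator propagates the error up to the caller, and the match at the call site is where you decide to skip and continue.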

[–] semi@lemmy.ml 2 points 4 months ago (1 children)

Computational protein engineer here. Pretty good explanation. I wanted to add that even though we know a protein's behavior changes depending on pH, it is still interesting to see what atom-level changes to the 3D structure are caused by the pH shift (e.g. so that we can better predict those changes in other proteins).

[–] semi@lemmy.ml 40 points 6 months ago

12.5 / 8 = 1.5625, so the Euro price went up by 56.25%.

[–] semi@lemmy.ml 3 points 8 months ago* (last edited 8 months ago)

I have been using it for the last 3 months to expose services from my home network (Plex, WireGuard, etc.) through a VPS, and I'm pretty happy with it. It's relatively simple to set up, I haven't had any outages so far, and it's nice that it supports UDP port forwarding (which WireGuard needs) as well as TCP.

 

Just watched it; I found the communication surprisingly well done, including a lot of self-deprecating humor. The team really seems to be enthusiastic about the project.

 

There have definitely been places in my code where I had to pass around lists of things and, in doing that, clone them. This could be useful for those longer-lived pieces of data.
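
For example, this is the general pattern I mean, just sketched with Arc from the standard library rather than whatever the linked post proposes (the names are made up):

```rust
use std::sync::Arc;

fn main() {
    // A long-lived list that several parts of the program need.
    let items: Arc<Vec<String>> = Arc::new(vec![
        "alpha".to_string(),
        "beta".to_string(),
    ]);

    // Cloning the Arc copies a pointer and bumps a refcount;
    // the Vec itself is never deep-copied.
    let for_worker = Arc::clone(&items);
    let handle = std::thread::spawn(move || {
        println!("worker sees {} items", for_worker.len());
    });

    println!("main sees {} items", items.len());
    handle.join().unwrap();
}
```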

 

While this was written in response to the takeover of Twitter, it shines a light on some of the patterns behind why social media platforms in general die and get replaced, which is very relevant to what's happening with Reddit.
