howrar

joined 2 years ago
[–] howrar 2 points 6 hours ago

So is it sort of like shooting yourself in the foot long term?

I'm not sure what you're referring to here. Masking or not masking? I would say that masking all the time would qualify as shooting yourself in the foot long term. It's a lot of wasted energy that could be spent doing something else. When you get sufficient time to turn off and relax, it really does feel like autism is a superpower.

[–] howrar 2 points 17 hours ago (2 children)

To avoid exhaustion and burnout

[–] howrar 2 points 17 hours ago (1 child)

Very interesting to see how these articles are written. All it took was two words to take it from an unbiased report to a biased one: "lipstick-wearing".

Does anyone know if there's a name for this technique?

[–] howrar 3 points 20 hours ago

Productivity is how fast I'm moving towards my goal. Its purpose is getting me there.

[–] howrar 1 point 22 hours ago

About three times per day across ~260 working days makes for ~800 times per year. Seems to be on the right order of magnitude to me.
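
A quick back-of-the-envelope check (the ~260 working days figure is my assumption, not from the thread):

```python
# Rough sanity check on the yearly estimate.
times_per_day = 3
working_days_per_year = 260  # assumption: ~52 weeks x 5 working days

print(times_per_day * working_days_per_year)  # 780, i.e. on the order of ~800
```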

[–] howrar 2 points 1 day ago

Easy enough to write. But reading and maintaining? That's the hard part.

[–] howrar 1 point 1 day ago

I find it amazing how little space corn syrup takes up relative to how much is produced. It's no wonder we use it in everything.

[–] howrar 1 point 1 day ago

It's the only time where it's relevant to the conversation, no? Why would you bring it up anywhere else?

[–] howrar 2 points 1 day ago

Milk first makes it possible to get the wrong ratio of cereal to milk because

  1. the cereal floats and you have no idea how much you put in there
  2. you can underestimate how much volume the cereal takes up and not leave enough room in your bowl

[–] howrar 2 points 1 day ago (2 children)

Ah, the age-old unpopularopinions dilemma. Do I upvote because I agree, or upvote because it is unpopular and I disagree?

[–] howrar 1 point 2 days ago

The community I'm currently subscribed to for this: [email protected]

[–] howrar 2 points 2 days ago (2 children)

I like the one(s) that bring in posts from Hacker News, since those have a high likelihood of being interesting, and I like seeing what the people of Lemmy think of them. Other than that, I don't think I've seen any others that add value to my Lemmy experience.

5
Factorio Learning Environment (jackhopkins.github.io)
5
Open Sourcing π₀ (www.physicalintelligence.company)
 

https://bsky.app/profile/natolambert.bsky.social/post/3lh5jih226k2k

Anyone interested in learning about RLHF? This text isn't complete yet, but it already looks to be a pretty useful resource as is.

 

Apparently we can register as a Liberal to vote in the upcoming leadership race. What does it mean if I register? What do I gain (besides the aforementioned voting), and does it place any restrictions on me (e.g. am I prevented from doing the same with a different party)?

 

An overview of RL published just a few days ago. 144 pages of goodies covering everything from basic RL theory to modern deep RL algorithms and various related niches.

This manuscript gives a big-picture, up-to-date overview of the field of (deep) reinforcement learning and sequential decision making, covering value-based RL, policy-gradient methods, model-based methods, and various other topics (including a very brief discussion of RL+LLMs).
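
If you'd like something hands-on alongside the reading, here's a minimal sketch of tabular Q-learning, the simplest of the value-based methods the manuscript covers. The FrozenLake environment and the hyperparameters are just my illustrative choices, not from the manuscript:

```python
import gymnasium as gym
import numpy as np

# Tabular Q-learning on a small discrete environment.
env = gym.make("FrozenLake-v1")
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1  # step size, discount factor, exploration rate

for episode in range(5000):
    s, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit current value estimates, sometimes explore.
        a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[s]))
        s2, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        # TD(0) update toward the one-step bootstrapped target.
        target = r + gamma * np.max(Q[s2]) * (not terminated)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
```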

 

If there's insufficient space around it, then it'll never spawn anything. This can be useful if you want to keep a specific spawner around for capture later but don't want to spend resources on killing the constant stream of biters.

10
submitted 5 months ago* (last edited 5 months ago) by howrar to c/[email protected]
 

I'm looking to get some smart light switches/dimmers (Zigbee or Matter, if that's relevant), and one of my requirements is that if the switches aren't connected to the network, they behave like regular dumb switches/dimmers. No one ever advertises anything except the "ideal" behaviour when everything is connected with a hub and their proprietary app, so I haven't been able to find any information on this.

So my question: is this the default behaviour for most switches? Are there any that don't do this? What should I look out for given this requirement?


Edit: Thanks for the responses. Considering that no one has experienced switches that didn't behave this way nor heard of any, I'm proceeding with the assumption that any switch should be fine. I got myself some TP-Link Kasa KS220 dimmers and they work pretty well. Installation was tough due to their size; it took me about an hour of wrangling the wires so that each one would fit in the box. Dimming also isn't as smooth as I'd like, but it works. I haven't had a chance to set them up with Home Assistant yet since the OS keeps breaking every time I run an update and I haven't had time to fix it after the last one. Hopefully it integrates smoothly when I do get to it.

 

This is a video about Jorn Trommelen's recent paper: https://pubmed.ncbi.nlm.nih.gov/38118410/

The gist of it is that they compared 25g protein meals vs 100g protein meals, and while proportionally less of the larger dose goes toward muscle protein synthesis, the difference is very minor. So the old adage still holds: protein quantity is much more important than timing.

While we're at it, I'd also like to share an older but very comprehensive overview of protein intake by the same author: https://www.strongerbyscience.com/athlete-protein-intake/

 

Ten years ago, Dzmitry Bahdanau from Yoshua Bengio's group recognized a flaw in RNNs: the information bottleneck of a fixed-length hidden state. They put out a paper introducing attention to rectify this issue. Not long after that, a group of researchers at Google found that you can get rid of the RNN altogether and still get great results with improved training performance, giving us the transformer architecture in their Attention Is All You Need paper. But transformers are expensive at inference time and scale poorly with increasing context length, unlike RNNs. Clearly, the solution is to just use RNNs. Two days ago, we got Were RNNs All We Needed?
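
For anyone who hasn't worked through it, here's a minimal sketch of the scaled dot-product attention at the core of that lineage (single head, NumPy; the names and shapes are illustrative, not from any of the papers):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: every query attends over all keys at once,
    so nothing has to squeeze through a fixed-length hidden state."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # (seq_q, seq_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mixture of values

# The (seq, seq) score matrix is also the quadratic cost that makes transformers
# scale poorly with context length, unlike an RNN's constant-size state.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))  # toy sequence: 8 tokens, 16-dim embeddings
print(scaled_dot_product_attention(x, x, x).shape)  # self-attention -> (8, 16)
```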

 

Recordings for the RLC keynote talks have been released.

Keynote speakers:

  • David Silver
  • Doina Precup (Not recorded)
  • Peter Stone
  • Finale Doshi-Velez
  • Sergey Levine
  • Emma Brunskill
  • Andrew Barto
0
submitted 7 months ago* (last edited 7 months ago) by howrar to c/reinforcement_learning
 

OpenAI just put out a blog post about a new model trained via RL (I'm assuming this isn't the usual RLHF) to perform chain-of-thought reasoning before giving the user its answer. As usual, there's very little detail about how this is accomplished, so it's hard for me to get excited about it, but the rest of you might find it interesting.
