howrar

joined 2 years ago
[–] howrar 3 points 1 day ago

My experience has been that you basically restart the process of building a new social circle every few years. Life circumstances change. People move away. Some relationships grow apart. Some start families. So there's always going to be others in the same boat as you looking for new connections.

[–] howrar 19 points 1 day ago

Being alive probably poses the greatest risk.

5
Open Sourcing π₀ (www.physicalintelligence.company)
[–] howrar 3 points 2 days ago

Ah, I didn't realize the moon could look bigger/smaller at different times. I thought you were saying that the moon is actually the same size as the sun or something like that.

[–] howrar 2 points 2 days ago (2 children)

> that the moon's apparent size is due to how close it is to earth (same for seasons and the sun)

Explain?

Also, what's the size/proximity of seasons?

[–] howrar 3 points 3 days ago (1 child)

I know you mean this as in "no one remembers that dumb thing you said last month", but I can't help reading it as "no one cares enough about you to give you any thought."

[–] howrar 3 points 3 days ago (3 children)

How does one go about finding these liquidators?

[–] howrar 1 point 3 days ago

Yes? I get the impression that you mean to disagree with me, but I can't tell how.

I don't know if my explanation of the phenomenon is correct or not. I don't know much about the science of traffic dynamics. All I know is that when you're on the road, pretty much everyone ends up at approximately the same speed. That speed can differ relative to the speed limit depending on the time of day, road and weather conditions, which road you're on, etc., and there's no one to tell me what speed to aim for. I just look at the flow of traffic and follow it. That's all.

[–] howrar 3 points 4 days ago (1 child)

You weren't kidding. 12 emails from them within the last week.

[–] howrar 6 points 4 days ago

Montreal's a pretty big city. If you're willing to do it, sharing your local expertise can help a lot of people.

 

https://bsky.app/profile/natolambert.bsky.social/post/3lh5jih226k2k

Anyone interested in learning about RLHF? The text isn't complete yet, but it already looks to be a pretty useful resource.

[–] howrar 1 point 5 days ago (1 child)

If there's no puddle forming around the humidifier, doesn't that mean all the water is evaporating into the air? Where else could it go?

[–] howrar 2 points 5 days ago

They'll come out of that job with a whole taxonomy for farts and grunts.

[–] howrar 1 point 5 days ago

So essentially, they set the party platform and everyone in the party follows that?

 

Apparently we can register as a liberal to vote in the upcoming leadership race. What does it mean if I register? What do I gain (besides the aforementioned voting) and does it place any kind of restrictions on me (e.g. am I prevented from doing the same with a different party)?

 

An overview of RL published just a few days ago. 144 pages of goodies covering everything from basic RL theory to modern deep RL algorithms and various related niches.

This manuscript gives a big-picture, up-to-date overview of the field of (deep) reinforcement learning and sequential decision making, covering value-based RL, policy-gradient methods, model-based methods, and various other topics (including a very brief discussion of RL+LLMs).

 

If there's insufficient space around it, then it'll never spawn anything. This can be useful if you want to keep a specific spawner around for capture later but don't want to spend resources on killing the constant stream of biters.

10
submitted 3 months ago* (last edited 2 months ago) by howrar to c/[email protected]
 

I'm looking to get some smart light switches/dimmers (Zigbee or Matter, if that's relevant), and one of my requirements is that if the switches aren't connected to the network, they behave like regular dumb switches/dimmers. No one ever advertises anything except the "ideal" behaviour when connected to a hub with their proprietary app and everything, so I haven't been able to find any information on this.

So my question: is this the default behaviour for most switches? Are there any that don't do this? What should I look out for given this requirement?


Edit: Thanks for the responses. Considering that no one has experienced switches that didn't behave this way nor heard of any, I'm proceeding with the assumption that any switch should be fine. I got myself some TP-Link Kasa KS220 dimmers and they work pretty well. Installation was tough due to their size. It took me about an hour of wrangling the wires so that they would fit in the box. Dimming also isn't as smooth as I'd like, but it works. I haven't had a chance to set them up with Home Assistant yet since the OS keeps breaking every time I run an update and I haven't had time to fix it after the last one. Hopefully it integrates smoothly when I do get to it.

 

This is a video about Jorn Trommelen's recent paper: https://pubmed.ncbi.nlm.nih.gov/38118410/

The gist of it is that they compared 25g protein meals against 100g protein meals, and while a smaller proportion of the larger dose goes toward muscle protein synthesis, the difference is very minor. So the old adage still holds: protein quantity is much more important than timing.

While we're at it, I'd also like to share an older but very comprehensive overview of protein intake by the same author: https://www.strongerbyscience.com/athlete-protein-intake/

 

Ten years ago, Dzmitry Bahdanau from Yoshua Bengio's group recognized a flaw in RNNs: the information bottleneck of a fixed-length hidden state. They put out a paper introducing attention to rectify this issue. Not long after that, a group of researchers at Google found that you can just get rid of the RNN altogether and still get great results with improved training performance, giving us the transformer architecture in their Attention Is All You Need paper. But transformers are expensive at inference time and scale poorly with increasing context length, unlike RNNs. Clearly, the solution is to just use RNNs. Two days ago, we got Were RNNs All We Needed?
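For anyone who hasn't seen the mechanism being discussed, here's a minimal numpy sketch of attention. This is the scaled dot-product form from the transformer paper rather than Bahdanau's original additive version, and it's an illustration only, not any paper's reference code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every query attends over all keys,
    so nothing is squeezed through a single fixed-length hidden state."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # (n_queries, d_v)

# Tiny example: 2 queries attending over 3 key/value pairs
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = attention(Q, K, V)
print(out.shape)  # (2, 4)
```

The `scores` matrix is where the quadratic cost comes from: it has one entry per query-key pair, which is exactly the poor scaling with context length mentioned above.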

 

Recordings for the RLC keynote talks have been released.

Keynote speakers:

  • David Silver
  • Doina Precup (Not recorded)
  • Peter Stone
  • Finale Doshi-Velez
  • Sergey Levine
  • Emma Brunskill
  • Andrew Barto
0
submitted 4 months ago* (last edited 4 months ago) by howrar to c/reinforcement_learning
 

OpenAI just put out a blog post about a new model trained via RL (I'm assuming this isn't the usual RLHF) to perform chain-of-thought reasoning before giving the user its answer. As usual, there's very little detail about how this is accomplished, so it's hard for me to get excited about it, but the rest of you might find this interesting.

 

Following up on another question about open source funding: how does it usually work when there's funding to pay for the original dev's work, and then someone new joins in and makes significant contributions? Does the original dev still keep everything? Do you split the funds between the devs? If so, how do you decide how much each person gets? Are there examples of projects where something like this has happened?

10
(OTHER) How are we doing? (self.actual_discussion)
submitted 6 months ago by howrar to c/actual_discussion
 

This community has been around for a few months now. How do we feel about it? Are things working out? Any plans for further growing the community?

This is one of the topics I’ve been thinking about quite a bit for the past few years (i.e. how to set up a community that values discussions with diverse viewpoints), so I thought I’d share some of my thoughts in relation to what I’m seeing here.

  1. I think such a community needs to be a full self-contained instance, or else you’ll get very little activity. Think about how these discussions usually start. Someone posts an article/meme/question/etc., a few people show up and comment with similar thoughts worded in slightly different ways, then someone else goes against the grain, everyone dogpiles on them, and that’s when the real discussion starts. Very rarely do people go out of their way to ask “what do you think of X controversial topic?” And even if they do, that only leads to a very high-level discussion that quickly gets stale. If discussion happens in the context of specific events, it can be grounded in reality and lead to more unique, context-dependent takes each time a topic comes up.

  2. Regarding upvotes/downvotes: as stated in the rules, they should be used to measure whether a post/comment is a positive contribution to the discussion rather than the number of people who agree with your viewpoint. I don’t believe there’s a way to actually enforce this with the voting system we currently have, but I also think a relatively simple change can fix it. It will require a bit of coding.

    My proposal is a voting system with two votes: one to say that you agree/disagree, and another to say good/bad contribution. With this system, you can easily see if someone only marks posts they agree with as good contributions, and you can use that information to compute a total score that weights their votes accordingly. It’s also a small enough change that I think most people won’t have a problem figuring it out.
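    The weighting step could be sketched roughly like this. The two-axis ballot is the proposal above; the specific discount formula, names, and sample data are my own illustration, not part of it:

```python
from collections import defaultdict

def alignment_rates(history):
    """Per-user fraction of past ballots where the agree/disagree vote
    (+1/-1) matched the good/bad-contribution vote (+1/-1)."""
    counts = defaultdict(lambda: [0, 0])  # user -> [matching, total]
    for user, agree, good in history:
        counts[user][0] += agree == good
        counts[user][1] += 1
    return {u: same / total for u, (same, total) in counts.items()}

def weighted_score(ballots, rates):
    """Sum the good/bad votes on a post, discounting users whose
    contribution votes nearly always track their agreement votes."""
    score = 0.0
    for user, agree, good in ballots:
        rate = rates.get(user, 0.5)
        # ~50% alignment looks independent (full weight);
        # 100% alignment suggests pure agreement-voting (zero weight).
        weight = max(0.0, min(1.0, 2.0 * (1.0 - rate)))
        score += weight * good
    return score

history = [
    ("alice", +1, +1), ("alice", -1, -1), ("alice", +1, +1),  # always aligned
    ("bob", +1, -1), ("bob", -1, +1), ("bob", +1, +1),        # mixed
]
rates = alignment_rates(history)
ballots = [("alice", -1, +1), ("bob", +1, +1)]
print(weighted_score(ballots, rates))  # → 1.0 (alice's vote is discounted away)
```

    Any monotone discount would do here; the point is just that the two axes make agreement-driven voting measurable at all.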

Thoughts?

Also, thank you Ace for taking the initiative in creating this place. It makes me happy to see that others want to see this change too.
