tal

joined 2 years ago
[–] [email protected] 7 points 16 hours ago* (last edited 16 hours ago)

The new lawsuit said Li began working as an engineer for xAI last year, where he helped train and develop Grok. The company said Li took its trade secrets in July, shortly after accepting a job from OpenAI and selling $7 million in xAI stock.

I must say that it's going to be a bitch-and-a-half to retain core engineers if people are walking away at what amounts to $7 million/year in effective compensation.

kagis

https://www.lxuechen.com/

Looks like he only started working in industry in 2023, too (though was doing relevant work as a graduate student prior to that).

[–] [email protected] 0 points 16 hours ago* (last edited 16 hours ago)

If I recall correctly, at least for non-group chats they do use end-to-end encryption. That being said, obviously there are some practical limitations on the impact if you think that WhatsApp would actively try to be malicious, since they're also providing the client software and could hypothetically backdoor that.

kagis

According to this, they do use end-to-end encryption for group chats too.

Maybe I'm recalling some other service or a default setting or something. Some service had non-e2e-encrypted-group messages for at least some period of time.

[–] [email protected] 4 points 17 hours ago* (last edited 17 hours ago)

$3-10k...not getting the speeds and quality

I mean, that's true. But the hardware that OpenAI is using costs more than that per pop.

The big factor in the room is that unless the tech nerds you mention are using the hardware for something that requires keeping it under constant load (which occasionally interacting with a chatbot isn't going to do), it's probably going to be cheaper to share the hardware with others, because sharing keeps the (quite expensive) hardware at a higher utilization rate.
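To make the utilization argument concrete, here's a toy back-of-the-envelope calculation. Every number in it is a made-up assumption for illustration, not a real OpenAI or hardware figure:

```python
# Toy amortization sketch: cost per query at different utilization rates.
# All constants below are invented round numbers, purely illustrative.

HARDWARE_COST = 30_000         # hypothetical price of a server-class GPU, USD
LIFETIME_HOURS = 3 * 365 * 24  # amortize over roughly three years
QUERIES_PER_HOUR = 1_000       # hypothetical throughput at full load

def cost_per_query(utilization):
    """Hardware cost per query when the card is busy `utilization`
    fraction of the time (0.0 to 1.0)."""
    hourly_cost = HARDWARE_COST / LIFETIME_HOURS
    return hourly_cost / (QUERIES_PER_HOUR * utilization)

# A lone home user chatting occasionally might keep the card ~1% busy;
# a shared provider can keep it closer to ~90% busy.
home = cost_per_query(0.01)
shared = cost_per_query(0.90)
print(f"home: ${home:.4f}/query, shared: ${shared:.6f}/query")
```

With these (made-up) numbers, the shared card is 90x cheaper per query, which is just the ratio of the two utilization rates.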

I'm also willing to believe that there is some potential for technical improvement. I haven't been closely following the field, but one thing that I'll bet is likely technically possible (if people aren't banging on it already) is redesigning how LLMs work so that they don't need to be fully loaded into VRAM at any one time.

Right now, the major limiting factor is the amount of VRAM available on consumer hardware. Models get fully loaded onto a card. That makes for nice, predictable computation times on a query, but it's the equivalent of...oh, having video games limited by needing to load an entire world onto the GPU's memory. I would bet that there are very substantial inefficiencies there.

The largest consumer GPU you're going to get has something like 24GB of VRAM, and some workloads can be split across multiple cards to make use of the VRAM on each of them.

You can partially mitigate that with something like a 128GB Ryzen AI Max 395+ processor-based system. But you're still not going to be able to stuff the largest models into even that.

My guess is that it is probably possible to segment sets of neural net edge weights into "chunks" that are unlikely to be needed at the same time, keep the currently-unimportant chunks unloaded, and skip running them. One would need a mechanism to identify when a chunk likely does become important, and swap chunks in and out accordingly. That will make query times less predictable, but also probably a lot more memory-efficient.

IIRC from my brief skim, these models do have specialized sub-neural-networks; the architecture is called "MoE", for "Mixture of Experts". It might be possible to unload some of those, though one is going to need more logic to decide when to include and exclude them, and existing systems probably aren't optimal for this:

kagis

Yeah, sounds like it:

https://arxiv.org/html/2502.05370v1

fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving

Despite the computational efficiency, MoE models exhibit substantial memory inefficiency during the serving phase. Though certain model parameters remain inactive during inference, they must still reside in GPU memory to allow for potential future activation. Expert offloading [54, 47, 16, 4] has emerged as a promising strategy to address this issue, which predicts inactive experts and transfers them to CPU memory while retaining only the necessary experts in GPU memory, reducing the overall model memory footprint.
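As a toy illustration of the offloading idea (my own sketch, not the fMoE paper's actual algorithm), you can think of it as an LRU cache of expert weights, where "fast" stands in for VRAM and "slow" for CPU memory:

```python
# Toy expert-offloading sketch: keep only the k most-recently-used
# "experts" resident in fast memory, faulting the rest in on demand.
from collections import OrderedDict

class ExpertCache:
    def __init__(self, slow_store, capacity):
        self.slow_store = slow_store   # all expert weights ("CPU RAM")
        self.capacity = capacity       # how many experts fit in "VRAM"
        self.fast = OrderedDict()      # resident experts, in LRU order
        self.loads = 0                 # count of slow->fast transfers

    def get(self, expert_id):
        if expert_id in self.fast:
            self.fast.move_to_end(expert_id)  # mark as recently used
            return self.fast[expert_id]
        self.loads += 1                       # "swap in" from slow memory
        if len(self.fast) >= self.capacity:
            self.fast.popitem(last=False)     # evict least-recently-used
        self.fast[expert_id] = self.slow_store[expert_id]
        return self.fast[expert_id]

weights = {i: f"weights-{i}" for i in range(8)}  # 8 experts, 2 resident
cache = ExpertCache(weights, capacity=2)
for expert in [0, 1, 0, 1, 2, 0]:  # the router's picks, token by token
    cache.get(expert)
print(cache.loads)  # prints 4: two cold loads plus two cache misses
```

The real systems have to predict upcoming experts and overlap transfers with computation, since a synchronous swap like this would stall inference, but the memory-footprint win is the same shape.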

[–] [email protected] 3 points 17 hours ago* (last edited 17 hours ago) (2 children)

I don't think that RFK Jr. is a good choice for the spot at all, but the point is pretty much moot, because Trump cut a deal with him to give him the spot in exchange for RFK's support in the election (which Trump probably needed), and that probably isn't gonna change unless Trump is willing to stab him in the back.

The time to have this discussion was back before RFK cut the deal (hoping to discourage Trump from doing the deal) or at least prior to the 2024 general election (hoping to have Harris elected instead of Trump). As long as Trump is in the White House and he wants RFK there, RFK can be there. Congress could theoretically impeach RFK if he broke laws, but I don't think that the issue here is RFK breaking laws, but rather RFK doing things that are bad for the US.

In theory, the Democrats could take both the House (likely) and Senate (not likely) in the midterms, then try to pass a law against whatever vaccine nuttiness RFK is doing, but they won't have a veto-proof majority, and I would expect Trump to veto any such law. So unless RFK has a falling out with Trump sufficient for Trump to eject him or actually breaks existing laws and is impeached, I expect that RFK is going to be in his office doing RFK things for the next three-and-a-half years.

EDIT: Oh, or prior to his Senate confirmation as HHS secretary to try to convince the Senate not to confirm him, though I have pretty grave doubts that a Republican Senate would block his confirmation if it was part of a deal important to Trump taking the White House.

But point is, we're past all those points. In late August 2025, it's not really a terribly-interesting argument to have.

[–] [email protected] 2 points 18 hours ago

Oh, wait, yeah, you're right, and in fact a number of packages do take that when binding to an address. Sorry, that's on me.

[–] [email protected] 3 points 18 hours ago* (last edited 18 hours ago) (1 children)

cannot bind to local IPv4 socket: Cannot assign requested address

inet 169.254.210.0

Yeah. That'll be because you need an interface with that address assigned.

ifconfig

Going from memory: if you've got ifconfig available, this is a Linux system, and you need to keep the existing address on the current interface to keep the system connected to the Internet or something, you can use something like ifconfig enp7s0:0 10.10.10.3 to create an interface alias and use both addresses (169.254.210.0 and 10.10.10.3) at the same time. You might also need ifconfig enp7s0:0 up after that. That being said, (a) I don't think that I've set up an interface alias in probably a decade, and it's possible something has changed, and (b) it's a bit of additional complexity, and if you aren't super familiar with Linux networking, you might not want to add it if you don't mind just setting the interface's address to something else.

There's probably an iproute2-based approach to do this too (the ip command rather than the ifconfig command), but I haven't bothered to pick up the iproute2 equivalents for a bunch of stuff.
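For what it's worth, the iproute2 way to add a second address looks something like this (the interface name and /24 prefix length here are assumptions; adjust them to match your actual setup):

```shell
# Add a second IPv4 address alongside the existing one:
ip addr add 10.10.10.3/24 dev enp7s0
# Confirm both addresses now show on the interface:
ip addr show dev enp7s0
# And to undo it later:
ip addr del 10.10.10.3/24 dev enp7s0
```

Unlike the old ifconfig alias trick, iproute2 attaches multiple addresses to the same interface directly, with no `enp7s0:0` pseudo-interface needed.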

EDIT: Sounds like you can assign the address and bring the interface alias up as one step (or could a decade ago, when this comment was written):

https://askubuntu.com/questions/585468/how-do-i-add-an-additional-ip-address-to-an-interface-in-ubuntu-14

To setup eth0:0 alias type the following command as the root user:

# ifconfig eth0:0 192.168.1.6 up

So probably give ifconfig enp7s0:0 10.10.10.3 up a try, then see if the TFTP server package can bind to the 10.10.10.3 address.

[–] [email protected] 0 points 18 hours ago* (last edited 18 hours ago) (2 children)

I haven't done anything with OpenWRT for a long time, but...

I have the IP of the server set to 0.0.0.0:69 when I try to set it to 10.10.10.3 (per the wiki) The server on my pc won't start and gives an error.

I'm pretty sure that you can't use all zeroes as an IP address.

kagis

https://en.wikipedia.org/wiki/0.0.0.0

RFC 1122 refers to 0.0.0.0 using the notation {0,0}. It prohibits this as a destination address in IPv4 and only allows it as a source address during the initialization process, when the host is attempting to obtain its own address.

As it is limited to use as a source address and prohibited as a destination address, setting the address to 0.0.0.0 explicitly specifies that the target is unavailable and non-routable.

You probably need to figure out why your TFTP server is unhappy with 10.10.10.3, and there's not enough information here to provide guidance on that. I don't know what OS or software package you're using or the error or the network config.

It may be that you don't have any network interface with 10.10.10.3 assigned to it, which I believe might cause the TFTP server to fail to bind a socket to that address and port when it attempts to do so.
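You can reproduce that failure mode with a few lines of Python, independent of any TFTP package. The 203.0.113.1 address below is from a documentation-only range that your machine almost certainly does not have assigned, so binding to it should fail the same way binding to an unassigned 10.10.10.3 would:

```python
# Demonstrate "Cannot assign requested address" (EADDRNOTAVAIL):
# binding a socket to an IP that no local interface holds fails,
# while the 0.0.0.0 wildcard binds fine.
import errno
import socket

def try_bind(addr):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # TFTP is UDP
    try:
        s.bind((addr, 0))  # port 0: let the OS pick an unprivileged port
        return "ok"
    except OSError as e:
        return errno.errorcode.get(e.errno, str(e))
    finally:
        s.close()

print(try_bind("0.0.0.0"))      # wildcard listen address: binds fine
print(try_bind("203.0.113.1"))  # not assigned locally: EADDRNOTAVAIL
```

If your TFTP server dies with that same errno when given 10.10.10.3, assigning the address to an interface first (as above) should fix it.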

If you are manually invoking the TFTP server as a non-root user and trying to bind to port 69, and this is a Linux system, it will probably fail, as ports below 1024 are privileged ports and processes running as ordinary users cannot bind to them. That might cause a TFTP server package to bail out.

But I'm really just taking wild stabs in the dark, without any information about the software involved and the errors you're seeing. I would probably recommend trying to make 10.10.10.3 work, though, not 0.0.0.0.

If this is a Linux system, you might use a packet sniffer on the TFTP host, like Wireshark or tcpdump, to diagnose any additional issues that come up, since that will let you see how the two devices are talking to each other. But if you can't get the TFTP server to even run on an IP address, then you're not to that point yet.

[–] [email protected] 1 points 19 hours ago

We can do injection molding in the US

We "can". But we end up with Mega Bloks

Hmm. Were they manufactured in the US?

kagis

It sounds like Canada and then China.

https://old.reddit.com/r/megaconstrux/comments/1diz26n/why_exactly_did_mega_bloks_fell_of_so_hard/

Mega Bloks used to be a privately owned company based in Canada. You may remember the old Call of Duty Collector sets had a set of coordinates printed below the logo on their packaging? If you entered those into Google Maps it would show you Mega's factory or home offices in Canada.

I don't remember exactly when, eight years ago maybe, Mattel bought out Mega Bloks/Construx. The Canada factories were closed and manufacturing relocated to China.

That being said, I doubt that the issue is an inability to do precision molding so much as not doing so at the desired price point. As I recall, Mega Bloks aimed to be cheaper than Legos.

[–] [email protected] 9 points 19 hours ago (1 children)

Stephen Miller is not exactly known for being the soul of veracity.

https://www.salon.com/2017/02/21/stephen-millers-web-of-lies-the-trump-adviser-has-championed-big-tobaccos-alarming-deceptions_partner/

And that was from back in 2017.

Trump just has a number of people working for him who are, like himself, willing to tell very substantial untruths.

[–] [email protected] 14 points 20 hours ago* (last edited 20 hours ago)

Trump had ended Secret Service protection for John Bolton in his first term, after Bolton fell out with him. In Bolton's case, that's quite significant, due to Iran.

https://www.bbc.com/news/articles/c78d7x08j7eo

The US is offering a $20m (£15m) reward for information leading to the arrest of an Iranian man accused of plotting to assassinate Donald Trump's former National Security Advisor John Bolton.

Shahram Poursafi, a member of Iran's Islamic Revolution Guard Corps (IRGC), is accused of trying to hire criminals in the US to kill Mr Bolton, a vocal Iran critic, in exchange for $300,000.

Biden had reinstated Bolton's Secret Service protection when Biden was in office. As soon as Trump got back in, he cut it off again.

[–] [email protected] 11 points 1 day ago* (last edited 1 day ago) (1 children)

I don't use this plugin myself, but if you're using Firefox, you might take a look at it, as it provides a bunch of browser-side configurability. I don't know whether the feature you're looking for is there, but as far as I can tell, it aims to be a pretty large bucket of pretty much every add-on YouTube feature one might want.

I was looking at it a while back for something unrelated, a UI tweak that I was hoping that it might do.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

scam I swear, I've seen top of the line $5k laptops that run worse than my $2k desktop

I mean, they can't dissipate anything like the amount of heat or use the amount of power. A desktop is going to outperform a laptop.

And while you don't mention it, most laptops like that can't run long at full CPU+GPU load on a 100Wh battery, and virtually no laptops have more than a 100Wh battery. So you have to plug in or at least carry a power station.

But I don't see how that makes them a scam. There are people who really and honestly have to move around, and Windows in particular is not very friendly to remote use, so if their machine can't be mobile, they can't use it at all, much less for gaming.

The only thing I think I could call scammy is that there are some items that have the same laptop and desktop names, like, oh, the laptop and desktop Geforce RTX 4090, which do not remotely perform the same. I think that it's pretty fair to say that Nvidia did that to exploit user confusion.

I think that it's more accurate to say that laptops come with some substantial performance tradeoffs, and that it's important to be aware of those. It's not that it's unreasonable to play games on a laptop, and there are people who are going to want to do so.

 

cross-posted from: https://lemmy.dbzer0.com/post/52132222

3D-printed custom arcade stick I made to use at the local bi-weekly Guilty Gear Strive and Street Fighter 6 brackets.

For the last 10 or 12 years I've been using a Madcatz FightStick Pro Xbox 360 arcade stick, but I got tired of using adapters to play on PS4/PS5 because they were causing issues with missed inputs and added latency, so I built this controller with an open-source board that supports USB passthrough authentication, so the console gets inputs directly from the controller.

It uses a 16x6 inch aluminum plate for the top and bottom panels and a GP2040-CE for the PCB (with a MagicBoots adapter for PS5 support); the lever is a Crown newhelpme lever, and the buttons are 6 Seimitsu snap-in buttons and 2 Punk Workshop buttons.

I ran into a skill issue using heat set inserts so i just made everything either screw into plastic or use captive nuts for the more secure bits.

List of tools and stuff used:

  • Software: FreeCAD and OrcaSlicer
  • 3D printer: Elegoo Neptune 4 Pro (this thing sucks)
  • 30mm hole saw drill bit
  • Random assortment of drill bits for the mounting holes
  • 1x M8 threaded rod to connect the wrist rest halves
  • 4x M8 bolts
  • 8x M6 bolts
  • 4x M4 bolts for the lever mount
  • 2x 16x6x1/8 inch aluminum plates
366
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

Japan recorded the highest ever temperature of 41.2 degrees Celsius on Wednesday, beating the previous high of 41.1 C marked in 2018 and 2020. Authorities are strongly urging people to take precautions to avoid risks of heatstroke.

The mercury hit the record 41.2 C in the city of Tanba, Hyogo Prefecture, at 14:39, while two cities — Fukuchiyama in Kyoto and Nishiwaki in Hyogo — also recorded extremely high temperatures of 40.6 C and 40 C, respectively.
