Naia

joined 2 years ago
[–] [email protected] 14 points 1 day ago (11 children)

The problem is for organizations it's harder to leave because that is where the people you want to reach are. That's the only reason any org or company is on social media in the first place. If they leave too soon they risk too many people not seeing the things they send out to the community.

It's more an individual thing because so many people just have social inertia and haven't left since everyone they know is already there. The first to leave have to decide if they want to juggle using another platform to keep connections or cut off connections by abandoning the established platform.

[–] [email protected] 10 points 1 day ago (1 children)

I take Adderall (generic) for ADHD and it just makes my brain not feel "numb" and also helps me not feel quite as exhausted as I had been for decades before I got medication.

It helps regulate my mood, executive function, etc.

Right now it's not working quite as well as normal since the last week or so has had a lot of stresses on top of the political anxiety of being a queer woman, but having been in this state before medication I would be useless if I didn't have it.

The same thing happened for a week or so after the election. I had to ration my medication during that time, and I got a stark reminder of how poorly depression and ADHD go together.

[–] [email protected] 1 points 1 day ago (1 children)

I shouldn't have anything to hide, but I'm part of a group the current fascist leadership in government wants to eradicate, so hide I shall.

That said, I also feel like people acting surprised that the remote server they're connected to tracks what they do on it is so silly. "Facebook is keeping track of the pictures I uploaded to it!!!!" There's a lot to complain about with Facebook, Google, or whoever, but them tracking stuff you willingly send them isn't one of it.

[–] [email protected] 3 points 2 days ago (1 children)

It doesn't. They run using tools like Ollama or other LLM runners, and all of the hobbyist ones are open source. All the model itself contains is the inputs, node weights and connections, and outputs.

LLMs, and neural nets at large, are kind of a "black box", but there's no actual code that gets executed from the model when you run it; the weights are just processed by the host software according to the rules for how these networks work. The "black box" part is mostly because they are so complex that we don't actually know exactly what they are doing or how they output answers, only that they work to a degree. It's a digital representation of an analog brain.
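
To sketch that point: a model file is only numbers, and "running" it is just the host software doing arithmetic with those numbers. A toy forward pass, with completely made-up weights (no real model uses these), looks like this:

```python
# Toy illustration: the "model" is only numbers (weights below are
# invented). All the executing is done by this host code, not the model.
import math

weights = {
    "w1": [[0.5, -0.3], [0.8, 0.1]],  # 2x2 hidden-layer weights
    "w2": [1.0, -1.0],                # output-layer weights
}

def forward(x, model):
    # Hidden layer: matrix-vector product followed by tanh activation.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
              for row in model["w1"]]
    # Output: dot product of hidden activations with output weights.
    return sum(w * h for w, h in zip(model["w2"], hidden))

print(forward([1.0, 2.0], weights))
```

Nothing in `weights` can run on its own; swap in a different runner and the same numbers give the same answers, which is why any open-source tool can load a released model.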

People have also been doing a ton of hacking at it, retraining, and other modifications that would have surfaced anything like that if it existed.

[–] [email protected] 1 points 2 days ago

Not in the way you think. They aren't constantly training when interacting, that would be way more inefficient than what US AI companies have been doing.

It might be added to the training data, but a lot of training data now is apparently synthetic and generated by other models because while you might get garbage, it gives more control over the type of data and shape it takes, which makes it more efficient to train for specific domains.
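
As a toy sketch of what controlled synthetic data can look like — generated programmatically here rather than by another model, but the point about controlling the type and shape of the data is the same (the format is invented for illustration):

```python
# Generate synthetic arithmetic Q/A pairs: cheap to produce, and the
# format and difficulty are fully controlled, unlike scraped web text.
import random

def make_examples(n, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    examples = []
    for _ in range(n):
        a, b = rng.randint(1, 99), rng.randint(1, 99)
        examples.append({
            "prompt": f"What is {a} + {b}?",
            "answer": str(a + b),
        })
    return examples

data = make_examples(3)
for ex in data:
    print(ex["prompt"], "->", ex["answer"])
```

Every example is guaranteed well-formed and correctly labeled, which is the control you give up when training only on found data.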

[–] [email protected] 8 points 2 days ago (2 children)

Exactly. I'm queer. I'm not scared of China, even if they were doing the same thing the US currently is, because only one of those actually affects the rights I have and what I do in my day-to-day.

I do not understand how the average person does not realize that.

[–] [email protected] 14 points 2 days ago

As a queer woman in the US, I currently care infinitely more what the US gov and companies track about me than what China does.

[–] [email protected] 33 points 2 days ago (1 children)

I swear people do not understand how the internet works.

Anything you use on a remote server is going to be seen to some degree. They may or may not keep track of you, but you can't be surprised if they are. If you run the model locally, there is no indication it is sending anything anywhere. It runs using the same open source LLM tools that run all the other models you can run locally.

This is very much like someone doing a surprised pikachu when they find out that Facebook saves all the photos they upload to Facebook, or that Gmail can read their email.

[–] [email protected] 7 points 2 days ago* (last edited 2 days ago) (1 children)

I'm sorry, I'm too busy dealing with my rights being trampled and taken away by these monsters along with other life stressors to care about your moral grandstanding.

Yes. Fuck them. Fuck anyone who supports them for "economic anxiety" or whatever BS rationalization they use to blind themselves, and fuck anyone who is still giving fascists the benefit of the doubt.

There is no middle ground, no high road, when these people literally want to kill me and my friends. I am exhausted by the last week alone and have long been sick of people treating fascists with kid gloves.

And then there's the idea that I'm as bad as they are for standing up for myself while they inflict violence on me. Every time we call fascists fascists, it's dismissed as "hyperbole" or "hysteria". Because acknowledging what they say they will do, have been doing, and are doing would mean their lives might be disrupted, that they would need to do something.

They do. We all do. But most people aren't. So I'm focused on protecting me and mine. If you wanted a world where all we do is finger-wag at fascists, then something should have been done to stop what is happening right now...

But now, I have no sympathy anymore and grow tired of liberals, and honestly not a small number of the left, saying we should be above that. That ship sailed a long time ago.

If you are privileged enough to still feel like you are above that, then fucking do something so those of us who are the current targets don't have to defend ourselves alone.

[–] [email protected] 12 points 2 days ago (1 children)

For neural nets, the method matters more. The data would be useful, but at the scale these things are trained on, the specific data matters little.

They can be trained on anything, and a diverse enough data set would end up making it function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.

The training data is also by necessity going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point before you even factor in other issues.
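
A rough back-of-envelope comparison illustrates the gap (every number here is an illustrative assumption, not a figure for any specific model):

```python
# Very rough size comparison of a model versus its training data.
params = 7e9            # assumed 7B-parameter model
bytes_per_param = 2     # fp16/bf16 weights
tokens = 10e12          # assumed ~10T training tokens
bytes_per_token = 4     # rough average of raw text bytes per token

model_gb = params * bytes_per_param / 1e9
data_gb = tokens * bytes_per_token / 1e9
print(f"model ~{model_gb:.0f} GB, data ~{data_gb:.0f} GB, "
      f"ratio ~{data_gb / model_gb:.0f}x")
```

With these assumptions the raw text is thousands of times larger than the weights, so distributing the weights is feasible in a way distributing the corpus is not.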

[–] [email protected] 8 points 2 days ago (1 children)

They might try, but if their goal was destabilizing western dominance in LLMs, making it completely open source was the best way to do it.

This isn't like TikTok. They have a server that hosts it, but anyone can take their model and run it, and there are going to be a lot of US companies besides the big AI ones looking at it. Even the big AI ones will likely try to adapt the stuff they've spent so long brute-forcing improvements on.

The thing is, it's less about the actual model and more about the method. It does not take anywhere close to as many resources to train models like DeepSeek compared to what companies in the US have been doing. It means that there is no longer going to be just a small group hoarding the tech and charging absurd amounts for it.

Running the model can be no more taxing than playing a modern video game, except the load is not constant.

The cat is out of the bag. They could theoretically ban the models directly released by the research team, but retrained variants are going to be hard to differentiate from scratch-built models. And the original model is all over the place and has had people hacking away at it.

Blocking access to their hosted service right now would just be petty, but I do expect that from the current administration...

[–] [email protected] 10 points 2 days ago (13 children)

I only wish on them what they wish on me. Anyone who has an issue with that is enabling fascism.