Mechanize

joined 2 years ago
[–] [email protected] 17 points 5 months ago (1 children)

This was all theatre, with nothing really changing, meant to show how tough and intransigent this admin wants to be. The real problem is that if formally and informally allied nations start to believe the USA will genuinely commit to bullying as a negotiation tactic, they will begin divesting from it and expanding their pool of partners to make these tactics less impactful.

This could be disastrous in the long run, with decades of integration and diplomacy going up in smoke. But in the short term there will be a lot of "winning", as they keep saying. I guess.

The first Trump term was seen as a short-term derailment. The second one has a completely different international impact.

Goodwill is a real form of diplomatic currency, and it is being burned at a remarkably fast pace.

[–] [email protected] 5 points 6 months ago

As for whether they can do it, they can, but it's obviously a blatant rip-off.

Switch banks. If they're 4 euros each, with three monthly transfers you've already paid the monthly fee for a business account with zero additional charges. And you can certainly find cheaper options if you're an employee whose salary can be deposited directly.

[–] [email protected] 9 points 6 months ago (1 children)

I've never used oobabooga, but if you use llama.cpp directly you can specify the number of layers that you want to run on the GPU with the `-ngl` flag, followed by the number.

So, as an example, a command (on Linux), run from the directory containing the binary, to start its server would look something like: `./llama-server -m "/path/to/model.gguf" -ngl 10`

Another important flag that could interest you is `-c`, for the context size.

This will put 10 layers of the model on the GPU; the rest stays in RAM for the CPU.

I would be surprised if you couldn't just connect to the llama.cpp server directly, or set text-generation-webui to do the same with some setting.
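To illustrate that first option: llama-server exposes an OpenAI-compatible HTTP API, so any script can talk to it. A minimal Python sketch, assuming the server is running locally on its default port 8080 (host and port are assumptions; adjust to your setup):

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for llama-server."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    # POST body and headers follow the OpenAI chat completions shape
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending it (requires a running llama-server):
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same endpoint shape is what lets UIs like SillyTavern or OpenWebUI point at a llama.cpp backend with just a base URL.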

At worst you can consider using ollama, which is a llama.cpp wrapper.

But it's probably worth investing the time to understand how to use llama.cpp directly and put a UI in front of it. SillyTavern is a good one for many use cases; OpenWebUI can be another, but in my experience it tends to have more half-baked features, and its development jumps around a lot.

As a more general answer: no, the safetensors format doesn't directly support quantization, as far as I know.

[–] [email protected] 17 points 6 months ago (1 children)

If things haven't changed recently: remember that each time you claim a giveaway game from the site, you are also (re)subscribing to the newsletter.

[–] [email protected] 17 points 6 months ago* (last edited 6 months ago) (1 children)

They have finally updated the Status Page

Not a lot of information, but better than nothing

[–] [email protected] 42 points 6 months ago (7 children)

Yeah, incredibly frustrating.
The only acknowledgement is from a volunteer mod on reddit who said an hour ago that "the team is aware and the status page will be updated shortly".

The fact I had to dig around to find that is really not a pleasing experience.

[–] [email protected] 9 points 6 months ago* (last edited 6 months ago)

~~Their systems currently report that everything's fine, which - to be fair - could be a misreporting and change at any moment~~
~~Anecdotally all their landing sites load fine for me~~

~~Tackling it from the other side: could it be a problem with your DNS?~~
~~Did you try with another network? Like wifi or mobile data~~

~~EDIT: Formatting~~

EDIT: Tried again and now the email service seems to not be loading

EDIT 2: It is being reported on other sites too, but currently there's nothing official I could find, not even on their various Mastodon or Twitter accounts

EDIT 3: On reddit the volunteer mod alex_herrero wrote an hour ago that "The team is aware and [the] status page will be updated shortly".

[–] [email protected] 3 points 7 months ago (1 children)

I've read good things about LTX, but I've never used it.

[–] [email protected] 11 points 7 months ago (1 children)

You'll die, just make it matter

[–] [email protected] 1 points 7 months ago

That's the bad thing about social media. If no one was doing it before, someone is now!

Jokes aside it's possible, but with the current LLMs I don't think there's really a need for something like that.

Malicious actors usually try to spend the least effort possible on generalized attacks, because once found out you often have to start over.

So they probably just feed an LLM with some examples to get the tone right and prompt it in a way that suits their uses.

You can generate thousands of posts while Lemmy hasn't even started to reply to one.

If you instead want to know whether anyone is taking all the comments on Lemmy to feed into model training... yeah, of course they are. Federation makes it incredibly easy to do.

[–] [email protected] 20 points 7 months ago (1 children)

Probably I'm missing something, but I read the parent comment as highlighting the hypocrisy of making extensive use of something while simultaneously wanting to bar others from using it.

I don't see an insult in there, given the choice of words and the context, but maybe I'm missing something fundamental?

[–] [email protected] 1 points 7 months ago

Why should I even bother paying taxes and obeying the law, when it's so blatantly more convenient to wait for yet another amnesty or similar rubbish?

Honestly, what's the point? I'm the idiot handing thousands of euros to the state every year.

I mean, the total absurdity:

However, according to the press release, those who have already paid will not be refunded the 100 euros they handed over.

At the end of 2022, roughly 1.8 million people had been fined, for a total of 180 million euros the State was supposed to collect.

Leaving aside the specific case: this is just yet another demonstration of a dysfunctional and frustrating country.
