Because it has fewer parameters and (in some cases) is quantized. The hardware needed to run local inference on the full model isn't really feasible for most people. Still, its release will probably have a wide impact on the quality of upcoming smaller models being distilled from it, trained on synthetic data from it, merged with it, etc.
You're asking the real important questions!
And it's great for sorting by date.
Well, to be pedantic about it, that actually depends on the weather conditions, time, place etc., and the eyesight, color perception, as well as language, of whoever is observing it (or those who unfortunately can nazi) [buh-dum-tsh]
We could say instead that the sky is the sky (many weirdly seem to think the sky is the limit, but no, I'm prettttty sure it's the sky)
This adds a lot of confusion, so maybe we should let go of the analogies and just state it directly instead...
Oh, which one are you referring to here – of all those different 750 g supposedly exotic fruitberry-flavoured water beverages, all with 0 kcal? One of those with a dose of factory-added vitamins, or just the funkiest sun-kissed fruit imitation available?
I assume the taste will probably just become increasingly rancid long before pure (and bacterially uncontaminated) 100% PB goes dangerously bad, if ever.
Surprise, the heaviest $1 item was a 1500 g bottle of still water...
Thanks for sharing. It seems like there are a lot of supported options. I have no idea what many of them are, but cars and doorbells are easy enough to understand, at least. Do you have any examples of interesting, less obvious use cases of your own, or of others'?
Thinking a bit outside the box, if your phone is capable of it, you could find a way to run a small local LLM on it. Maybe it can even be done in Termux?
If that's not an option and/or you need a bigger, more capable model, you could host a local Ollama instance and connect to it from the Ollama app (IzzyOnDroid) or GPTMobile (F-Droid). That way you'll only connect to yourself instead of some 3rd-party translation or LLM provider.
I think that, with a well-written system prompt, you could make it more efficient by concisely instructing it to expect your text input and a target language (or include permanent language instructions in the system prompt), and to then output only the translated version of your input in that language (see the sketch below). This keeps the number of input+output tokens low, saving some inference. You can also get creative and instruct it to output multiple variations, change the style/tone/formatting, provide an example sentence containing a single translated word, etc...
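Just to illustrate the idea (not tied to any particular app), here's a minimal sketch against a local Ollama instance on its default port 11434, using its `/api/generate` endpoint; the model name "llama3.2" and the target language (Danish) are placeholders you'd swap for your own:

```python
# Minimal sketch: translation via a local Ollama instance over its HTTP API.
# Assumptions: Ollama is listening on the default http://localhost:11434 and
# a small model (here "llama3.2", just a placeholder) has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

# Permanent instructions live in the system prompt, so each request only
# carries the text to translate -- keeping input/output tokens low.
SYSTEM_PROMPT = (
    "You are a translator. The user sends text in any language. "
    "Reply with ONLY the Danish translation, no explanations or extra formatting."
)

def translate(text: str, model: str = "llama3.2") -> str:
    payload = json.dumps({
        "model": model,
        "system": SYSTEM_PROMPT,
        "prompt": text,
        "stream": False,  # return one complete JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

if __name__ == "__main__":
    print(translate("Where is the nearest train station?"))
```

Whatever client you end up using, the point is the same: everything stays on your own machine/LAN, and the system prompt does the heavy lifting so your per-request input can be just the raw text.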
For local reception, receivers with RTL2832U chips are a cheap option. They are also called RTL-SDR. I have simply been using a long wire as a "random wire antenna". Some of the older dongles also need an upconverter to be able to tune into low HF frequencies:
An upconverter for the RTL-SDR translates low HF frequencies ‘up’ into ones that are receivable by the RTL-SDR. This is a different method to the direct sampling mode used in the V3 dongles to achieve HF reception.
Quoted source: https://www.rtl-sdr.com/a-homebrew-one-transistor-upconverter-for-the-rtl-sdr/
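The arithmetic behind that translation is simple; here's a tiny sketch assuming a 125 MHz upconverter oscillator (a common value, but it varies between converters, so use whatever yours actually has):

```python
# Minimal sketch of what an upconverter does, assuming a 125 MHz local
# oscillator (an assumption -- other converters use e.g. 100 MHz).
UPCONVERTER_LO_HZ = 125_000_000  # check your converter's actual LO frequency

def dongle_tune_freq(hf_freq_hz: float) -> float:
    """Frequency the RTL-SDR must tune to in order to receive a given HF signal."""
    return hf_freq_hz + UPCONVERTER_LO_HZ

def actual_hf_freq(dongle_freq_hz: float) -> float:
    """Reverse mapping: which HF frequency a given dongle tuning corresponds to."""
    return dongle_freq_hz - UPCONVERTER_LO_HZ

if __name__ == "__main__":
    # e.g. a 7.2 MHz signal in the 40 m amateur band shows up at 132.2 MHz
    print(dongle_tune_freq(7_200_000))     # 132200000
    print(actual_hf_freq(132_200_000))     # 7200000
```

Most SDR programs have a frequency offset/shift setting for exactly this, so the display can show the real HF frequency instead of the upconverted one.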
I keep getting a "Database download failed" error, unfortunately, even with network permission enabled...