autonomoususer

joined 2 years ago
MODERATOR OF
llm
[–] [email protected] 5 points 21 hours ago

Wrong, Starbucks stealth-fires workers.

[–] [email protected] -1 points 6 days ago

F-Droid is more secure.

[–] [email protected] 9 points 6 days ago (2 children)
[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

Spread software freedom ideas to your classmates.

Don't say privacy. Say scam, abuse and control. Say it simply, so anyone can see Zoom is bad. Make it blatant.

File group complaints.

[–] [email protected] 9 points 1 week ago (1 children)

Worse, they ban us from proving it. They don't want us to know. 🚩

[–] [email protected] 2 points 1 week ago

They ban us from proving it.

[–] [email protected] 10 points 1 week ago (8 children)
[–] [email protected] 9 points 1 week ago (4 children)

Remove anti-libre software, like WhatsApp, Instagram and iOS, from your friends' devices.

[–] [email protected] 1 points 1 week ago

Does it include a libre software license text file?

[–] [email protected] 2 points 2 weeks ago

Find a way to use Instagram to drive them to another app, like this: https://lemmy.world/post/21620691

-4
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

They cry when companies profit from their work, while ignoring the most blatant solution from the start: the AGPL.

Now its libre software license text file has been replaced with a fake one, banning us users from freely forking new versions.

Open WebUI v0.6.6+ ... now adds a ... branding ... clause.

The original BSD-3 license continues to apply for all contributions made to the codebase up to and including release v0.6.5.

30
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Peer-to-peer, as it's easier to get an app than to set up a server. It must be libre software and E2EE too, obviously.

 

For example, on WhatsApp, use the whole 25-character profile name limit:

Bob Moved To Signal.org
Alice MovedTo Signal.org
CharlieMovedTo Signal.org

Say Signal.org, not Signal, so they see it is an app.

Use your about section too.

Same on Discord, Steam, Instagram, everywhere.

 

cross-posted from: https://lemmy.world/post/28493612

Open WebUI lets you download and run large language models (LLMs) on your device using Ollama.

Install Ollama

See this guide: https://lemmy.world/post/27013201

Install Docker (recommended Open WebUI installation method)

  1. Open Console, type the following command and press return. This may ask for your password but not show you typing it.
sudo pacman -S docker
  2. Enable the Docker service (it runs on-device, in the background) to start with your device, and start it now.
sudo systemctl enable --now docker
  3. Allow your current user to use Docker.
sudo usermod -aG docker $(whoami)
  4. Log out and log in again for the previous command to take effect.
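(Optional) To confirm the group change took effect after logging back in, a quick check (an addition, not part of the original guide):

# "docker" should appear among your groups
groups
# This should now work without sudo
docker info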

Install Open WebUI on Docker

  1. Check whether your device has an NVIDIA GPU.
  2. Use only one of the following commands.

Your device has an NVIDIA GPU:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

Your device has no NVIDIA GPU:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
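Whichever command you used, it may help to confirm the container came up before moving on (a check added here, not part of the original guide):

# The open-webui container should show a status of "Up"
docker ps --filter name=open-webui
# Follow the startup logs until the server is ready, then press Ctrl+C
docker logs -f open-webui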

Configure Ollama access

  1. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  2. Add the following, save and exit.
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
  3. Restart the Ollama service.
sudo systemctl restart ollama
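To confirm Ollama now listens for connections (assuming its default port, 11434), these added checks should print "Ollama is running" and show the override:

# The Ollama API answers plain HTTP on port 11434
curl http://localhost:11434
# The override should appear in the service environment
systemctl show ollama --property=Environment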

Get automatic updates for Open WebUI (not models, Ollama or Docker)

  1. Create a new service file to get updates using Watchtower once every time Docker starts.
sudoedit /etc/systemd/system/watchtower-open-webui.service
  2. Add the following, save and exit.
[Unit]
Description=Watchtower Open WebUI
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
  3. Enable this new service to start with your device, and start it now.
sudo systemctl enable --now watchtower-open-webui
  4. (Optional) Run the following manually at any time after Docker has started, to update again on demand.
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
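The service above only runs when started, so for genuinely regular intervals a systemd timer is one option. This is a sketch, not part of the original guide, and the timer can only re-trigger the service if you remove RemainAfterExit=true from it:

sudoedit /etc/systemd/system/watchtower-open-webui.timer

[Unit]
Description=Watchtower Open WebUI timer

[Timer]
# Re-run the update service once a day; Persistent catches runs missed while powered off
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

sudo systemctl enable --now watchtower-open-webui.timer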

Use Open WebUI

  1. Open localhost:3000 in a web browser.
  2. Create an on-device Open WebUI account as shown.
 


cross-posted from: https://lemmy.dbzer0.com/post/41844010

The problem is simple: consumer motherboards don't have that many PCIe slots, and consumer CPUs don't have enough lanes to run 3+ GPUs at full PCIe gen 3 or gen 4 speeds.

My idea was to buy 3-4 computers for cheap, slot a GPU into each of them, and use them in tandem. I imagine this will require some sort of agent running on each node, with the nodes connected through a 10GbE network. I can get a 10GbE network running for this project.

Does Ollama or any other local AI project support this? Getting a server motherboard and CPU would get expensive very quickly, but this would be a great alternative.
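For what it's worth, llama.cpp ships an RPC backend that roughly matches this design: an rpc-server agent on every node, with one node spreading model layers across them over the network. A sketch only; the addresses and model path are made up, exact flags vary by build, and 10GbE bandwidth will likely be the bottleneck:

# On each worker node (llama.cpp built with the RPC backend enabled)
rpc-server -H 0.0.0.0 -p 50052
# On the controlling node, offload layers across the workers
llama-cli -m ./model.gguf -ngl 99 --rpc 192.168.1.10:50052,192.168.1.11:50052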

Thanks


28
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]
 

cross-posted from: https://lemmy.world/post/27088416

This is an update to a previous post found at https://lemmy.world/post/27013201


Ollama uses the AMD ROCm library, which works well with many AMD GPUs not listed as compatible, by forcing an LLVM target.

The original Ollama documentation is wrong, as the following cannot be set for individual GPUs, only for all or none, as shown at github.com/ollama/ollama/issues/8473.

AMD GPU issue fix

  1. Check that your GPU is not already listed as compatible at github.com/ollama/ollama/blob/main/docs/gpu.md#linux-support
  2. Edit the Ollama service file. This uses the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  3. Add the following, save and exit. You can try different versions as shown at github.com/ollama/ollama/blob/main/docs/gpu.md#overrides-on-linux
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
  4. Restart the Ollama service.
sudo systemctl restart ollama
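To check that the override took effect, look at the service journal after the restart (an added check; exact log wording varies by Ollama version):

# Look for GPU discovery lines mentioning ROCm or the forced gfx version
journalctl -u ollama -b --no-pager | grep -i -e rocm -e gfx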