vluz

joined 2 years ago
[–] [email protected] 5 points 9 months ago

Messing around with the system python/pip and newly installed versions till everything was broken, and only then looking at the documentation.
This was way back in the '00s and I'm still ashamed of how fast and how completely I messed it up.

[–] [email protected] 2 points 9 months ago (1 children)

Just figured out there are 10 places called Lisbon dotted around the US, according to the search.

[–] [email protected] 3 points 1 year ago

Got one more for you: https://gossip.ink/
I use it via a docker/podman container I've made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general

[–] [email protected] 3 points 1 year ago (4 children)

I got cancelled too and chose Hetzner instead. Will not do business with a company that can't get their filters working decently.

[–] [email protected] 3 points 1 year ago (1 children)

Not close enough for V.A.T.S.

[–] [email protected] 7 points 1 year ago (1 children)

Lovely! I'll go read the code as soon as I have some coffee.

[–] [email protected] 2 points 1 year ago

That is much better. It is a very interesting problem, as you put it.

[–] [email protected] 3 points 1 year ago (2 children)

We know remarkably little about how AI systems work

Every single time I see this argument used, I stop reading.

[–] [email protected] 3 points 1 year ago

I do SDXL generation in 4GB at an extreme expense of speed, by using a number of memory optimizations.
I've done this kind of stuff since SD 1.4, for the fun of it. I like to see how low I can push VRAM use.

SDXL takes around 3 to 4 minutes per generation including the refiner, but it works within the constraints.
The graphics cards used are hilariously bad for the task: a 1050 Ti with 4GB and a 1060 with 3GB of VRAM.

I have an implementation running on the 3GB card, inside a podman container, with no RAM offloading, 1 vCPU, and 4GB of RAM.
The graphical UI (Streamlit) runs on a laptop outside the server to save resources.

I'm working on an example implementation of SDXL as we speak, and also on SDXL generation on mobile.
That is the reason I looked into this news; SSD-1B might be a good candidate for my dumb experiments.
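For scale, a back-of-envelope sketch of why 4GB is so tight and why a distilled model helps (the parameter counts below are approximate public figures, my assumption, not from this post):

```python
# Back-of-envelope: fp16 weight footprint alone, before activations,
# text encoders, or the VAE are even loaded into VRAM.
def weights_gib(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate weight size in GiB (fp16 = 2 bytes per parameter)."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

sdxl_unet = weights_gib(2.6)    # SDXL base UNet, ~2.6B params (approx.)
ssd_1b_unet = weights_gib(1.3)  # SSD-1B distilled UNet, ~1.3B params (approx.)

print(f"SDXL UNet:   {sdxl_unet:.1f} GiB")    # ~4.8 GiB: already over a 4GB card
print(f"SSD-1B UNet: {ssd_1b_unet:.1f} GiB")  # ~2.4 GiB: leaves some headroom
```

Hence the aggressive tricks (attention slicing, sequential loading, etc.) just to get SDXL proper to run at all on these cards.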

[–] [email protected] 3 points 1 year ago (1 children)

Oh my Gwyn, this comment section is just amazing.

[–] [email protected] 6 points 1 year ago (1 children)

Goddammit! Don't tell that one, I use it to impress random people at parties.

 

Hi,

For a media project, I need to create dark fantasy themed backgrounds.
It will be heavily inpainted to meet each background needs.

I'm looking for models, loras, styles, examples, tutorials, etc.
Anything you can think around dark fantasy is valuable, please feel free to suggest linked or related subjects or resources.

Thanks in advance for any help.

 

Hi,

This is not exactly my area and I'm lost in a sea of solutions; I need help.
There are so many out there, but I don't know which of them are still maintained, whether they offer a full solution, their time to spin up an instance, etc.

Problem is simple to describe.
I want to set up access to GPU instances in order to run any Python code the project devs have built.
The hardware consists of several servers with GPUs that support vGPU, NVIDIA's GPU virtualization solution.

I'm looking for something similar to https://www.runpod.io/

What open source software can be used to spawn the client machines from the existing hardware pool?

I'm looking into Kubernetes for automation and MAAS from Canonical for the rest. Am I missing something important?
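For the Kubernetes side, a minimal sketch of what one per-dev GPU instance could look like, assuming the NVIDIA device plugin (which exposes the `nvidia.com/gpu` resource to the scheduler) is installed on the nodes; the pod name and container image below are placeholders, not recommendations:

```yaml
# Minimal pod spec requesting one GPU (or vGPU slice) via the NVIDIA device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: dev-gpu-workspace        # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: workspace
      image: nvcr.io/nvidia/pytorch:24.01-py3   # placeholder dev image
      command: ["sleep", "infinity"]            # keep alive for interactive use
      resources:
        limits:
          nvidia.com/gpu: 1     # one GPU/vGPU per dev instance
```

Whether a plain pod, a Deployment, or something like a JupyterHub-style spawner on top fits better depends on how the devs expect to reach their instances.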

Any help or insight would be appreciated.
