Have you considered gollum https://github.com/gollum/gollum ?
A simple, Git-powered wiki with a local frontend and support for many kinds of markup and content.
I used it a long time ago, but then I wanted to try the next shiny note-taking app…
I set up a server with all my stuff on it, use Syncthing for syncing my files, and self-host my services. I mostly use vanilla configs for apps and prefer distros with good defaults.
Some time ago I switched to Bluefin, and stopped distro-hopping 😅
I also use miniflux. I have used it for more than a year and have not looked for alternatives, which is a good sign.
I use Flux News on android to consume my feeds. https://github.com/KevinCFechtel/FluxNews
I agree. I learned and used Emacs and org-mode for several years. With age, I now want simpler tools that do not need extensive configuration. I mainly use Spyder and VS Code for Python coding.
Me too. I use Uptime Kuma to send the API request; that way I also get uptime status 🙂
Yes, that is correct. TL;DR: threads run code one at a time but can access the same data; processes are like running Python many times, so they can run code simultaneously, but sharing data between them is cumbersome.
If you use multiple threads, they all run in the same Python instance and can share memory (i.e. objects/variables can be shared). Because of the GIL (explained in another comment), the threads cannot run Python code at the same time. This is fine if you are IO bound, but not if you are CPU bound.
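A minimal sketch of the IO-bound case (the `fake_io` function and the 0.1 s sleep are just stand-ins for real IO like network requests):

```python
import threading
import time

shared = []  # threads share memory, so they can all append to this list

def fake_io(n):
    time.sleep(0.1)  # simulates waiting on IO; the GIL is released while sleeping
    shared.append(n)

threads = [threading.Thread(target=fake_io, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # all four results arrive, despite "one thread at a time"
```

Because each thread spends its time waiting rather than computing, the four sleeps overlap and the whole thing finishes in roughly 0.1 s instead of 0.4 s.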
If you use multiprocessing, it is like running Python (from the terminal) multiple times. There is no shared memory, and there is a large overhead since you have to start up Python many times. But if you have large calculations that take a long time and can run in parallel, it will be much faster than threads because it can use all CPU cores.
If these processes need to share data, it gets more complicated. You need special mechanisms such as queues and pipes, and if you need to share many MB of data, that takes noticeable time in my experience (tens of milliseconds).
If you need to do large calculations, using NumPy functions or Numba may be faster than multiple processes thanks to their optimizations. But if you need to crunch a lot of data, multiprocessing is usually the way to go.
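For example, a vectorised NumPy call pushes the loop into optimised C inside a single process, with none of the startup or data-transfer overhead of multiprocessing (the array size here is arbitrary):

```python
import numpy as np

n = 1000
x = np.arange(n, dtype=np.float64)

# Vectorised sum of squares: one C loop, no Python-level iteration,
# no process startup and no data copied between processes.
total = np.sum(x * x)

print(total)  # 332833500.0
```

For a single reduction like this, the NumPy version typically wins; multiprocessing starts paying off when each chunk of work is big enough to amortise the overhead.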
If I remember correctly, I just replaced gitea with forgejo in the `image:` line of my docker-compose file, and it just worked.
That was a couple of versions back, so I don't know if it still works.
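The swap was roughly this (the service name and version tag here are illustrative; check the Forgejo docs for the current image tag and migration notes):

```yaml
services:
  git:
    # before: image: gitea/gitea:1.20
    image: codeberg.org/forgejo/forgejo:1.20  # was a drop-in replacement at the time
```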
I'm using leng in a dedicated LXC container in Proxmox.
https://github.com/cottand/leng
I'm using the defaults plus some local DNS lookups. Works fine for my use, and it's lighter than Pi-hole. No web UI, though.
Which apps are you testing?
I set up MinIO S3 for testing myself, but found that most of my Docker services don't really support it. So I went back to good old folders.
I use nforwardauth. It is simple, but only supports username/password.
Yes, regular markdown notes have been a good decision 😅
In the beginning, the query results were stored in the markdown files, which could be useful when reading them in another app. But now I just keep the query code. I think there were reasons for the change.
I'm glad to hear things have cooled down. Does it take much effort to understand and use the templating stuff? I just remember that templates got pushed to a different view, and I needed some header tags to get it working.
So do you like spaces or not? I never got that far with SilverBullet. And I haven't used Trilium. I loved Evernote when it came out, but it made me aware of the value of maintaining my own data.
Now I try to keep my data in a directory structure rather than in databases.
I use it too. I am too old to tinker with my OS; Bluefin has some nice defaults and stuff just works (mostly).