pe1uca

joined 2 years ago
[–] [email protected] 13 points 2 weeks ago (3 children)

My question would be: if you're only archiving repos, why do you need a forge?
A simple git clone <repo> into your archival directory would be enough to store them; there's no need for forge software.
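
For example, a bare mirror keeps every branch and tag (URL and path are placeholders):

git clone --mirror https://example.com/some/repo.git /archive/repo.git
git -C /archive/repo.git remote update   # refresh the archive later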

Are there any other features of Gitea you use?

[–] [email protected] 1 points 2 weeks ago

Yeah, it was $2.5/TB/month, now it's $4.1/TB/month.
Still cheaper than Backblaze's $6, which seems to be the only other option everyone suggests, so it'll have to do for the moment.

[–] [email protected] 1 points 4 weeks ago (1 children)

I'm assuming you mean updating every service, right?
If you don't need anything new from a service, you can stay on your current version for as long as you like, provided your services aren't public.
You could just install Tailscale and connect everything inside the tailnet.
From there you'll only need to update Tailscale (and probably your firewall, Docker, and OS), or act when any of the services you use receives a security update.

I've lagged several versions behind on Immich because I don't have time to monitor the updates and handle the breaking changes, so I just use a version until I have free time.
Then it's just an afternoon of reading through the breaking changes, updating the compose file and config, and running docker compose pull && docker compose up -d.
In theory there could be issues here; that's where your backups come into play, but I've never had any issues.

The rest of my 20+ services are just running there because I don't need anything new from them, or I can just mindlessly run the same compose commands to update them.
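
If each service lives in its own directory, that mindless update can even be a one-liner (a sketch, assuming one compose file per directory under ~/services):

for d in ~/services/*/; do (cd "$d" && docker compose pull && docker compose up -d); done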

There were only one or two times I had to actually go into some kind of emergency mode because a service suddenly broke and I had to spend a day or two figuring out what happened.

[–] [email protected] 14 points 4 weeks ago (3 children)

I'd say Syncthing is not really a backup solution.
If for some reason something happens to a file on one side, it'll also happen to the file on the other side, so you'll lose your "backup".
Plus, what guarantees your friend won't go snooping around or making their own copies of your data?
Use proper backup software to send your data offsite (Restic, Borg, Duplicati, etc.), which will send it encrypted (use a password manager to set a strong and unique password for each backup).

And follow the 3-2-1 rule MangoPenguin mentioned.
Remember, this rule is just for data you can't find anywhere else: your photos, files you generated yourself, databases of the services you self-host, stuff like that. If you really want, you could back up hard-to-find media, but if you already have a torrent file, don't bother backing that media up.
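
A minimal sketch with restic, for example (repository location and folder are placeholders; the password is the one from your password manager):

export RESTIC_PASSWORD='from-your-password-manager'
restic -r sftp:user@offsite:/backups/photos init     # first time only
restic -r sftp:user@offsite:/backups/photos backup ~/photos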

[–] [email protected] 10 points 4 weeks ago (7 children)

What do you mean Jellyfin uses the *arr suite?
I have Jellyfin with media in different directories; I just try to match the format the docs mention.
So, as long as I can get the media in any way, I can put it in any directory and it'll be added to the library.
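
For example, a layout roughly matching what the docs describe (titles are placeholders):

Movies/Some Film (2010)/Some Film (2010).mkv
Shows/Some Show/Season 01/Some Show S01E01.mkv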

Is it similar with Odin? Or does it directly fetch the media from where you want to download it?

[–] [email protected] 14 points 1 month ago (3 children)

FreshRSS has been amazing. As you said, other readers have other goals in mind, and it seems RSS is just an add-on.

On Android there are also no good clients; I've been using the PWA, which is good enough.
There are several extensions for mobile menu improvements. I have Smart Mobile Menu, Mobile Scroll Menu, and Touch Control (it works great on Firefox, but not on Brave, where it's too sensitive, so YMMV).

There's also ReadingTime, but some feeds don't send the whole body of the post, so you might only see a 1-minute read because of that.


The AutoTTL extension processes the feeds and updates each one only when it's likely to have new items, instead of every X minutes as configured by FreshRSS.
Still, there's a problem when MaxTTL is hit: all feeds are allowed to update at once and you might run into rate limits, so I developed a rate limiter. There are also problems with AutoTTL because of how extensions are loaded and the HTTP code reported by FreshRSS.


I found this project which receives newsletter emails and turns them into an RSS feed. I've only used it for one feed and I've only received one entry; not sure if the newsletter is that bad or if the site struggles to receive/show them. I haven't dug into it further.
https://github.com/leafac/kill-the-newsletter

There's also this repo linking a lot of sites with feeds; some sites which don't offer feeds directly are provided via FeedBurner (which seems to be a Google service; Wikipedia says it's "primarily for monetizing RSS feeds, primarily by inserting targeted advertisements into them", so use those at your own discretion): https://github.com/plenaryapp/awesome-rss-feeds

[–] [email protected] 5 points 1 month ago

Just for privacy reasons?
I can decouple the traffic fingerprinting of some sites, like Amazon, YouTube, Reddit, etc.
And since I have a Squid proxy routed through the VPN, set up via a couple of Docker containers, I have a Firefox container that always sends its traffic over the proxy, which lets me easily search for stuff both outside and inside the VPN.

Aside from that, I also use the proxy to send requests from scripts over the VPN so my real IP doesn't get rate limited.
And what VPNs are actually for: accessing geo-blocked content.
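
In scripts it's just one flag (the proxy address is whatever the Squid container is mapped to):

curl -x http://localhost:3128 https://ifconfig.me   # exits via the VPN
curl https://ifconfig.me                            # exits via my real IP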

[–] [email protected] 15 points 1 month ago

I've always used them as bookmarks, especially now that they have lists.
There are projects with tens of thousands of stars whose last commits are from 2-3 years ago, or are only dependabot commits, or with 0 open issues where every last closed one is from stalebot because the owner doesn't care to maintain the repo.

Stars are not a way to know if a repo is good.

[–] [email protected] 2 points 1 month ago

Maybe you could submit an issue to the repo asking for a way to change the format of the saved folders.
(I'm thinking of something similar to how Immich lets you change some formats.)

In my instance the names look like some sort of timestamp. I'm not sure if the code uses them in a meaningful way, so probably the solution would be to create symlinks named after the site (or some other format) while keeping the timestamped folder, so the rest of the code can still rely on it.
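
A rough sketch of that idea, where get_site_name is a hypothetical helper that extracts the site from the folder's contents:

mkdir -p by-site
for d in */; do
  site=$(get_site_name "$d")   # hypothetical: read the site name from inside the folder
  ln -s "../$d" "by-site/$site-${d%/}"
done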

[–] [email protected] 5 points 1 month ago (1 children)

I bought this one and it's been wonderful for running 20+ services. A few of those are Forgejo (GitHub replacement), Jellyfin (Plex but actually self-hosted), Immich (Google Photos replacement), and Frigate (to process one security camera).
(Only Immich does transcoding; Jellyfin already has all my media preprocessed using my laptop's GPU.)

I bought it barebones since I already had the RAM and an SSD, plus I wasn't going to use Windows. Over this year I've bought another SSD and an HDD.

https://aoostar.com/products/aoostar-r7-2-bay-40t-nas-storage-amd-ryzen-7-5825u-mini-pc8c-16t-up-to-4-5ghz-with-w11-pro-ddr4-ram-2-m-2-nvme-%E5%A4%8D%E5%88%B6

I bought it on Amazon, but you could buy it from the seller directly, although I'd recommend Amazon so you don't have to deal with the import and you get an easy return policy.

[–] [email protected] 3 points 1 month ago (1 children)

Gameloft was a sister company, wasn't it? What's happened to them?
I've seen a few trailers for their games in some Nintendo Directs. Are they any good? Or have they followed a similar path to Ubisoft?

[–] [email protected] 23 points 1 month ago

I'd say it's one thing, and better, to be tracked only at the account level than to be tracked at the traffic level.

That way you know only your history on the site can be used, as opposed to any other form of fingerprinting the sites might do at the browser, cookie, or IP level.

 

I'm having an issue making a container running in gluetun's network access the host.

In theory there's a variable for this: FIREWALL_OUTBOUND_SUBNETS
https://github.com/qdm12/gluetun-wiki/blob/main/setup/connect-a-lan-device-to-gluetun.md#access-your-lan-through-gluetun
When I include 172.16.0.0/12 I can ping the IP assigned with host-gateway, but I can't curl anything.

The command just stays like this until it times out:

# curl -vvv 172.17.0.1
*   Trying 172.17.0.1:80...

I also tried adding 100.64.0.0/10 to connect to Tailscale, but it's the same behavior: ping works and curl times out.

Any other request works properly when connected via the VPN configured in gluetun.
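
In docker run terms, the relevant bits of the gluetun side look roughly like this (the image and both options are from gluetun's docs; VPN provider settings omitted):

docker run --cap-add=NET_ADMIN \
  -e FIREWALL_OUTBOUND_SUBNETS=172.16.0.0/12 \
  --add-host=host.docker.internal:host-gateway \
  qmcgaw/gluetun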

Do you guys have any idea what I might be missing?

 

I'm trying to see how active a project is, but dependabot spam makes it annoying to find actual commits and to know whether those commits are relevant.

There's no need for me to know chai was updated from 5.1.1 to 5.1.2; I want to see the most recent actual features implemented.
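
On a local clone I can work around it by filtering on the usual dependabot commit message prefix (the 'Bump' pattern is an assumption about how those commits are titled):

git log --oneline --invert-grep --grep='^Bump '

But that doesn't help when skimming a repo on a forge's web UI.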

4
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/quebec
 

Do you know if there will be any problems with this situation?

I just refused the rent increase, and the company sent the case to the TAL.
If I do a lease transfer, will the next tenant have problems?
I'm asking in case it could scare off potential candidates, or in case I'd have problems myself.

I also think the company could reject both processes because both apartments belong to them, is that right?

What do you recommend?

My apartment is a 3 1/2, and the other is a 4 1/2, for only ~$30 more.

 

So, I'm self-hosting Immich. The issue is we tend to take a lot of pictures of the same scene/thing to later pick the best, so we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate-finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at the filesystem level, for these images to be compressed together.
Maybe deduplication?
Have any of you handled a similar situation?
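
From what I understand, block-level deduplication (e.g. duperemove on btrfs or XFS) only merges byte-identical extents, so two 95%-similar JPEGs would almost never share blocks; it would only help with exact copies, like this sketch (path is a placeholder):

duperemove -dr /mnt/photos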

 

I was using SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS();
But these have been deprecated: https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_found-rows

The recommended way now is to first query with a LIMIT and then query again without it, selecting COUNT(*).
My query is a bit complex and joins a couple of tables with a large number of records, which makes each select take up to 4 seconds, so my process now takes double the time compared to when I just kept using FOUND_ROWS().
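
Schematically, with a hypothetical table for illustration, the two approaches are:

-- old, deprecated pattern
SELECT SQL_CALC_FOUND_ROWS id, name FROM items WHERE active = 1 LIMIT 10;
SELECT FOUND_ROWS();

-- recommended replacement: query twice
SELECT id, name FROM items WHERE active = 1 LIMIT 10;
SELECT COUNT(*) FROM items WHERE active = 1;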

How can I go back to running the select a single time and still get the total number of rows found without the LIMIT?

 

cross-posted from: https://lemmy.pe1uca.dev/post/1512941

I'm trying to configure some NFC tags to automatically open an app, which is easy: you just have to type the package name.
But I'm wondering how I can launch the app into a specific activity.

Specifically, when I search for FitoTrack on my phone, I get the option to launch the app directly into the workout I want to track, so I don't have to open the app, tap the FAB, tap "Record workout", and then select the workout.
So I want a tag which will automatically launch the app into a specific workout.

How can I find out what data I need to put into the tag to do this?

Probably looking at the code will give me the answer, but that won't apply to closed-source apps, so is there a way to list all the ways my installed apps can be launched?
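
I imagine adb can expose some of this (package and activity names below are placeholders):

adb shell dumpsys package com.example.app            # dumps activities and intent filters
adb shell am start -n com.example.app/.MainActivity  # test-launch a specific activity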

 

I'm using https://github.com/rhasspy/piper mostly to create some audiobooks and read some posts/news, but the voices available are not always comfortable to listen to.

Do you guys have any recommendation for a voice changer to process these audio files?
Preferably one with a CLI so I can include it in my pipeline for processing RSS feeds, but I don't mind working through a UI.
Bonus points if it can process audio streams.
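
A crude stand-in would be a plain pitch shift with sox at the end of the pipeline (the piper flags are from its README; the pitch amount is arbitrary, and this is simple pitch shifting, not real voice conversion):

echo 'some text' | piper --model en_US-lessac-medium.onnx --output_file out.wav
sox out.wav shifted.wav pitch 300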

 

cross-posted from: https://lemmy.pe1uca.dev/post/1434359

I was trying to debug an issue I have connecting to a NAS, so I was checking the UFW logs and found a lot of connections from my Chromecast HD (Android TV) being blocked on different ports of the local IP.

Sometimes I use Jellyfin, but that's over Tailscale, so there shouldn't be any traffic over the local IP, just over Tailscale's IP.
And there shouldn't be any traffic right now anyway, since I wasn't using it and didn't have Tailscale on.

The ports seem random; sometimes one is tried twice back to back, but afterwards another random port is tried.

After seeing this I enabled UFW on my daily machine, and the same type of logs showed up.

So, do you guys know what could be happening here?
Why is the Chromecast trying to access random ports on devices in the same network?
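
I guess the next step is capturing the traffic to see what it actually is; something like this tcpdump sketch (interface and the Chromecast's IP are placeholders):

sudo tcpdump -ni eth0 src 192.168.1.50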

 

I've only ever used ufw, and just now I had to run this command to fix an issue with Docker:
sudo iptables -I INPUT -i docker0 -j ACCEPT
I don't know why I had to run it to make curl work.

So, what exactly did I just do?
This is behind my home router, which already rejects input from WAN, so I'm guessing it's fine, right?

I'm asking because the image I'm running at home was previously running on a VPS with a public IP, and this makes me wonder if I have something open there without knowing :/

ufw is configured to deny all incoming, but I learned Docker bypasses this if you publish ports like 8080:8080 instead of 127.0.0.1:8080:8080. And I confirmed it by accessing the IP and port.
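
The difference in docker run terms (image and port are placeholders):

docker run -p 8080:8080 some-image             # published on all interfaces, bypasses ufw
docker run -p 127.0.0.1:8080:8080 some-image   # only reachable from the host itself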
