rxxrc

joined 4 years ago
[–] [email protected] 8 points 3 weeks ago* (last edited 2 weeks ago)

Libnotify backends are D-Bus services, which isn't really something you'd want to implement in a shell script. Going by some source code I just found, it looks pretty straightforward to do in Python, so that's one option.
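Roughly, a minimal backend would look something like this (a sketch only -- it assumes the dbus-python and PyGObject packages are installed, implements only the bare minimum of the org.freedesktop.Notifications interface, and just prints instead of showing a popup):

#!/usr/bin/env python3
import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

class NotificationService(dbus.service.Object):
    # Called by libnotify clients; the signature comes from the Desktop
    # Notifications spec.
    @dbus.service.method("org.freedesktop.Notifications",
                         in_signature="susssasa{sv}i", out_signature="u")
    def Notify(self, app_name, replaces_id, app_icon, summary, body,
               actions, hints, expire_timeout):
        # Hand the text off to a script/hook here instead of drawing a popup.
        print(f"{app_name}: {summary} - {body}")
        return 0  # notification id

    @dbus.service.method("org.freedesktop.Notifications", out_signature="ssss")
    def GetServerInformation(self):
        # name, vendor, version, spec version
        return ("script-notifier", "example", "0.1", "1.2")

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()
name = dbus.service.BusName("org.freedesktop.Notifications", bus)
service = NotificationService(bus, "/org/freedesktop/Notifications")
GLib.MainLoop().run()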

The easier option would be to use an existing notification daemon that lets you disable the default GUI and specify a script to run as a hook, but I don't actually know of any like that.

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago)

Are you aware of Redlib? It's a self-hostable, privacy-focused frontend for Reddit. I've never had a problem with old.reddit, but Redlib has a somewhat more modern UI if that's what you're after. There are a bunch of public instances if you don't want to host it yourself.

Otherwise I'm sure you could use uMatrix to disable the tracking (can't give detailed instructions sorry), but I'd argue hitting Reddit's domain at all is already less than ideal if you're trying not to be tracked.

[–] [email protected] 2 points 2 months ago

Is there a reason you can't use the generic CSV format?

Regardless, I've tested it, and it doesn't look like those IDs are used during import. Import works perfectly fine with a zip file containing an unencrypted JSON file, in the format produced by a Proton Pass export, with all those base64 strings (itemId, itemUuid, shareId) removed or blanked out:

JSON example

{
  "encrypted": false,
  "userId": "",
  "vaults": {
    "": {
      "name": "test",
      "description": "",
      "display": {
        "color": 0,
        "icon": 0
      },
      "items": [
        {
          "data": {
            "metadata": {
              "name": "test-login",
              "note": ""
            },
            "extraFields": [],
            "type": "login",
            "content": {
              "itemEmail": "",
              "password": "password",
              "urls": [],
              "totpUri": "",
              "passkeys": [],
              "itemUsername": "username"
            }
          },
          "state": 1,
          "aliasEmail": null,
          "contentFormatVersion": 6,
          "createTime": 1733128994,
          "modifyTime": 1733128994,
          "pinned": false
        }
      ]
    }
  },
  "version": "1.25.0"
}

When you re-export those imported values, they get new IDs even if you include the old IDs from the original export, so the old ones clearly aren't being used. My guess is they're just random UUIDs of some sort.
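If it helps, this is the sort of thing I mean by blanking them out -- a quick Python sketch that walks the export JSON and drops those keys wherever they appear (the file names inside and outside the zip are placeholders, not assumptions about the actual export layout):

import json
import zipfile

def strip_ids(node):
    # Recursively drop itemId/itemUuid/shareId wherever they appear.
    if isinstance(node, dict):
        for key in ("itemId", "itemUuid", "shareId"):
            node.pop(key, None)
        for value in node.values():
            strip_ids(value)
    elif isinstance(node, list):
        for value in node:
            strip_ids(value)

with zipfile.ZipFile("protonpass_export.zip") as zf:      # placeholder name
    data = json.loads(zf.read("export.json"))              # placeholder name

strip_ids(data)

with zipfile.ZipFile("stripped_export.zip", "w") as zf:
    zf.writestr("export.json", json.dumps(data, indent=2))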

 

I'm trying to automate the creation of WireGuard profiles to connect to various Proton VPN servers. As far as I can tell, when you generate one online through account.proton.me:

  • The client generates a private key in-browser.
  • Client POSTs the corresponding public key, along with the chosen server and some other parameters, to /api/vpn/v1/certificate.
  • Server registers the given public key and returns the parameters that should be used to construct the config file.
  • Client combines returned parameters with the private key to create the final config file.

I am attempting to replicate this process with a key generated using wg:

wg genkey | tee privkey.key | wg pubkey > pubkey.key
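(For reference, the same kind of keypair can also be generated without the wg tool -- WireGuard keys are just base64-encoded raw X25519 keys. A Python sketch, assuming the cryptography package:)

import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

private_key = X25519PrivateKey.generate()
# 32 raw bytes, base64-encoded -- same format wg genkey produces
private_b64 = base64.b64encode(
    private_key.private_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PrivateFormat.Raw,
        encryption_algorithm=serialization.NoEncryption(),
    )
).decode()
public_b64 = base64.b64encode(
    private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
).decode()
print(private_b64, public_b64)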

However when sending this pubkey to the server (leaving everything else exactly as captured from a working in-browser request), it responds with:

{
  "Code": 2001,
  "Error": "Unable to read the key, please provide a valid EC key",
  "Details": {}
}

Replacing my custom pubkey with a pre-existing pubkey from a config generated through the Web UI instead returns "ClientPublicKey fingerprint conflict, please regenerate a new key", so I don't think I'm messing up the request format.

My questions are:

  • Is there a better/more official way to do this? I couldn't find anything searching.
  • Why does this not work? Surely wg creates valid EC keys? Does Proton have some additional constraints on valid keys for some reason?

I don't have much (or really any) experience with WireGuard, so perhaps I'm missing something obvious? Any help would be appreciated.
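For completeness, the request I'm sending is shaped roughly like this (Python sketch -- the ClientPublicKey field name is a guess based on the error message above, the other body parameters and auth headers are placeholders for what I captured from the browser, and the full URL is assumed from the endpoint path):

import requests

pubkey = open("pubkey.key").read().strip()

captured_headers = {}  # auth/session headers copied from the browser request
captured_params = {}   # server selection and other captured body parameters

resp = requests.post(
    "https://account.proton.me/api/vpn/v1/certificate",  # assumed full URL
    headers=captured_headers,
    json={"ClientPublicKey": pubkey, **captured_params},
)
print(resp.status_code, resp.json())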

[–] [email protected] 3 points 2 months ago

I had never heard of coffee tonic, but I love both coffee and tonic, and it's rolling into summer here. I am absolutely going to try this!

[–] [email protected] 17 points 2 months ago (3 children)

I'm on Wayland these days, but if you happen to be using X11 this is the homebrew solution I used to use:

xdotool type --delay 50 "$(xclip -o -sel c)"

The --delay argument specifies the delay in milliseconds between keystrokes; if you go too low on that it tends to break things.

Interested to see what solrize comes up with, because this method definitely has drawbacks -- there's no way to interrupt it, and if you accidentally paste something large it takes a long time to finish due to the forced delays.

I've never really had the need for a Wayland version, but I don't see why subbing ydotool for xdotool and wl-paste for xclip wouldn't work.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

I'm not sure the invidious: protocol supports live streams; it seems to only fetch a single fragment from the HLS stream. What you're trying works for me using a direct Invidious instance URL, e.g. https://inv.nadeko.net/watch?v=cmkAbDUEoyA.

[–] [email protected] 10 points 2 months ago* (last edited 2 months ago) (1 children)

For fun I did a quick check, and based on GEBCO elevation data this looks like about 20m of sea-level rise (I'm guessing it's exactly 20m -- I assume whoever made the image picked a round number).

Hacked-together graphic showing Florida with sea level rise causing approximately the same coastline as the OP.

I could have posted what 2m looks like but at this scale it just looks like current Florida.
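If anyone wants to play with it, the check was roughly this sort of thing (Python sketch assuming the rasterio and matplotlib packages and a GEBCO GeoTIFF covering Florida -- the file name is a placeholder):

import rasterio
import matplotlib.pyplot as plt

RISE_M = 20  # sea-level rise to visualise, in metres

with rasterio.open("gebco_florida.tif") as src:  # placeholder file name
    elevation = src.read(1)

# True where the surface sits at or below the new sea level
# (includes the existing ocean, so the mask shows the new coastline).
flooded = elevation <= RISE_M
plt.imshow(flooded, cmap="Blues")
plt.title(f"Area at or below +{RISE_M} m")
plt.show()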

[–] [email protected] 8 points 2 months ago

shopt -s dotglob will make * include .dotfiles.

[–] [email protected] 8 points 3 months ago (1 children)

That's just a one-time pad with extra steps.

[–] [email protected] 4 points 3 months ago

Australia’s about as sparsely populated ...

Sorry, what? Australia's population density is 3.6/km²; the US's is 33.6/km², almost 10 times higher. Even if you fudge it by treating the swathes of uninhabited desert as an outlier and ignoring them, you're still dealing with a raw number of people lower than the population of Texas.

[–] [email protected] 37 points 5 months ago (1 children)

Are we really so far down the "obligatory memetic envelope because apparently just stating opinions isn't socially acceptable any more" slope that we've dropped past "can't stop thinking about x lmao" and on to "i was talking to my sister and, get this, i said x"?

[–] [email protected] 1 points 5 months ago

I guessed the same, but according to Wikipedia:

The name wallaby comes from Dharug walabi or waliba.

I'm not sure how modern anglicisation works but I assume what's given there is considered the most accurate spelling of the indigenous word. So "wallaby" isn't too far off.

 

All our servers and company laptops went down at pretty much the same time. Laptops have been bootlooping to blue screen of death. It's all very exciting, personally, as someone not responsible for fixing it.

Apparently caused by a bad CrowdStrike update.

Edit: now being told we (who almost all generally work from home) need to come into the office Monday as they can only apply the fix in-person. We'll see if that changes over the weekend...
