danielquinn

joined 1 year ago
[–] danielquinn@lemmy.ca 2 points 2 days ago

Each Pi 4 has 8GB of RAM. With six devices, that's 48GB to play with. More than enough for my needs.

[–] danielquinn@lemmy.ca 5 points 3 days ago (2 children)

Actually, as a web guy, I find the ARM architecture more than sufficient. Most of the stuff I build is memory-heavy and CPU-light, so the Pi handles it well.

[–] danielquinn@lemmy.ca 18 points 3 days ago (4 children)

They're fanless and low-power, which was the primary draw to going this route. I run a Kubernetes cluster on them, including a few personal websites (Nginx+Python+Django), PostgreSQL, Sonarr, Calibre, SSH (occasionally) and every once in a while, an OpenArena server :-)

[–] danielquinn@lemmy.ca 53 points 3 days ago (7 children)

Seven Raspberry Pi 4s and one Pi Zero, mounted on tile "shelves" inside some IKEA furniture.

Ho ho ho

[–] danielquinn@lemmy.ca 1 points 1 week ago

Monolith has the same problem here. I think the best fix might be some sort of browser-plugin-based solution where you could click "archive this" and have it push the result somewhere.

I wonder if I could combine a dumb plugin with Monolith to do that... A weekend project perhaps.

[–] danielquinn@lemmy.ca 1 points 1 week ago

Monolith can be particularly handy for this. I used it in a recent project to archive the outgoing links from my own site. Coincidentally, if anyone is interested in that, it's called django-cool-urls.
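
If you've not used it before, the basic invocation is a one-liner that bundles the whole page (images, CSS, and all) into a single HTML file. The URL here is just a placeholder:

$ monolith 'https://example.com/some-article/' -o some-article.html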

[–] danielquinn@lemmy.ca 5 points 2 weeks ago

ExFAT is good for portable devices, but if you're working with something internally, there's no reason not to use EXT4 or NTFS.

[–] danielquinn@lemmy.ca 3 points 2 weeks ago

That's not been my experience. Lots of drives I've bought have been FAT32 out of the box.

[–] danielquinn@lemmy.ca 4 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

  • Keep everything in an external git service. You can use third party services like Codeberg, GitLab, or GitHub, or host your own on your NAS.
  • When you're not working on a project and don't think you'll need to reference it for a while, just delete it from your laptop. The code always lives in git anyway.
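
When you need one of those projects again, it's one clone away. Hypothetical remote and paths, obviously:

$ git clone git@codeberg.org:you/project-name.git ~/projects/project-name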

In terms of local storage, I usually have everything in ~/projects/project-name, and I don't run into tiny file-size limits because I don't use FAT32 filesystems. FAT32 is the default on most USB sticks and external hard drives you buy; you have to reformat those drives to something like EXT4 (Linux) or NTFS (Windows) or you're stuck with FAT32's 4GB file-size limit.
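
Reformatting is quick but destructive, so move anything you care about off the drive first. The device name below is a placeholder; check lsblk before running anything like this:

$ sudo mkfs.ext4 -L storage /dev/sdX1
$ sudo mkfs.ntfs -Q -L storage /dev/sdX1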

[–] danielquinn@lemmy.ca 12 points 2 weeks ago (4 children)

You probably want to look into Health Checks. I believe you can tell Docker to "start service B when service A is healthy", so you can define your health check with a script that depends on Tailscale functioning.
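
Something like this in a compose file, for example. The image names are placeholders, and I'm assuming tailscale status exits non-zero while the tunnel is down, so check that before relying on it:

services:
  tailscale:
    image: tailscale/tailscale
    healthcheck:
      test: ["CMD", "tailscale", "status", "--peers=false"]
      interval: 10s
      retries: 12
  app:
    image: your-app-image
    depends_on:
      tailscale:
        condition: service_healthy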

[–] danielquinn@lemmy.ca 4 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

So my first impression is that the requirement to copy-paste that elaborate SQL to get the schema is clever but not sufficiently intuitive. Rather than saying "Run this query and paste the output", you say "Run this script in your database" and print out a bunch of text that is not a query at all but a one-liner Bash script that relies on the existence of pbcopy -- something that (a) doesn't exist on many default installs (b) is a red flag for something that's meant to be self-hosted (why am I talking to a pasteboard?), and (c) is totally unnecessary anyway.

Instead, you could just say: "Run this query and paste the result in this box" and print out the raw SQL only. Leave it up to the user to figure out how they want to run it.

Alternatively, you could say something like "Run this on your machine and copy/paste the output":

$ curl 'https://app.chartdb.io/superquery.sql' | psql --user USERNAME --host HOSTNAME DBNAME

In the case of the cloud service, it's also not clear whether the data is stored server-side or client-side in LocalStorage. I would think the latter would be preferable.

[–] danielquinn@lemmy.ca 3 points 3 weeks ago

I had no idea! Thanks for the tip.

 

From time to time, often after I've resumed from sleep or finished playing a Steam game, one of my CPU cores is pinned at 100% with no indication of what might be doing it. htop, btop, and the GNOME system monitor all show the same thing: CPU0 at 100% while the rest are doing next to nothing, and no process in particular appears to be using those resources.

If I restart, it's back to normal. Sometimes I can play a Steam game or let the computer go to sleep without this happening, but it happens often enough that it's annoying and confusing, so I'd like to know whether there's a way to either (a) diagnose which processes are using which CPU cores, or (b) somehow "reset" these readings to make sure something isn't just being misreported.

This is a desktop system running Arch & GNOME.
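
For reference, the closest I know how to get to (a) from the command line is something like the following: ps can show which core each process last ran on, and pidstat (from the sysstat package) samples per-process CPU usage over time.

$ ps -eo pid,psr,pcpu,comm --sort=-pcpu | head -n 20
$ pidstat -u 2 5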

 

I'm working on some materials for a class wherein I'll be teaching some young, wide-eyed Windows nerds about Linux, and we're including a section we're calling "foot guns": basically, ways you might shoot yourself in the foot while meddling with your newfound Linux powers.

I've got the usual one: forgetting the . in lines like this:

$ rm -rf ./bin

As well as a bunch of other fun stories, like that one time I mounted my Linux home folder into my Windows machine, forgot I'd done that, and then deleted a parent folder.

You know, the war stories.

Tell me yours. I wanna share your mistakes so that the students can learn from them.

Fun (?) side note: somehow, my entire ${HOME}/projects folder has been deleted like... just now, and I have no idea how it happened. I may have a terrible new story to add if I figure it out.

 

[For reference, I'm talking about Ash in Alpine Linux here, which is part of BusyBox.]

I thought I knew the big differences, but it turns out I've had false assumptions for years. Ash does support [[ double square brackets ]] and (as best I can tell) all of Bash's logical trickery inside them. It also supports ${VARIABLE_SUBSTRINGS:5:12}, which was another surprise.

At this stage, the only things I've found that Bash can do that Ash can't are:

  • Arrays, which Bash doesn't seem to do well anyway
  • Brace expansion, which is awesome but I can live without it.

What else is there? Did Ash use to be more limited? The double square bracket thing really surprised me.
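
For anyone who wants to poke at this themselves, here's roughly the kind of test script I was running in an Alpine container. The pattern match inside the double brackets is the part I'd trust least across BusyBox versions:

#!/bin/ash
NAME="hello world"
[[ "${NAME}" == hello* ]] && echo "double brackets work"
echo "substring: ${NAME:0:5}"   # prints "hello"
echo {a,b,c}                    # no brace expansion: prints {a,b,c} literally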

 

The other day someone was complaining about the new ad-blocker blocker on YouTube, and I mentioned that it might be fun to write a Firefox extension that would just load up yt-dlp and play the video through mpv.

It turns out, writing a Firefox extension is easy and tricking Firefox into launching yt-dlp isn't much harder (though it does require some annoying configuration on the user's end).
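
The end result is roughly the same as doing this by hand, since mpv will hand YouTube URLs off to yt-dlp on its own, assuming yt-dlp is installed (placeholder URL):

$ mpv 'https://www.youtube.com/watch?v=VIDEO_ID'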

Anyway, if you're a Linux user, feel free to try it out. I don't know how much more effort I'm going to pour into this, but as an exercise in "can this be done", it was a pretty good way to spend a few hours on a Friday night.
