__ghost__

joined 1 year ago
[–] __ghost__@lemmy.ml 6 points 1 month ago

Whatever works for you, simple is always better

[–] __ghost__@lemmy.ml 18 points 1 month ago (2 children)

Jellyseerr is your friend. She can request whatever, and you can get alerts to add it, even if your stuff isn't automated

[–] __ghost__@lemmy.ml 1 point 2 months ago* (last edited 2 months ago) (1 children)

You can create a tmpfs mounted anywhere as well; just curious what their setup looked like
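
A minimal sketch of what that could look like as an /etc/fstab entry (the mount point and size here are assumptions, not from the original comment; tmpfs itself is always RAM-backed, only the mount point is arbitrary):

```
# Hypothetical example: 2 GiB RAM-backed scratch space for transcoding
tmpfs  /mnt/transcode  tmpfs  size=2G,mode=1777  0  0
```

The same mount can be created ad hoc with `mount -t tmpfs -o size=2G tmpfs /mnt/transcode`.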

[–] __ghost__@lemmy.ml 2 points 2 months ago (1 children)

From personal experience, Intel QSV wasn't worth the trouble to troubleshoot on my hardware. Mine is a lot older than yours though. VAAPI has worked well on my Arc card

[–] __ghost__@lemmy.ml 4 points 2 months ago (5 children)

Is the tmpfs in RAM?

[–] __ghost__@lemmy.ml 4 points 4 months ago (1 children)

Excellent choice. My Jellyfin server is great; there are buggy things, but I'm committed to the FOSS lifestyle

[–] __ghost__@lemmy.ml 7 points 4 months ago (9 children)

What software are you using to self host and serve your library?

[–] __ghost__@lemmy.ml 7 points 5 months ago
[–] __ghost__@lemmy.ml 4 points 6 months ago (1 children)

I feel you. T9 on my Sidekick in 2008 was better than my current predictive text. At one point my screen was so broken that I was using maybe a 1/4" sliver of the screen to text, and text prediction was solid enough to give actual suggestions

[–] __ghost__@lemmy.ml 3 points 7 months ago

They're acceptable for basic productivity but very sluggish if you're coming from a flagship device. Get an S10-series if you're looking for something cheap and Samsung

[–] __ghost__@lemmy.ml 5 points 9 months ago

Omg I can't believe he's a

[–] __ghost__@lemmy.ml 7 points 9 months ago (2 children)

Commenting just for the cliffhanger
