MSids

joined 11 months ago
[–] MSids@lemmy.world 3 points 2 weeks ago

Hah, yes that was an odd placement. It seems like a non-issue though.

[–] MSids@lemmy.world 8 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I very reluctantly put a new Mac mini on order last Sunday. I didn't feel great about it, but I was done with Windows for a while, at least for home use.

[–] MSids@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago)

Plex is excellent, and even if you prefer the features or interface of Jellyfin, you should never expose any application (Plex, Jellyfin, or otherwise) directly to the Internet. That should be non-negotiable. Plex handles external access through the mobile/desktop apps and app.plex.tv by brokering client connections into your network, with no NAT/PAT rule on your router or firewall.

For a music library, even a small one, tracks should have proper metadata applied and be organized into a consistent directory structure. Plex provides guidance on this here: https://support.plex.tv/articles/200265296-adding-music-media-from-folders/

My own strategy deviates slightly from Plex's file and directory naming scheme, but it works perfectly. I start with high-quality music, mostly from Bandcamp, and process it through MusicBrainz Picard into ALBUMARTIST\YYYY - ALBUMNAME\01 - TRACKNAME.FLAC. Picard sets the metadata and makes sure there is an album cover image as well.
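Picard's file-naming script handles that layout for me, but if you'd rather script it yourself, here's a rough Python sketch of the same structure using the mutagen library (the source/library paths and tag fallbacks are just placeholders, not anything Plex or Picard requires):

```python
# Minimal sketch: lay out FLAC files as ALBUMARTIST/YYYY - ALBUMNAME/NN - TRACKNAME.flac
# Assumes the tags were already written by Picard; paths are illustrative only.
import re
import shutil
from pathlib import Path
from mutagen.flac import FLAC  # pip install mutagen

SOURCE = Path("~/music/incoming").expanduser()   # hypothetical staging folder
LIBRARY = Path("~/music/library").expanduser()   # hypothetical library root

def safe(name: str) -> str:
    """Strip characters that are invalid in Windows/SMB paths."""
    return re.sub(r'[\\/:*?"<>|]', "_", name).strip()

for flac_path in SOURCE.rglob("*.flac"):
    tags = FLAC(flac_path)
    album_artist = safe(tags.get("albumartist", ["Unknown Artist"])[0])
    album = safe(tags.get("album", ["Unknown Album"])[0])
    year = tags.get("date", ["0000"])[0][:4]
    track_no = tags.get("tracknumber", ["0"])[0].split("/")[0].zfill(2)
    title = safe(tags.get("title", [flac_path.stem])[0])

    dest = LIBRARY / album_artist / f"{year} - {album}" / f"{track_no} - {title}.flac"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(flac_path), dest)
    print(f"{flac_path.name} -> {dest}")
```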

Before moving the organized files to my Plex server, I run them through MP3Tag and overwrite any mismatched artist names with the album artist (getting rid of artist fields like 'feat. xxxx artist'). This matters when I sync files to my iPod with MediaMonkey, since the iPod would otherwise break albums with multiple artists into separate entries. My preference is to keep them grouped together.
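MP3Tag does this with a quick action, but the same pass can be scripted too. A rough sketch with the same mutagen library as above (the library path is again a placeholder):

```python
# Sketch of the "collapse artist down to album artist" pass I do in MP3Tag,
# here done with mutagen on FLAC files. Library path is a placeholder.
from pathlib import Path
from mutagen.flac import FLAC  # pip install mutagen

LIBRARY = Path("~/music/library").expanduser()  # hypothetical library root

for flac_path in LIBRARY.rglob("*.flac"):
    tags = FLAC(flac_path)
    album_artist = tags.get("albumartist")
    artist = tags.get("artist")
    # Only rewrite when the track artist differs (e.g. "Artist feat. Someone")
    if album_artist and artist != album_artist:
        tags["artist"] = album_artist
        tags.save()
        print(f"Fixed artist on {flac_path.name}: {artist} -> {album_artist}")
```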

Hope this helps, good luck 👍. Let me know if you also want a decent strategy for movie backups.

[–] MSids@lemmy.world 19 points 4 weeks ago

It's public information transmitted over the airwaves, and several sites exist for it already. Flightradar24 and adsbexchange are the two I use, though Elon and Taylor Swift are far too boring to pay attention to when you can watch refuelers and jets instead.

[–] MSids@lemmy.world 1 points 1 month ago

I used it on an Android DAP to sync my music collection from my NAS after giving up on FolderSync because its new-file detection broke after a daylight saving time change. Syncthing was definitely more reliable, but it takes ages to do the scan.

[–] MSids@lemmy.world 2 points 1 month ago (2 children)

At one point I had been playing GTA V online pretty consistently when a cheater started targeting me. It was pretty frustrating, and after 30 minutes of that I gave up and shut it down for the day. I shifted my attention to other games after that. I definitely get that they want to stop cheaters; cheaters ruin the fun for everyone else. It's a shame that the new anti-cheat leaves Steam Deck players unable to play online.

[–] MSids@lemmy.world 1 points 4 months ago (1 children)

You do not need to port forward to share a Plex instance over the Internet. App.plex.tv manages the inbound connections automatically. All you need to do is manage invites for your friends. They log in at app.plex.tv with their email/password or with Google SSO, and your content is available over a secure connection with no port forwarding.

[–] MSids@lemmy.world 0 points 4 months ago (3 children)

Plex should not be accessed externally through a port forward. Always use app.plex.tv, as it prevents unauthenticated users from seeing the instance.

[–] MSids@lemmy.world 4 points 4 months ago (1 children)

Play Services actually works very well for containerizing work apps, better than on iOS, in fact. My work can offer a set of apps inside this isolated container and apply policy to them that doesn't affect other areas of the phone. I can also shut them all off with a single button when I'm on PTO. Microsoft's apps require these services to build the container, and I believe Android phones in China do not have Play Services. It's not perfect, but I personally think it works very well.

[–] MSids@lemmy.world 2 points 4 months ago* (last edited 4 months ago)

The costs are definitely a huge consideration and need to be optimized. A few years back we ran a POC of OpenShift in AWS that seemed to idle at around $3k/mo with barely anything running at all. That was a bad experiment. Compare that to our new VMware bill, which more than doubled this year following the Broadcom acquisition.

The products in AWS simplify costs into an opex model unlike anything that exists on-prem and eliminate costly, time-consuming hardware replacements. We just put in new load balancers because our previous ones were going EoL. They were a special model that ran us about half a million for a few HA pairs, including the pro services for installation assistance. How long will it take us to hit that amount using ALBs in AWS? What is the cost of the months it took to select the hardware, order it, wait 90 days for delivery, rack-power-connect, configure with pro services, load hundreds of certs, gather testers, and run cutover meetings? What about the time spent patching vulnerabilities? In 5-7 years it'll be the same thing all over again.

Now think about having to do all of the above for routers, switches, firewalls, VM infra, storage, HVAC, carrier circuits, power, fire suppression.
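Back-of-the-envelope on the ALB question, with made-up but plausible figures (every number below is an assumption for illustration, not real pricing):

```python
# Rough break-even sketch: on-prem HA load balancer refresh vs. running ALBs in AWS.
# All figures are assumptions for illustration only, not actual pricing.
HARDWARE_REFRESH = 500_000   # appliances + pro services for a few HA pairs (rough figure)
ALB_MONTHLY = 50             # assumed per-ALB monthly cost, base rate plus modest LCU usage
ALB_COUNT = 20               # assumed number of ALBs replacing those pairs

monthly_spend = ALB_MONTHLY * ALB_COUNT
breakeven_months = HARDWARE_REFRESH / monthly_spend
print(f"ALB spend ~${monthly_spend}/mo; break-even after ~{breakeven_months:.0f} months "
      f"(~{breakeven_months / 12:.1f} years), before counting lead time, install, and patching labor")
```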

[–] MSids@lemmy.world 7 points 4 months ago (2 children)

The cloud today is significantly different from the 2003 cPanel LAMP server. It's a whole new landscape. Complex, highly available architectures that cannot be replicated in an on-prem environment can be built from code in minutes on AWS.

Those capabilities come with a steep learning curve to operate them in a secure and effective manner, but that's always going to be the case in this industry. The people who can grow and learn will.

[–] MSids@lemmy.world 3 points 4 months ago (1 children)

The core features of a WAF do require SSL offload, which of course means the traffic is decrypted with your certificate on their edge nodes, then re-encrypted with your origin certificates. There is no way for a WAF to protect against these exploits without breaking the encryption, and WAF vendors can put protections in place for emerging threats much faster than developers can.

I had never considered that Akamai or Cloudflare would be doing any deeper analytics on our data, as it would open them up to significant liability, just as I know for certain that AWS employees cannot see the data within our buckets.

As for the captcha prompts, I can't speak to how those work in Cloudflare, though I do know that the AWS WAF leaves the sensitivity of the captcha prompts entirely up to the website owner. The free tiers of Cloudflare may have fewer configurable options.
