teawrecks

joined 1 year ago
[–] teawrecks@sopuli.xyz 3 points 3 months ago (3 children)

I see several Amcrest options that look like they have integrated AI object detection. Frigate, on the other hand, says you should get a "Google Coral Accelerator". Do you know if Frigate (or RTSP, I guess) has a way to leverage the built-in detection capabilities of a camera (assuming they really are built in, and not offloaded to the cloud)? Or am I better off looking at the "dumb" Amcrest cameras, and just assuming all processing for all cameras will happen on my Frigate hardware?

[–] teawrecks@sopuli.xyz 3 points 3 months ago (2 children)

That sounds fine, but isn't this also what LXC is for?

[–] teawrecks@sopuli.xyz 0 points 3 months ago* (last edited 3 months ago) (2 children)

I'm talking about a properly made GUI; you're talking about most GUIs. I believe I covered this in my original comment: poorly made GUIs are worse than a terminal interface.

But don't act like a linear string of characters, typed in one by one, is the optimal way to interface with a computer. Obviously, a non-invasive Neuralink implant that could interpret your intentions with 100% accuracy without uploading any of your data to Elon Musk would be the ideal Human Interface Device, but we're not quite there yet.

In the meantime, I assume you run a window manager of some kind. Why? Do you regularly browse the internet from the terminal? Unlikely. Why not? Have you ever tried non-linear video editing, image manipulation, or 3D modeling in a terminal? How about debugging multithreaded code, or visualizing allocation patterns? Pored over profiling metrics to root-cause a performance issue? And if VR/AR is part of your workflow, trying to use a terminal alongside it feels sillier than the hacking montage from Hackers.

Terminals are objectively more limited than a GUI, because that's literally the definition of a terminal: a very limited graphical user interface. The advantage of a terminal is that it's easy (especially for programmers who don't have an artistic/UX bone in their body, and who think in terms of functions and operands) to make a primitive interface that adheres to a set of expectations.

But no one commits every parameter of every command-line tool to memory, and even if they did, people don't want to type out a novel when moving a cursor to a specific region of the screen feels more natural and takes a fraction of the time. (Not that it feels more natural in every circumstance, but in the times when it does, that's what every sane person should prefer.)

So just like I told OP, the goal shouldn't be to use a terminal; you should instead focus on solving a problem. The terminal is just often the least bad tool that currently exists to solve a lot of problems.

[–] teawrecks@sopuli.xyz -1 points 3 months ago (6 children)

As others have said, have a goal. A computer is a tool: use it to accomplish something, and try to get something working for yourself that currently doesn't. If your PC already does everything you need it to, great, you're ahead of everyone else 😅.

Don't think of the command line as a good option: it's archaic, and its capabilities are objectively rudimentary. It's just often the least bad option, because no one has made a convenient GUI for what you're trying to do (or if they have, they did it poorly, and somehow the command line is still less bad). So you will inevitably have to interact with it.

[–] teawrecks@sopuli.xyz 18 points 3 months ago

That's already been happening for the last 15+ years, but Linux's growth has primarily come in the last 3. People are definitely moving to mobile, but the ones staying on desktop seem to prefer Linux more than they did even 5-10 years ago. (Note that laptops are included in "desktop" here.)

[–] teawrecks@sopuli.xyz 4 points 3 months ago

Hah I had the same thought. Trillian, though. Named after the character from HHGttG.

[–] teawrecks@sopuli.xyz 2 points 3 months ago

You should definitely throw that whole line into a script, though; no reason to type it out every time. Then, if it's possible to add a hook that runs it after a kernel update, that would be ideal. Not sure if there's a standard way to do that; it might be a bit distro-dependent.
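For example, on Arch-based distros a pacman hook can run the script automatically whenever the kernel package is upgraded. A rough sketch (the script name, hook path, and Target here are assumptions; Debian-family distros would use something like /etc/kernel/postinst.d instead):

```
#!/bin/sh
# Saved as e.g. /usr/local/bin/post-kernel-update.sh (hypothetical name);
# paste "that whole line" in here so it runs as a single command.
```

And the hook that triggers it:

```
# /etc/pacman.d/hooks/post-kernel-update.hook (Arch-specific alpm hook)
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux

[Action]
Description = Re-running kernel setup script after an update
When = PostTransaction
Exec = /usr/local/bin/post-kernel-update.sh
```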

[–] teawrecks@sopuli.xyz 10 points 3 months ago

I would still say dual booting is the superior option, but that might be complicated for some people, so this is probably a good recommendation.

[–] teawrecks@sopuli.xyz 64 points 3 months ago (2 children)

I feel like this is the perfect place for Right to Repair legislation: the product is broken? And it's outside your support window? Then give customers what they need to make the fix themselves. It's not good enough to say "meh, guess you gotta buy one of our newer chips then 🤷".

[–] teawrecks@sopuli.xyz 1 points 3 months ago

Ahh, yeah, if it happens specifically when coming back from a VM, that sounds like a different problem. Maybe the vfio_pci driver isn't getting swapped back to the real one? I barely know how it works; I'm sure you've checked everything already.
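If you want to rule that out, something like this shows which driver currently owns the card and forces a rebind by hand (just a sketch; the PCI address 0000:03:00.0 and the amdgpu driver are illustrative assumptions, so substitute your own):

```
# Check which kernel driver is currently bound to the GPU
lspci -nnk -s 03:00.0

# If it still says vfio-pci after the VM shuts down, rebind it manually
echo 0000:03:00.0 | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind
echo 0000:03:00.0 | sudo tee /sys/bus/pci/drivers/amdgpu/bind
```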

[–] teawrecks@sopuli.xyz 1 points 3 months ago (2 children)

For me, I have an Intel integrated GPU + an AMD discrete GPU. When I tried to set DRI_PRIME to 0, it complained that 0 was invalid; when I set it to 2, it said the value had to be less than the number of GPUs detected (2). After digging in, I noticed my cards in /dev/dri/by-path were card1 and card2, rather than 0 and 1 like everyone online said they should be. Searching for that, I found a few threads like this one that mentioned simpledrm was enabled by default in 6.4.8, which apparently broke some kind of enumeration with AMD GPUs. I don't really understand why, but setting that param made my cards number correctly, and PRIME selection works again.
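For anyone checking the same thing, the enumeration is easy to inspect (a quick sketch; glxinfo ships in mesa-utils or mesa-demos depending on distro):

```
# See how the DRM nodes are numbered and which PCI device each maps to
ls -l /dev/dri/by-path/

# Confirm which GPU each DRI_PRIME value actually selects
DRI_PRIME=0 glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
```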

[–] teawrecks@sopuli.xyz 2 points 3 months ago* (last edited 3 months ago) (4 children)

I actually may have seen the same issue recently. Have you tried adding initcall_blacklist=simpledrm_platform_driver_init to your kernel launch params?
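If it helps, on a GRUB-based setup that usually means something like the following (an example assuming GRUB; systemd-boot and other bootloaders have their own config files):

```
# Append the parameter to the kernel command line in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash initcall_blacklist=simpledrm_platform_driver_init"

# Then regenerate the config (some distros use 'update-grub' instead) and reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg
```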
