this post was submitted on 13 May 2024
37 points (97.4% liked)

Selfhosted


I started tinkering with Frigate and saw the option to use a Coral AI device to process the video feeds for object recognition.

So I started looking into what else could be done with the device, and everything listed on the site is related to human recognition (poses, faces, body parts) or voice recognition.

Somewhere I read that Stable Diffusion or LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.

What other good/interesting uses can these devices have? What are some of the services you have deployed that use these devices?

top 7 comments
[–] RegalPotoo@lemmy.world 12 points 6 months ago

Yeah, they are mostly designed for classification and inference tasks: given a piece of input data, decide which of these categories it belongs to. That's the sort of thing you want to do in near real time, where it isn't really practical to ship the data off to a data centre somewhere for processing.
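
For example, the typical Edge TPU workflow with the PyCoral library looks roughly like this; the model, label, and image file names below are just placeholders in the style of Coral's own examples, not something specific to Frigate:

```python
# Rough sketch of single-image classification with PyCoral.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Load a model compiled for the Edge TPU (placeholder file names).
interpreter = make_interpreter('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the input frame to whatever the model expects.
image = Image.open('frame.jpg').convert('RGB').resize(
    common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)
interpreter.invoke()  # this is the part that actually runs on the Edge TPU

labels = read_label_file('imagenet_labels.txt')
for c in classify.get_classes(interpreter, top_k=3):
    print(f'{labels.get(c.id, c.id)}: {c.score:.3f}')
```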

[–] Sims@lemmy.ml 5 points 6 months ago

Image recognition, speech-to-text, text-to-speech, classification, and other smaller models. They are fast but have no memory worth mentioning and are heavily dependent on data access speed. AFAIK, transformer-based models are hugely memory-bound and may not be a good match when run on these externally via USB 3.
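
A rough back-of-envelope shows why; all of these numbers are loose assumptions, not measurements:

```python
# Order-of-magnitude figures only (assumptions, not benchmarks).
edge_tpu_sram_mb = 8          # on-chip memory for model weights/activations
usb3_mb_per_s = 400           # realistic sustained USB 3.0 throughput
llm_int8_weights_mb = 7_000   # ~7B parameters quantized to 8 bits

# A compiled vision model small enough to sit in SRAM pays the transfer cost once.
# An LLM has to stream most of its weights across the bus for every token generated:
seconds_per_token = llm_int8_weights_mb / usb3_mb_per_s
print(f"~{seconds_per_token:.1f} s per token just moving weights")  # ~17.5 s
```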

[–] minnix@lemux.minnix.dev 4 points 6 months ago (1 children)

I started using Frigate and thought about going the Coral route, but realized you don't need one if you have a relatively recent Intel CPU (6th gen or newer), as OpenVINO with the iGPU is pretty much on par: https://github.com/blakeblackshear/frigate/discussions/5742
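
If you want to sanity-check the iGPU path outside of Frigate, OpenVINO's Python API can compile a model for the GPU device directly; a minimal sketch, where the model path and input shape are placeholders for whatever IR/ONNX detection model you use:

```python
# Minimal check that inference actually runs on the Intel iGPU via OpenVINO.
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)   # should include "GPU" if the iGPU driver is installed

model = core.read_model('ssdlite_mobilenet_v2.xml')      # placeholder model
compiled = core.compile_model(model, device_name='GPU')  # target the iGPU

frame = np.zeros((1, 3, 300, 300), dtype=np.float32)  # dummy frame; match your model's layout
results = compiled([frame])                            # one inference pass on the iGPU
```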

A lot of the newer SBCs are shipping with integrated NPUs/TPUs now as well. I would get a Coral if I were using an older SBC, an RPi, or an older PC as a camera server for object detection. Currently I have an ESP32-CAM watching a bird feeder, but that feed goes to a modern server for bird species recognition; I could see a Coral as an option there.

[–] redbr64@lemmy.world 1 point 6 months ago

Can you tell me more about your bird recognition setup? I currently have a feeder with a PiCam on it that records based on movement (just using RPi_Cam_Web_Interface) but would love to do something like that!

[–] acockworkorange@mander.xyz 2 points 6 months ago

They’re a great use for that otherwise useless “Wi-Fi slot” on a wired machine, and not too expensive either. Since detection runs on the Coral, you can keep using your iGPU to transcode videos without it interfering with your Frigate or Immich workload. And they’re supposed to be energy efficient too.

[–] solrize@lemmy.world -4 points 6 months ago (1 children)

They are generally used for speech recognition and image classification, sometimes in a BAD way, like face recognition in surveillance cameras.

[–] gaylord_fartmaster@lemmy.world 6 points 6 months ago

I mean, that's not inherently bad; what you do with that data could be, though.