If nothing else, atproto is pretty great; we're starting to see a proper federated network open up around it.
Waiting for an 8x1B MoE
Anyone else remember when people were making expert systems with Scheme and saying that was the end of doctors, etc.?
Ironically, thanks in no small part to Facebook releasing Llama and kind of salting the earth for similar companies trying to create proprietary equivalents.
Nowadays you either have gigantic LLMs with hundreds of billions of parameters, like Claude and ChatGPT, or you have open models that are sub-200B.
I could be mistaken too; this has all only recently become interoperable, so there are some growing pains.
Yes! Actually.
Full atproto federation with Bluesky has only been up and running for the last month or so, so people are finally starting to trickle out and set up their own services and hosts.
It's actually very promising and hopeful.
This doesn't seem to be that big an issue, as PDSs can just communicate directly with one another, much like how ActivityPub works.
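To give a sense of what talking to a PDS directly looks like, any client can hit a PDS's XRPC endpoint without going through Bluesky's infrastructure. Here's a minimal sketch that builds the URL for the `com.atproto.repo.getRecord` call; the host and identifiers are made up for illustration:

```python
from urllib.parse import urlencode

def get_record_url(pds_host: str, repo: str, collection: str, rkey: str) -> str:
    """Build the XRPC URL for fetching a single record straight from a PDS."""
    query = urlencode({"repo": repo, "collection": collection, "rkey": rkey})
    return f"https://{pds_host}/xrpc/com.atproto.repo.getRecord?{query}"

# Hypothetical PDS host, DID, and record key, for illustration only.
url = get_record_url(
    "pds.example.com",
    "did:plc:abc123",
    "app.bsky.feed.post",
    "3kabc123xyz",
)
print(url)
```

Fetching that URL with any HTTP client would return the record as JSON, no central relay required, which is what makes directly federated services possible.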
I wouldn't lump Bluesky in the same pile as Threads anymore; the AT Protocol is fully up and running, and slowly but surely individually hosted data servers are trickling away to their own services.
There are even new services running on atproto now that are completely independent of Bluesky: https://whtwnd.com/about
There's a really good write-up on how atproto federation works here: https://whtwnd.com/alexia.bsky.cyrneko.eu/3l727v7zlis2i
If you treat an AI like anything other than the rubber duck in Rubber Duck Programming you're using it wrong.
Like that person in a dream who keeps telling you to wake up
I've been curious about Google Coral, but its memory is so tiny I'm not sure what kinds of models you can run on it.