[–] barsoap@lemm.ee 1 points 4 months ago* (last edited 4 months ago) (1 children)

Earthworks are expensive, doubly so if you need specialised techs, because fibre isn't easy to install, much less splice. If you get fibre to within 200-500m of the property, G.fast will deliver 100Mbit to 1Gbit, which is way faster than most people are willing to pay for. And that's old tech: in fact most plans for FTTH are actually FTTF, that is, fibre only reaches the property border, and from there you get a copper run using XG-FAST, a single-subscriber DSL installation. Expect something on the order of 8Gbit/s, which is more speed than most people's PCs can even deal with: 1Gbit NICs are still the norm, with 2.5G making inroads. Gigabit ethernet has been sufficient for the vast, vast majority of people for a good 20 years now.
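
To put those numbers in rough perspective, here's a minimal sketch; the loop lengths and rates are illustrative ballpark figures taken from this comment, not measurements of any particular deployment.

```python
# Rough, illustrative copper-loop-length vs. throughput figures for the
# technologies mentioned above. Ballpark values only -- real rates depend
# heavily on cable quality, crosstalk, and vectoring.
def expected_downstream_mbit(loop_length_m: float) -> tuple[str, int]:
    """Return (technology, approximate downstream Mbit/s) for a copper tail."""
    if loop_length_m <= 70:          # XG-FAST territory: fibre to the frontage
        return ("XG-FAST", 8000)
    if loop_length_m <= 100:         # very short G.fast loops
        return ("G.fast", 1000)
    if loop_length_m <= 500:         # typical fibre-to-the-kerb G.fast
        return ("G.fast", 100)
    return ("VDSL2", 50)             # beyond G.fast's useful reach

for metres in (30, 100, 250, 500, 800):
    tech, rate = expected_downstream_mbit(metres)
    print(f"{metres:>4} m of copper -> ~{rate} Mbit/s ({tech})")
```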

Things might be a bit different in the US because of suburbia and those ludicrously sparse neighbourhoods; yes, going directly to fibre, at least to the property border, probably makes sense there. But in the city? Provide fibre to a block and the rest of the infrastructure can be reused. It's not cheap to run fibre through apartment building hallways, either, and no, running Ethernet over those copper lines is a much worse idea: Ethernet can't deal gracefully with interference, crosstalk, and otherwise shoddy copper.

[–] barsoap@lemm.ee 2 points 4 months ago (3 children)

It's also not hard to use that fibre connection to the neighbourhood to provide DSL. That's precisely what it's made for: use that copper last mile and have whatever on the upstream side. And there's plenty of DSL hardware that doubles as POTS and/or ISDN hardware: you can upgrade the whole neighbourhood to "DSL available" by installing such a box, connecting all the lines to it, and then remotely activating DSL when people sign up.
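
As a rough illustration of that "install the box, wire every line, activate per subscriber" model, here's a minimal sketch; the class and method names are hypothetical, not any vendor's actual provisioning API.

```python
# Hypothetical sketch of a fibre-fed street cabinet (MSAN/DSLAM-style box):
# every copper pair is wired in and keeps carrying POTS, and DSL is
# switched on per line, remotely, once the subscriber signs up.
from dataclasses import dataclass, field

@dataclass
class CopperLine:
    subscriber: str
    pots_enabled: bool = True    # plain old telephone service keeps working
    dsl_enabled: bool = False    # activated remotely on sign-up

@dataclass
class StreetCabinet:
    fibre_uplink: str                      # e.g. "GPON to the exchange"
    lines: dict[int, CopperLine] = field(default_factory=dict)

    def connect_line(self, port: int, subscriber: str) -> None:
        self.lines[port] = CopperLine(subscriber)

    def activate_dsl(self, port: int) -> None:
        # the whole point: no truck roll, just flip the port remotely
        self.lines[port].dsl_enabled = True

cabinet = StreetCabinet(fibre_uplink="GPON to the exchange")
cabinet.connect_line(1, "household A")
cabinet.connect_line(2, "household B")
cabinet.activate_dsl(2)   # household B signed up for DSL
print(cabinet.lines)
```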

Over here they're actually moving away from that, opting for VoIP instead and using the whole frequency spectrum for DSL.

[–] barsoap@lemm.ee 2 points 4 months ago

I mean... back in the day I would never have bought a uATX board. You needed expansion slots, after all: video, sound, TV, network, at the very least.

Nowadays? Exactly one PCIe slot occupied, by the graphics card. Sound cards are pointless nowadays: if your onboard audio doesn't suffice for what you want to do, you'd get an external audio interface and have it away from all that EM interference in the case. For TV we've got the internet, the NIC is onboard, and as I won't downgrade my network to wifi, that's not needed either.

As far as I'm concerned pretty much every one of my boards was an upgrade, while simultaneously becoming more and more budget.

[–] barsoap@lemm.ee 1 points 4 months ago* (last edited 4 months ago) (1 children)

Depends on the desktop. I have a NanoPC T4, originally as a set-top box (that's what the RK3399 was designed for; it has a beast of a VPU), now on light server and WLAN AP duty, and it's plenty fast enough for a browser and office. Provided you give it an SSD, that is.

Speaking of desktop use, though, the graphics driver situation is atrocious. There's been movement since I last had a monitor hooked up to it, but let's just say the Linux blob that came with it could do GLES2, while the Android driver does Vulkan. Presumably because ARM wants Rockchip to pay per fucking feature per OS for Mali drivers.

Oh, the VPU that I mentioned? As said, a beast: decodes 4k h264 at 60Hz, very good driver support, well-documented instruction set, mpv supports it out of the box. But because the Mali drivers are shit you only get an overlay, no window system integration, because it can't paint to GLES2 textures. Throwback to the 90s.

Sidenote: some madlads got a dedicated GPU running on the thing. M.2 to PCIe adapter, and presumably a lot of duct tape code.

[–] barsoap@lemm.ee 8 points 4 months ago (3 children)

I'm not really that knowledgeable about AM5 mobos (still on AM4) but you should be able to get something perfectly sensible for 100 bucks. Are you going to get as much IO and as many bells and whistles? No, but most people don't need that stuff, and you don't have to spend a lot of money to get a good VRM or good traces to the DIMM slots.

Then, possibly bad news: Intel's 13th gen still supports DDR4, but AM5 is DDR5-only, so you might need new RAM.

[–] barsoap@lemm.ee 2 points 4 months ago* (last edited 4 months ago) (1 children)

because that phrase doesn’t ever appear in the training data.

Eh, but LLMs abstract. It has seen "[birds] have feathers" and "[mammals] have fur" quite a lot of times. The problem isn't that LLMs can't reason at all; the problem is that they do employ techniques used in proper reasoning, in particular tracking context throughout the text (self-attention), but lack techniques necessary for the whole thing, instead relying on confabulation to sound convincing regardless of the BS they spout. Suffices to emulate an Etonian, but that's not a high standard.
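
For reference, a minimal sketch of the context-tracking mechanism mentioned above, scaled dot-product self-attention, in plain numpy; toy dimensions and random weights, purely illustrative.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a sequence of token vectors.

    x has shape (seq_len, d_model); every position attends to every other,
    which is what lets the model carry context across the whole text.
    """
    seq_len, d_model = x.shape
    rng = np.random.default_rng(0)
    # toy projection matrices -- in a real model these are learned
    w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d_model)             # (seq_len, seq_len) affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v                              # context-mixed representations

tokens = np.random.default_rng(1).standard_normal((5, 8))  # 5 tokens, 8 dims
print(self_attention(tokens).shape)  # (5, 8)
```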

[–] barsoap@lemm.ee 5 points 4 months ago* (last edited 4 months ago)

Lemmy search is already quite excellent... at least here on lemm.ee: we don't have many communities, but tons of users subscribed to probably about everything on the lemmyverse, so the servers have it all.

It might be interesting to team up with something like YaCy: instances could operate as YaCy peers for everything they have. That is, integrate a p2p search protocol into ActivityPub itself so that smaller instances can also find everything. Ordinary YaCy instances, doing mostly web crawling, can in turn use posts here as interesting starting points.
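
A rough sketch of what that fan-out-and-merge could look like; the peer list, message shape, and function names are all hypothetical, not part of ActivityPub's or YaCy's actual protocols.

```python
# Hypothetical federated-search sketch: fan a query out to peer instances,
# merge whatever comes back, de-duplicate by URL. Nothing here mirrors the
# real ActivityPub or YaCy wire formats.
import asyncio
from dataclasses import dataclass

@dataclass(frozen=True)
class SearchHit:
    url: str
    title: str
    score: float

async def query_peer(peer: str, query: str) -> list[SearchHit]:
    # stand-in for an HTTP call to the peer's (hypothetical) search endpoint
    await asyncio.sleep(0)
    return [SearchHit(f"https://{peer}/post/1", f"{query} on {peer}", 0.5)]

async def federated_search(peers: list[str], query: str) -> list[SearchHit]:
    results = await asyncio.gather(*(query_peer(p, query) for p in peers))
    seen: dict[str, SearchHit] = {}
    for hit in (h for batch in results for h in batch):
        best = seen.get(hit.url)
        if best is None or hit.score > best.score:
            seen[hit.url] = hit
    return sorted(seen.values(), key=lambda h: h.score, reverse=True)

print(asyncio.run(federated_search(["lemm.ee", "lemmy.world"], "solarpunk")))
```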

[–] barsoap@lemm.ee 1 points 4 months ago

Even without all that, messing with stuff too much is bound to clash with the protections those kinds of sites have around editorialising. That is, by doing such stuff X says "we're not actually a pinboard, we're a newspaper, we're editorially responsible for what's on there", and then prosecution can come along and say "so, your newspaper published an article calling for [...], didn't it? That's your speech now, not the speech of some random user, isn't it?".

[–] barsoap@lemm.ee 2 points 4 months ago* (last edited 4 months ago)

Honestly, if you're managing kernel and userspace remotely it's your own fault if you don't netboot. Or maybe it's Microsoft's; I don't know what the netboot situation looks like in Windows land.

[–] barsoap@lemm.ee 3 points 4 months ago* (last edited 4 months ago)

You call it unregulated, but that is the natural trend for when the only acceptable goal is the greater accumulation of wealth.

Nah, unregulated is exactly the right word, and whoever uses it isn't the kind of neolib you're out for. Those would use "free" instead of "unregulated", deliberately confusing unregulated markets with the theoretical model of the free market, which allocates resources perfectly -- if everyone is perfectly rational and acts on perfect information. Which obviously is not the case in the real world, because, well, real world.

There's a strain of liberalism which is pretty much the cornerstone of Europe's economic model, also generally compatible with socdem approaches, and it says precisely that regulation should be used to bring the real-world market closer to that theoretical ideal. They're of course not going all-out; to actually do that you'd need to do stuff like outlaw trade secrets and have all advertisement done by an equitable and accountable committee and shit. But by and large regulation does take the edge off capitalism. If you want to see actually unregulated capitalism, have a look at Mexican cartels. Rule of thumb: if you see some market failure, regulate it away. Make producers of cereal pay for the disposal costs of the packaging they use and suddenly they have an interest in making that packaging more sensible; they can't externalise the cost any more.

Defeating capitalism is ultimately another fight altogether; it's nothing less than defeating greed -- as in not the acquisition of things, but getting addicted to the process of acquisition. The trouble isn't that people want shit, the problem is that they aren't satisfied once they've got what they wanted. Humanity is going to take some more time to learn, culturally, not to do that (and before tankies come along: nah, look at how corrupt all those ML states were and are, same problem, different coat of paint). In the meantime regulation, rule of law, democracy, even representative democracy, checks and balances, all that stuff, is indeed a good idea.

[–] barsoap@lemm.ee 1 points 4 months ago

Won't last for long, because deflation is built into bitcoin and every sane state matches monetary supply to economic output to keep prices stable. El Salvador isn't doing that anyway, though: it otherwise uses USD, and it isn't getting many tax payments in bitcoin, the thing being about as liquid as asphalt, so it doesn't really change much.
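
To make the "fixed supply vs. growing output" point concrete, here's a toy calculation using the quantity theory of money (MV = PQ) with made-up numbers; it's only meant to show the direction of the effect, not to model any real economy.

```python
# Toy quantity-theory-of-money example (M * V = P * Q):
# with the money supply M and velocity V held fixed, growing output Q
# forces the price level P down -- i.e. built-in deflation.
def price_level(money_supply: float, velocity: float, output: float) -> float:
    return money_supply * velocity / output

M, V = 21e6, 10.0          # capped "coin" supply, arbitrary fixed velocity
for year, Q in enumerate([100.0, 103.0, 106.1], start=1):  # ~3% growth p.a.
    print(f"year {year}: price level {price_level(M, V, Q):,.0f}")
# Prices fall every year; a state targeting stable prices would instead
# grow M roughly in line with Q.
```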

[–] barsoap@lemm.ee 4 points 4 months ago* (last edited 4 months ago) (6 children)

That's already the nvidia approach: upscaling runs on the tensor cores.

And no, it's not something magical, it's just matrix math. AI workloads are lots of convolutions on gigantic, low-precision floating point matrices. Low-precision because neural networks are robust against random perturbation, and more rounding is exactly that, random perturbation; there's no point in spending electricity and heat on high precision if it doesn't make the output any better.
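
A quick numpy sketch of that robustness claim: crudely round a small network's weights to a few levels (a stand-in for low-precision formats like fp8) and compare outputs. Toy model and numbers, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w: np.ndarray, levels: int = 256) -> np.ndarray:
    """Crude uniform quantisation -- a stand-in for storing weights in 8 bits."""
    scale = np.abs(w).max() / (levels / 2)
    return np.round(w / scale) * scale

# A tiny random two-layer "network" applied to random inputs.
w1, w2 = rng.standard_normal((64, 256)), rng.standard_normal((256, 10))
x = rng.standard_normal((32, 64))

full = np.maximum(x @ w1, 0) @ w2                       # full precision
low  = np.maximum(x @ quantize(w1), 0) @ quantize(w2)   # "8-bit" weights

rel_err = np.linalg.norm(full - low) / np.linalg.norm(full)
print(f"relative output error from quantised weights: {rel_err:.3%}")
# On the order of a percent: the extra rounding is just a small random
# perturbation the network shrugs off.
```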

The kicker? Those tensor cores are less complicated than ordinary GPU cores. For general-purpose hardware, and that also includes consumer-grade GPUs, it's way more sensible to make sure the ALUs can deal with 8-bit floats and leave everything else the same. That stuff is going to be standard by the next generation of even potatoes: every SoC with an integrated GPU has enough oomph to sensibly run reasonable inference loads. And by "reasonable" I mean actually quite big; as far as I'm aware, e.g. Firefox's built-in translation runs on the CPU, the models are small enough.

Nvidia OTOH is very much in the market for AI accelerators and figured it could corner the upscaling market and sell another new generation of cards by making their software rely on those cores even though it could run on the other cores. As AMD demonstrated, their stuff also runs on nvidia hardware.

What's actually special sauce in that area are the RT cores, that is, accelerators for casting rays through BVH trees. That's indeed specialised hardware, but those things are nowhere near fast enough to compute enough rays for even remotely tolerable output, which is where all that upscaling/denoising comes into play.
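
For a feel of the primitive those RT cores accelerate, here's a minimal ray vs. axis-aligned bounding box test (the "slab" method) in Python, the building block of walking a BVH; a software sketch only, not how the hardware is actually wired.

```python
# Ray vs. axis-aligned bounding box intersection ("slab" test), the basic
# operation repeated millions of times when traversing a BVH. RT cores do
# this (plus ray-triangle tests) in fixed-function hardware.
def ray_hits_aabb(origin, direction, box_min, box_max) -> bool:
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:                      # ray parallel to this slab
            if o < lo or o > hi:
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:                # slab intervals stopped overlapping
            return False
    return True

# A ray from the origin along +x hits a unit box centred at (5, 0, 0):
print(ray_hits_aabb((0, 0, 0), (1, 0, 0), (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))
```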
