barsoap

joined 1 year ago
[–] barsoap@lemm.ee 3 points 7 months ago

Nah, Dataport doesn't make a profit, or at least it isn't paying any out to the states. It's about as close to a ministry as you can get without being required to pay government wages, and there aren't many in the industry who'd work for those. They don't pay as much as FAANG or even SAP, but among the wider industry it's definitely competitive, especially if you don't plan on job-hopping and dodging lay-offs.

[–] barsoap@lemm.ee 3 points 7 months ago (2 children)

What else can be more profitable for a consulting company than shifting the entire IT of a city or a country between two largely incompatible solutions? :)

See, that's the neat thing: SH has (together with HH, HB and ST) its own IT consultancy. A public enterprise, not some public-private partnership, and 5300 staff is quite a bit more than what Munich's IT department has.

And yes, of course Munich is corrupt, what do you expect, it's Bavaria.

[–] barsoap@lemm.ee 12 points 7 months ago* (last edited 7 months ago) (1 children)

Oh, he's still perfectly blunt about code, and even about people if need be, but he makes sure he gets a good night's sleep before he does that, so as not to do it in anger. Which means dress-downs are now of the "I'm not angry, I'm disappointed" type. I'm not aware of him ever telling people to kill themselves, just, erm, "wondering":

Of course, I'd also suggest that whoever was the genius who thought it was a good idea to read things ONE F*CKING BYTE AT A TIME with system calls for each byte should be retroactively aborted. Who the f*ck does idiotic things like that? How did they noty die as babies, considering that they were likely too stupid to find a tit to suck on?

(And to be fair, yes, reading things one byte at a time is fucking stupid. Not something you'd ever expect in a kernel)
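
For anyone who hasn't felt that particular pain: the cost is almost entirely syscall overhead. A minimal userspace sketch of the difference (my own toy code, nothing to do with the kernel code Linus was ranting about; the file path is a placeholder):

```c
/* Toy comparison: counting newlines with one read(2) per byte vs. one read(2)
 * per 64 KiB block. The per-byte version is orders of magnitude slower purely
 * because of the syscall round-trip on every single byte. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static long count_bytewise(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    long lines = 0;
    char c;
    while (read(fd, &c, 1) == 1)            /* one syscall per byte */
        if (c == '\n') lines++;
    close(fd);
    return lines;
}

static long count_buffered(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    long lines = 0;
    char buf[65536];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)  /* one syscall per 64 KiB */
        for (ssize_t i = 0; i < n; i++)
            if (buf[i] == '\n') lines++;
    close(fd);
    return lines;
}

int main(void) {
    /* "/tmp/test.dat" is just a placeholder path */
    printf("bytewise: %ld\n", count_bytewise("/tmp/test.dat"));
    printf("buffered: %ld\n", count_buffered("/tmp/test.dat"));
    return 0;
}
```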

[–] barsoap@lemm.ee 2 points 7 months ago

They’re promising to give notice, but it’s just not a good situation.

cache.nixos.org keeps all sources, so once Hydra has ingested something it's not going away unless the NixOS maintainers want it to. The policy for decades was simply "keep all derivations", but in the interest of space savings it has recently been decided to do a GC run, meaning that 22-year-old derivations will still be available, but you're going to have to build them from the cached source; the pre-built artifacts will be gone.

[–] barsoap@lemm.ee 2 points 7 months ago (2 children)

You’re looking at the wrong line.

Never mind the lines I linked to: I just copied the links from search.nixos.org and those always link to the description field's line for some reason. I did link to unstable twice, though; this is the correct one, and as you can see it goes to tukaani.org, not github.com. Correct me if I'm wrong, but while you can attach additional stuff (such as pre-built binaries) to GitHub releases, the source tarballs are generated from the repository and a tag, so they will match the repository. Maybe you can do some shenanigans with rebase, but that's something GitHub should fix.

[–] barsoap@lemm.ee 10 points 7 months ago* (last edited 7 months ago) (4 children)

Downloading from GitHub is how NixOS avoided getting hit. On unstable, that is; on stable a tarball gets downloaded (EDIT: fixed links).

Another reason it didn't get hit is that the exploit is Debian/Red Hat-specific, checking for files and env variables that just aren't present when Nix builds it. That doesn't mean that Nix couldn't be targeted, though. Also, it's a bit iffy that replacing the package on unstable took on the order of 10 days, which is 99.99% build time because it's a full rebuild. Much better on stable, but it's not like unstable doesn't get regular use, especially as you can mix and match when running NixOS.

It's probably a good idea to make a habit of pulling directly from GitHub (or VCS in general). Nix checks hashes all the time, so upstream doing a sneak change would break the build; it's more about the version you're using being the one that has its version history published. Also: why not?
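
To illustrate the hash-pinning idea, here's a sketch of the concept in C rather than Nix (the filename and the pinned hash are placeholders, and this is not how Nix's fixed-output fetchers are actually implemented): the build recipe records a hash, and the build refuses to proceed if the fetched source doesn't match it.

```c
/* Sketch of "pin the source by hash": compute the sha256 of the fetched
 * tarball and bail out if it doesn't match the recorded value.
 * Compile with: cc pin.c -lcrypto */
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

/* Placeholder -- in Nix this would live in the package expression. */
static const char *EXPECTED_SHA256 =
    "0000000000000000000000000000000000000000000000000000000000000000";

int main(void) {
    const char *path = "xz-5.6.1.tar.gz";   /* placeholder filename */
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[65536];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    fclose(f);

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int md_len = 0;
    EVP_DigestFinal_ex(ctx, md, &md_len);
    EVP_MD_CTX_free(ctx);

    char hex[2 * EVP_MAX_MD_SIZE + 1];
    for (unsigned int i = 0; i < md_len; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);
    hex[2 * md_len] = '\0';

    if (strcmp(hex, EXPECTED_SHA256) != 0) {
        fprintf(stderr, "hash mismatch, refusing to build: got %s\n", hex);
        return 1;   /* a sneak change upstream breaks the build right here */
    }
    puts("source matches the pinned hash, building");
    return 0;
}
```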

Overall, though, who knows what else is hidden in that code. I've heard that Debian wants to roll back a whole two years, and that's probably a good idea. In general we should be much more careful about the TCB (trusted computing base), and actually have a proper TCB in the first place, which means making it small and simple. Compilers are always going to be an issue, as small is not an option there, but HTTP clients, decompressors, and the like? Why can they make coffee?

[–] barsoap@lemm.ee 2 points 8 months ago* (last edited 8 months ago)

Duke Nukem can do that too: both it and Dark Forces use portal engines, while Doom is a BSP engine. With a portal engine you're not bound to a single global coordinate system; you can make things pass through each other.

It's not actually a feature of the renderer, either: you can do the same using modern rendering tech, though I can't off the top of my head think of a game that uses it. Certainly none of the big game engines support it out of the box. You can still do it by changing levels, and it wouldn't be hard to do something half-way convincing in the Source engine (Half-Life, Portal, etc., the Valve thing), since quick level loading triggered by mere movement is one of its core features, but it isn't quite as seamless as a true portal engine would be.
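
To make the "no single global coordinate system" point concrete, here's a tiny sketch of the portal idea (my own toy code, not from Build, Jedi, or any real engine): each sector only knows its own local geometry, and a portal carries the transform into the neighbouring sector, so two sectors can happily sit where a single global map would make them overlap.

```c
/* Toy portal traversal: sectors hold local geometry, portals hold the
 * transform into the destination sector. The walk below is the skeleton of
 * what a portal renderer does; a real one clips against the portal's
 * on-screen extent instead of using a fixed depth cap. */
#include <math.h>
#include <stdio.h>

typedef struct { double dx, dy, angle; int dest; } Portal;

typedef struct {
    const char *name;
    Portal portals[2];
    int n_portals;
} Sector;

/* Re-express a point in the destination sector's local frame. */
static void through_portal(const Portal *p, double *x, double *y) {
    double c = cos(p->angle), s = sin(p->angle);
    double nx = c * *x - s * *y + p->dx;
    double ny = s * *x + c * *y + p->dy;
    *x = nx; *y = ny;
}

static void render(const Sector *sectors, int id, double px, double py, int depth) {
    if (depth > 3) return;              /* stand-in for portal clipping */
    printf("%*sdrawing %-7s viewer at local (%.1f, %.1f)\n",
           depth * 2, "", sectors[id].name, px, py);
    for (int i = 0; i < sectors[id].n_portals; i++) {
        Portal p = sectors[id].portals[i];
        double x = px, y = py;
        through_portal(&p, &x, &y);
        render(sectors, p.dest, x, y, depth + 1);
    }
}

int main(void) {
    /* Two rooms, each (say) 10 units deep, joined by portals that only shift
     * you 4 units: flattened into one global map they'd overlap, but the
     * renderer never notices because it only works in the current frame. */
    Sector sectors[2] = {
        { "room A", { { 0.0, -4.0, 0.0, 1 } }, 1 },
        { "room B", { { 0.0,  4.0, 0.0, 0 } }, 1 },
    };
    render(sectors, 0, 1.0, 1.0, 0);
    return 0;
}
```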

[–] barsoap@lemm.ee 4 points 8 months ago (3 children)

Most notably, perspective only gets calculated on the horizontal axis; there's no true vertical perspective, looking up and down is approximated by shearing the image. Playing the OG graphics with a mouse gets trippy fast because of that. Doom's level design doesn't use much verticality, which hides it. Duke Nukem's level design uses it more and it's noticeable, but still tolerable. Modern level design with that kind of funk? Forget it.
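
A rough sketch of why mouselook looks off in these engines (my own illustration, not code from Doom or Build; the point and focal length are made up): with true perspective you'd rotate the view by the pitch angle before projecting, but y-shearing just shifts the horizon, which only agrees with the real projection for small angles.

```c
/* Compare the projected screen y of a point under true pitch rotation vs. the
 * y-shearing trick. Camera at the origin looking down +z, focal length f. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = acos(-1.0);
    const double f = 1.0;          /* focal length */
    const double y = 2.0, z = 5.0; /* some point in front of the camera */

    for (double pitch_deg = 0; pitch_deg <= 45; pitch_deg += 15) {
        double p = pitch_deg * PI / 180.0;

        /* True perspective: rotate the point by the pitch, then project. */
        double yr = y * cos(p) - z * sin(p);
        double zr = y * sin(p) + z * cos(p);
        double screen_true = f * yr / zr;

        /* Y-shearing: project without rotating, then shift the horizon. */
        double screen_shear = f * y / z - f * tan(p);

        printf("pitch %4.1f deg   true %+.3f   sheared %+.3f\n",
               pitch_deg, screen_true, screen_shear);
    }
    return 0;
}
```

At 0 degrees the two agree exactly; by 45 degrees they've drifted well apart, which is the "trippy" stretching you see when looking up or down.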

[–] barsoap@lemm.ee 4 points 8 months ago

And CPUs still do it to this day. Nasty, nasty maths is involved in figuring out an optimal combination of lookup-table size and refinement calculations, because the output can't be approximate; it has to behave the way IEEE floats are supposed to. Pure numerology.
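
The flavour of the trick, at least as I understand it (a toy sketch, nowhere near the precision or correct-rounding gymnastics a real libm or hardware implementation needs): store sin/cos at a coarse grid, then patch up the small remainder with a short polynomial via the angle-addition formula.

```c
/* Toy table-plus-refinement sine for x in [0, pi/2]:
 *   sin(a + r) = sin(a)cos(r) + cos(a)sin(r)
 * where a is the nearest table entry and r is small enough that a short
 * polynomial suffices. Real implementations agonise over table size vs.
 * polynomial degree vs. rounding error -- this just shows the structure. */
#include <math.h>
#include <stdio.h>

#define TABLE_SIZE 64

static double sin_tab[TABLE_SIZE + 1], cos_tab[TABLE_SIZE + 1];

static void init_tables(void) {
    const double half_pi = acos(-1.0) / 2.0;
    for (int i = 0; i <= TABLE_SIZE; i++) {
        double a = half_pi * i / TABLE_SIZE;
        sin_tab[i] = sin(a);   /* a real implementation precomputes these */
        cos_tab[i] = cos(a);
    }
}

static double sin_approx(double x) {      /* valid for 0 <= x <= pi/2 */
    const double half_pi = acos(-1.0) / 2.0;
    double t = x / half_pi * TABLE_SIZE;
    int i = (int)(t + 0.5);               /* nearest grid point */
    double r = x - half_pi * i / TABLE_SIZE;

    /* Short polynomials are plenty because |r| is tiny. */
    double sin_r = r - r * r * r / 6.0;
    double cos_r = 1.0 - r * r / 2.0 + r * r * r * r / 24.0;

    return sin_tab[i] * cos_r + cos_tab[i] * sin_r;
}

int main(void) {
    init_tables();
    double worst = 0.0;
    for (double x = 0.0; x <= acos(-1.0) / 2.0; x += 0.001) {
        double err = fabs(sin_approx(x) - sin(x));
        if (err > worst) worst = err;
    }
    printf("worst absolute error on [0, pi/2]: %.3g\n", worst);
    return 0;
}
```

The hard part the comment alludes to is exactly the trade-off visible here: a bigger table means a shorter polynomial and vice versa, and on top of that the result has to come out correctly rounded as an IEEE float.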

[–] barsoap@lemm.ee 1 points 8 months ago (2 children)

Killer samples do happen, sure but vorbis at Q9? I’m highly dubious.

Back in 2004, when the album was released, the encoder was barely past version 1.0. Though after 20 years I could be misremembering "full quality" as "whatever people said wouldn't degrade quality".

That track in particular just sounds badly recorded to begin with.

Heresy. Next thing you're going to tell me is that Sunn O))) should move the mics away from the amps so the sound is cleaner. Granted, though, Sunn O))) does that live; blackmail live is quite different, because they can't layer a gazillion tracks like in the mix. But yes, the deliberateness of just how much noise is in those guitars doesn't get conveyed after being mangled by ten-year-old YouTube compression.

[–] barsoap@lemm.ee 2 points 8 months ago* (last edited 8 months ago) (5 children)

psychoacoustic models

Sometimes they mess up. Actually, I've only ever noticed it once, and that was years ago: CD vs. Ogg Vorbis at full quality level, this track. The YouTube version is even worse, it seems (from memory): the guitars kicking in around 30 seconds should be harsh and noisy as fuck, like nothing you've ever heard; on YouTube they're merely distorted.

Then there's the fact that lossy codecs are a bad idea for archival purposes, as you can't recode them without incurring additive losses -- each codec has a different psychoacoustic model, so each deletes different stuff. Thus, FLAC definitely has a place.
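
A toy illustration of why chained lossy steps add up (crude quantisers standing in for two codecs with different psychoacoustic models -- this is not real audio coding, just the arithmetic of the argument): each step alone throws away a bounded amount, but running B on A's output loses more than either would by itself.

```c
/* Two "codecs" modelled as quantisers with different step sizes and offsets.
 * Measure RMS error of A alone, B alone, and B applied to A's output. */
#include <math.h>
#include <stdio.h>

#define N 10000

static double codec_a(double x) { return 0.10 * round(x / 0.10); }
static double codec_b(double x) { return 0.07 * round((x - 0.03) / 0.07) + 0.03; }

static double rms_err(const double *orig, const double *coded) {
    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        double d = orig[i] - coded[i];
        sum += d * d;
    }
    return sqrt(sum / N);
}

int main(void) {
    static double sig[N], a[N], b[N], ab[N];
    for (int i = 0; i < N; i++)
        sig[i] = sin(0.01 * i) + 0.3 * sin(0.137 * i);   /* arbitrary test signal */

    for (int i = 0; i < N; i++) {
        a[i]  = codec_a(sig[i]);
        b[i]  = codec_b(sig[i]);
        ab[i] = codec_b(a[i]);        /* transcode: A's output fed into B */
    }
    printf("A alone:   %.4f\n", rms_err(sig, a));
    printf("B alone:   %.4f\n", rms_err(sig, b));
    printf("A then B:  %.4f\n", rms_err(sig, ab));
    return 0;
}
```

Because the two quantisers discard different things, the transcoded signal ends up further from the original than either single pass, which is the whole point of keeping a lossless master around.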

[–] barsoap@lemm.ee 1 points 8 months ago (1 children)

Tape makes a lot of sense audio-quality-wise, especially for people who insist on analogue for some silly reason; the prices don't make sense, though: tapes are expensive to manufacture. CDs and vinyl are pressed in one go, while tape has to be run through a machine centimetre by centimetre. Though maybe for small runs it does make sense, as you don't need a physical master.
