barsoap

joined 1 year ago
[–] barsoap@lemm.ee 2 points 6 months ago (1 children)

Not random substances, just like diluting vodka with water is not mixing in a random substance.

[–] barsoap@lemm.ee 4 points 6 months ago

Nausea isn't an overdose, but that's a technicality. What I wanted to say is that it's quite hard to get to nausea off a single puff, no matter the nic strength, because it tastes, for lack of a better term, sharp, and very noticeably so. Coming off low-concentration juice you'd notice before the vapour got past your tongue.

[–] barsoap@lemm.ee 1 points 6 months ago

Starbucks can provide value to Americans and their coffee can suck; those two things are not mutually exclusive.

[–] barsoap@lemm.ee 6 points 6 months ago

You can, in fact, go to Starbucks and order an espresso. Let's just say that it tastes as if the barista had never drunk one straight.

[–] barsoap@lemm.ee -5 points 6 months ago* (last edited 6 months ago) (2 children)

Americans find lots of value in Starbucks coffee because Americans have no concept of coffee that's simultaneously black, not bitter, not acidic, and sweet. It would be wrong to blame Starbucks for that, they're a symptom, not the cause, but yes, their coffee sucks. As it does everywhere else in the US, the country that thought percolators were a mighty fine idea.

(And yes, I know you guys invented the AeroPress. Good thing, good job, good coffee (with proper beans); now also use it.)

[–] barsoap@lemm.ee 4 points 6 months ago* (last edited 6 months ago) (1 children)

small modular nuclear reactors.

Hmm, let's see what changed since I last looked. This study seems recent; just looking at the publicly available sections:

SMRs do not represent dramatic improvements in economics compared to large reactors.

Translation: they're way more expensive than renewables. SMRs do have some advantages, which are mentioned (less land usage, non-intermittency), but then we get

The advanced SMRs are compared to conventional large reactors and natural gas plants,

...but not renewables + storage, which would be the relevant comparison point. If the comparison looked any good, they definitely would've included it.

Now that doesn't mean that these things don't make sense for Microsoft. They might, e.g., simplify power distribution within datacentres to a degree that other sources just can't, and reduce or eliminate the need for backup power. But generally speaking I'm still smelling techbro BS.
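For a sense of scale, here's a back-of-envelope. The $/MWh figures below are my own ballpark assumptions in the spirit of publicly reported unsubsidised LCOE ranges, not numbers from the study:

```python
# Rough annual electricity cost for a 100 MW datacentre running 24/7.
# The LCOE midpoints are illustrative assumptions (USD per MWh),
# NOT figures taken from the study.
MW, HOURS_PER_YEAR = 100, 8760

lcoe_usd_per_mwh = {
    "nuclear (large or SMR)": 180,
    "renewables + storage": 70,
}

for source, price in lcoe_usd_per_mwh.items():
    annual_usd = MW * HOURS_PER_YEAR * price
    print(f"{source}: ~${annual_usd / 1e6:.0f}M/year")
```

Even if those midpoints are off by a fair margin, the gap is wide enough that "not a dramatic improvement over large reactors" is damning.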

[–] barsoap@lemm.ee 0 points 6 months ago

It was someone different who said that. There's a chance the authors got some claim wrong because their maths and/or methodology is shoddy, but it's a large and diverse set of authors, so that's unlikely. Fraud in CS empirics is practically unheard of: what are you going to do when challenged, claim that the dog ate the program you ran to generate the data? There are shenanigans equivalent to p-hacking, especially in papers from commercial actors trying to sell stuff, but that's not the case here either.

CS academics generally submit papers to journals more because of publish-or-perish than for the additional value formal peer review offers. It's on the internet, after all. By all means, if you spot something in the paper that's wrong, go be right on the internet.

[–] barsoap@lemm.ee 1 points 6 months ago* (last edited 6 months ago) (2 children)

That paper is yet to be peer reviewed or released.

Never doing either (release as in submit to a journal) isn't uncommon in maths, physics, and CS. Not to say that it won't be released, but it's not a proper standard to measure papers by.

I think you are jumping into conclusion with that statement. How much can you dilute the data until it breaks again?

Quoth:

If each linear model is instead fit to the generated targets of all the preceding linear models, i.e. data accumulate, then the test squared error has a finite upper bound, independent of the number of iterations. This suggests that data accumulation might be a robust solution for mitigating model collapse.

Emphasis on "finite upper bound, independent of the number of iterations" by doing nothing more than keeping the non-synthetic data around each time you ingest new synthetic data. This is an empirical study so of course it's not proof you'll have to wait for theorists to have their turn for that one, but it's darn convincing and should henceforth be the null hypothesis.

Btw, did you know that no one ever proved (or at least hadn't, last I checked) that reversing, determinising, reversing, and determinising a DFA again minimises it? Not proven, yet widely accepted as true; crazy, isn't it? But, wait, no, people actually proved it on a napkin. It's just not interesting enough to write a paper about.
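For the curious, the whole thing fits in a few lines. A minimal sketch, using my own encoding of NFAs as (states, alphabet, transitions, starts, finals):

```python
# Brzozowski minimisation: reverse, determinise, reverse, determinise.
# An NFA is (states, sigma, delta, starts, finals) with
# delta: dict mapping (state, symbol) -> set of successor states.

def reverse(nfa):
    states, sigma, delta, starts, finals = nfa
    rdelta = {}
    for (q, a), targets in delta.items():
        for t in targets:
            rdelta.setdefault((t, a), set()).add(q)
    return states, sigma, rdelta, set(finals), set(starts)

def determinise(nfa):
    # Subset construction over reachable state sets (incl. the empty set).
    _, sigma, delta, starts, finals = nfa
    start = frozenset(starts)
    dstates, ddelta, todo = {start}, {}, [start]
    while todo:
        S = todo.pop()
        for a in sigma:
            T = frozenset(t for q in S for t in delta.get((q, a), ()))
            ddelta[(S, a)] = {T}        # keep the NFA format: sets of targets
            if T not in dstates:
                dstates.add(T)
                todo.append(T)
    dfinals = {S for S in dstates if S & finals}
    return dstates, sigma, ddelta, {start}, dfinals

def brzozowski(nfa):
    return determinise(reverse(determinise(reverse(nfa))))

# Example: NFA over {a, b} accepting strings that end in "ab".
nfa = ({0, 1, 2}, {"a", "b"},
       {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}},
       {0}, {2})
print(len(brzozowski(nfa)[0]), "states")   # 3, the minimal DFA
```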

[–] barsoap@lemm.ee 10 points 6 months ago (1 children)

While XMMS is dead, there's qmmp, which supports XMMS and Winamp 2 skins.

[–] barsoap@lemm.ee 1 points 6 months ago

Meh. Today is September 11216, 1993. It's been a while since the internet last went uphill.
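In case anyone wants to check the arithmetic, the Eternal September date is just the number of days elapsed since 1993-08-31; a sketch:

```python
# Print today's date on the Eternal September calendar.
from datetime import date

september_day = (date.today() - date(1993, 8, 31)).days
print(f"Today is September {september_day}, 1993.")
```

September 11216 works out to mid-May 2024, for reference.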
