Robustness Principle
It's a principle for brittleness. First you get implementation-defined behaviour, then bug-compatible software.
Python, for example, officially recommends four spaces for indentation (that's PEP 8), but as long as the developer is consistent within a block, any Python interpreter will accept any kind of indentation.
That's a recipe for disaster, as your syntax is under-specced. You're outright inviting programmers to produce programs with implementation-defined semantics.
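To illustrate (a hypothetical snippet, using only the built-in `compile`): CPython doesn't care about the indentation width at all, only about consistency within each block.

```python
# CPython only requires that indentation be consistent *within* a block;
# the width itself is left to the programmer. Both of these compile fine.
two_spaces = "def f():\n  x = 1\n  return x\n"
eight_spaces = "def g():\n        y = 2\n        return y\n"

for src in (two_spaces, eight_spaces):
    compile(src, "<demo>", "exec")  # would raise IndentationError if rejected

# Mixing widths *within* one block, however, is a syntax error.
mixed = "def h():\n  a = 1\n      b = 2\n"
try:
    compile(mixed, "<demo>", "exec")
except IndentationError as e:
    print("rejected:", e.msg)
```

So the language spec pins down consistency, not width, and everything beyond that is left to style guides.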
Haskell (which also uses layout syntax) never had that problem, as tabs were simply defined to be eight spaces wide -- but that then led to issues with people setting different tab widths in their editors and a flurry of syntax errors when they did "tabs for indentation, spaces for alignment". Which is why Haskell then moved to outlaw tabs; I think it's still in the "throw a warning" phase, but at some point it's going to be a hard error.
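Python 3, incidentally, already took the hard-error route for exactly this ambiguity: an indentation whose width depends on the assumed tab size is rejected outright (a small sketch, again via `compile`).

```python
# A tab-indented line followed by an 8-space line: equal if a tab
# counts as 8 columns, unequal if it counts as 1 -- precisely the
# editor-dependent ambiguity. Python 3 refuses to guess.
ambiguous = "if True:\n\tx = 1\n        y = 2\n"
try:
    compile(ambiguous, "<demo>", "exec")
except TabError as e:
    print("hard error:", e.msg)
```

The tokenizer compares indentation under both tab-size interpretations and raises `TabError` whenever they disagree.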
That's not to say that kconfig should do the same -- presumably they used tabs for a good reason, and all those other programs simply aren't following the spec. Including what are essentially unit tests in the actual production files is a good move when you're dealing with that kind of situation.
Argh, it's been a while. The question is whether an n-qubit system can actually contain arbitrary (k <= 2^n) amounts of n-bit states for arbitrary values of n and k: such a system might work up to a certain number, but then lose coherence once you try to exceed what the universe can actually compute. As far as I know we simply don't know, because no one has yet built a system that pushes those boundaries in earnest. The limiting factor is more n than k, I think, but then I'm not a quantum physicist.
It would still mean ludicrously miniaturised computing -- miniaturised about as far as physics allows -- but it would not give the asymptotic speedup cryptologists are having nightmares about.
This is why some people (me included) don’t believe the current form of quantum computers we are researching can actually work in the real world.
And then there's some people (me included) who bet a whole beer on quantum computers being inherently impossible. Not the "get them to calculate" part, but the "shave a factor off the asymptotics of computers using ordinary physics" part. The argument is simple: it could very well be that the more data you try to squeeze into a qubit, the fuzzier the result is going to get. So if you put ten million numbers each into two qubits and somehow make the qubits add them, you'll get ten million results that are ten million times fuzzier than if you'd put in a single number. To the best of my knowledge I've not yet lost that bet: it has not been demonstrated that researchers won't run into a wall there, essentially that the universe has a limited computation capacity per volume of space (or however you measure things at that scale).
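A back-of-the-envelope sketch of that dilution worry (plain Python, no quantum library; the numbers are purely illustrative): an n-qubit register is described by 2^n complex amplitudes, and a uniform superposition over k basis states gives each one amplitude 1/sqrt(k), i.e. measurement probability 1/k. Pack a million values in and each comes out with probability one-in-a-million.

```python
import math

def uniform_superposition(n, k):
    """State vector of an n-qubit register holding the first k basis
    states in equal superposition (requires k <= 2**n)."""
    dim = 2 ** n
    assert k <= dim
    amp = 1.0 / math.sqrt(k)
    return [amp if i < k else 0.0 for i in range(dim)]

# A 20-qubit register holding a million values at once:
state = uniform_superposition(n=20, k=1_000_000)
print(len(state))        # 2**20 amplitudes describe the register
print(state[0] ** 2)     # each stored value has probability ~1e-06
```

Whether nature actually sustains such states at scale, rather than decohering somewhere along the way, is precisely the open question.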
Other fun thing to annoy people with: Claim that deciding between P = NP and P /= NP is undecidable.
Indeed, to catch a fraudster and then file a criminal complaint, you should verify stuff.
Attempt is usually punishable when it comes to fraud, btw; he'd be on the hook even if caught before services were rendered.
I think games using actual AI would be undesirable: it would make them much less predictable and probably way harder.
You get something very predictable when you throw NEAT at Flappy Bird. And you don't need ML approaches to make game AI not fun. Take RTS games: in the beginning many AIs were very simple and had access to what were essentially cheat codes just to be halfway competitive. Then programmers sat down and let them path-find through possibility spaces such as economic build-up to formulate a strategy to follow, so they didn't need to cheat. Thing is, those things are pretty much on or off: either they suck badly and need cheating to survive, or they're so good they get accused of cheating. So you need to dumb them down to make them believable -- make them take non-optimal decisions and make mistakes in execution.
That's the main issue: having a believable and fun opponent, not either an idiot or a perfect genius, and you don't need ML approaches to end up at either extreme. Most studios pretty much gave up on making AI smart and keep it deliberately simple, to the point where HL2 is still the pinnacle of achievement when it comes to game AI, with second place going to HL1. Those troopers are darn smart, and if the player couldn't listen in on their radio chatter they would indeed appear cheaty, always appearing out of nowhere... no dummy, they flushed you into an ambush. That is, Valve solved the issue by essentially letting the player cheat: the player gets more knowledge than the AI (the radio chatter), and, compared to the troopers, the player is a bullet sponge. All of that is non-ML; it's all hand-written state machines, more than complex enough to exhibit chaotic behaviour.
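For flavour, this is the sort of thing hand-written state machines means here -- a toy sketch, emphatically not Valve's actual code, with all names made up:

```python
# Toy NPC state machine: patrol until the player is seen, chase,
# fall back to an ambush when hurt. No ML anywhere -- just explicit,
# hand-authored transitions, which is what most game "AI" is.
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    AMBUSH = auto()

def next_state(state, sees_player, health):
    if state is State.PATROL and sees_player:
        return State.CHASE
    if state is State.CHASE and health < 30:
        return State.AMBUSH   # this is where the radio chatter would fire
    if state is State.AMBUSH and not sees_player:
        return State.PATROL
    return state

# A short encounter:
s = State.PATROL
s = next_state(s, sees_player=True, health=100)  # -> CHASE
s = next_state(s, sees_player=True, health=20)   # -> AMBUSH
print(s)  # State.AMBUSH
```

Compose a few dozen of these per NPC type and layer them, and the emergent interactions already look clever without any learning involved.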
Unless you talk to game developers, where a "follow the ball" "algorithm" for Pong classifies as AI, because it's controlling the behaviour of a game-world agent that's not the player. The term pretty much matches up with what game theorists (as in game theory, not computer games) call strategies. If people use ML for that kind of stuff, it's not the approaches making news nowadays, because inference is (comparatively) expensive; stuff like NEAT churns out much more sensible actor programs, as it evolves structure, not just weights.
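The Pong "algorithm" in full, for reference (a minimal sketch; the speed cap is a made-up tuning knob, and it's also what keeps the opponent beatable):

```python
def follow_ball(paddle_y, ball_y, max_speed=4.0):
    """The entire Pong 'AI': step the paddle toward the ball's
    y-position, capped at max_speed so a fast ball can outrun it."""
    delta = ball_y - paddle_y
    return paddle_y + max(-max_speed, min(max_speed, delta))

# The paddle closes a 30-pixel gap over successive frames:
y = 0.0
for _ in range(10):
    y = follow_ball(y, 30.0)
print(y)  # 30.0
```

One clamped subtraction per frame; by the game-developer definition, that's an AI.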
A good patina will contain a good chunk of burnt oil; it's not that the stuff vanishes when smoke gets produced, and linseed oil in fact produces very little smoke compared to, say, canola. Never getting to the smoke point of whatever you have on there will result in a patina that is non-black and not entirely unlikely also gooey.
It's not a good idea to go miles beyond the smoke point, but hovering around it is pretty much optimal. You use oils with higher smoke points if you want a more aggressive sear without ruining the taste of whatever it is you're searing; the thin layer you smoke off when heating the pan, or that smokes off while the pan is cooling quickly after adding oil and ingredients, is generally so minuscule that it doesn't really affect taste, short of giving some wok hei, which is generally a good thing. If the smoke alarm goes off or you need to open a window, you're overdoing it.
Not an issue once it's on the pan: linseed oil oxidises quite quickly when exposed to air, which is where the heat comes from, and it's certainly exposed to air on a pan. However, the pan is also an excellent heatsink and not flammable. Rags are a combination of even more exposure to oxygen (because the oil soaks into the fibres and then has lots of surface area) and the rag itself being flammable; those are very specific circumstances. Bottles of the stuff also don't spontaneously combust in the fridge, they only spoil within a week or so (for culinary use, that is; it's still perfectly fine to season pans with, and still food-safe. It just starts to taste like ass quite quickly, but that doesn't matter when you burn the stuff anyway).
But yes I should probably have mentioned that I flush my kitchen tissues when working with linseed oil.
Nah this is more reduce a tomato sauce territory.
Modern dish soap is neither acidic nor basic, so it's quite harmless to the patina, but it's also superfluous: you generally don't want to degrease the thing, and degreasing is the only thing soap is good for. Boiling some plain water in it cleans off anything you want to get rid of. If, for some reason, you're terrified of bugs when not using soap, get yourself a bonfire and heat your pan as hot as you want for as long as you want; nothing will survive that. Just make sure not to melt it.
To reseason cast iron, you need an oil high in poly-unsaturated fatty acids.
In other words: Linseed.
Though I wouldn't go so far as to say "need". Linseed works much better, builds a nicer patina very quickly, but pretty much any fat works. In practice mine is getting seasoned with olive oil because that's what I have standing around in the kitchen.
Proper technique is much more important in practice: first and foremost, heat the pan empty, then add oil and fry, then clean, ideally without degreasing (boiling water and a spatula do wonders), then (if necessary) add a drop of oil and try to rub it off with kitchen tissue, then put the pan back on the stove to dry and maybe polymerise a little. Always keep that thin layer of oil, otherwise the pan is going to rust.
You can have a perfect patina, and if you don't heat up the pan before putting stuff in it, things are still going to stick. You can have practically no patina, and if you bring just a single thin layer of any fat up to its smoke point and after that add oil (so the thing isn't completely dry), things aren't going to stick.
No. Having a language depend on semicolons even though there are ways to do without -- ways that don't even need to involve layout if you don't want them to -- is, well, not absurd on the face of it: it's hysterical raisins.
Haskell has one of the most admired syntaxes out there, and it's layout-based. Its rules are clean, predictable, very simple, and most of all intuitive. It makes sure that semantic structure always follows visual structure, and thus provides a single source of truth, where Algol-likes (i.e. everything that looks at least vaguely like C) have two.
I don't indent my Rust, I let rustfmt do that. Even with all that automation I still get into lots of missing or mismatched brace situations, which literally never happen in Haskell, because the structure of the program is visually obvious: you don't have to look for tiny squiggles to figure out what it is.