In this instance it doesn't. But in this universe, almost every industry that uses simulations runs many different ones with different parameters. It doesn't make sense to assume simulation theory with only a single simulation and no interventions, because that assumes the simulator already knew the simulation would produce what they wanted, and that's not guaranteed (for information theory reasons alone!)
Why does testing numerous different circumstances and consequences violate the idea of a simulation? A sufficiently capable simulation engine could literally be used for social experiments
In theory no, but practically speaking the patent system is absurdly dumb around anything IT. Multiple patents that Apple used to win against Samsung got invalidated, which cut part of the awards issued
Facebook / Meta has too long a history of abuse. OTOH I don't think it's necessary to fully defederate, but setting a per-account server default to filter their instance would reduce their influence and the risk of abuse while still allowing people to opt in to connecting to them
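To make the "default filter with per-account opt-in" idea concrete, here's a minimal toy sketch. It's not any real fediverse server's API; the domain name, function, and data structure are all hypothetical, just to show the default-off / opt-in logic:

```python
# Toy sketch (not any real fediverse server's API): a per-account default
# filter for one remote instance, which individual users can opt out of.
FILTERED_BY_DEFAULT = {"threads.net"}  # hypothetical server-wide default

def is_visible(post_domain: str, account_overrides: dict[str, bool]) -> bool:
    """Hide posts from default-filtered instances unless the account opted in.

    account_overrides maps a domain to True (opt in) or False (explicit block).
    """
    if post_domain in account_overrides:
        return account_overrides[post_domain]
    return post_domain not in FILTERED_BY_DEFAULT

# Filtered for a fresh account, visible once the account opts in:
assert is_visible("threads.net", {}) is False
assert is_visible("threads.net", {"threads.net": True}) is True
```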
Alt-right propagandists, leaning towards the dumbest of their kind
So is Gab, but they got defederated by pretty much the entire network so fast that they just gave up and disabled federation on their own end
My first thought is that you could embed this inside buildings rather trivially
They could literally have used some variance in implementation, server-side bandwidth limitations, etc., but THIS is just blatantly obvious
That's why they all try to buy each other
Yes and no; it's not about the instruction set size but about general overhead.
The x86 architecture makes a lot of assumptions that require a bunch of circuitry to be powered on continuously unless you spend a ton of effort on power management and making sure anything not currently needed can go into idle. For mobile CPUs there's a lot of talk about "race to idle" as a way to minimize power consumption for this exact reason: you try to run everything in batches and then cut power.
The more you try to make ARM cover the same use cases and emulate x86, the more overhead you add, but you can keep all that extra stuff powered off when not in use. So you wouldn't increase baseline power usage much, but once you turn everything on at once, efficiency ends up being very similar.
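As a rough software analogy for "race to idle" (this is purely illustrative, not a model of the hardware): instead of waking up for each small task, you coalesce pending work into one burst and leave a long idle window between bursts so the hardware can actually power down.

```python
# Toy illustration: batch queued work into bursts, then sleep, instead of
# waking up constantly for tiny tasks. The sleep stands in for the idle
# window where a real CPU could drop into a deep power state.
import time

def run_batched(tasks, batch_size: int = 10, interval_s: float = 1.0):
    """Run tasks back-to-back in batches, idling between batches."""
    pending = list(tasks)
    while pending:
        batch, pending = pending[:batch_size], pending[batch_size:]
        for task in batch:
            task()                   # burst: do all queued work at full speed
        if pending:
            time.sleep(interval_s)   # idle window between bursts

if __name__ == "__main__":
    run_batched([lambda i=i: print(f"task {i}") for i in range(25)], interval_s=0.1)
```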
There are already CPUs with extra instructions specifically designed for efficient emulation of other instruction sets, including ARM CPUs with x86 emulation at near-native speed.
Checkpointing interesting points in simulations and rerunning with modified parameters happens literally all the time
Especially weather / climate / geology and medicine
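The checkpoint-and-rerun pattern is simple enough to show in a few lines. This is a minimal sketch with a trivial stand-in "simulation" (logistic growth, an assumption purely for illustration); the point is saving state at an interesting moment and branching from it with modified parameters:

```python
# Minimal sketch: snapshot simulation state at an interesting point, then
# restore it and continue with different parameters to compare outcomes.
import copy

def step(state: dict, growth_rate: float) -> dict:
    """One timestep of a toy logistic-growth 'simulation'."""
    state = dict(state)
    state["population"] *= 1 + growth_rate * (1 - state["population"] / 1000)
    state["t"] += 1
    return state

state = {"t": 0, "population": 10.0}
for _ in range(20):
    state = step(state, growth_rate=0.3)

checkpoint = copy.deepcopy(state)       # save the interesting point

# Branch A: continue with the original parameter
a = checkpoint
for _ in range(20):
    a = step(a, growth_rate=0.3)

# Branch B: rerun from the same checkpoint with a modified parameter
b = checkpoint
for _ in range(20):
    b = step(b, growth_rate=0.6)

print(a["population"], b["population"])  # same starting state, different outcomes
```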