AI is actually deterministic; a random input is usually included to let you get multiple outputs for generative tasks. And anyway, you could just save the "random" output when you get a good one.
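A minimal sketch of what I mean (the "model" here is just a stand-in function, not a real network, so treat the details as illustrative):

```python
import numpy as np

def generate(seed: int) -> np.ndarray:
    # The only randomness is the injected noise; everything after it is deterministic.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(4)
    return np.tanh(noise * 2.0)

print(generate(42))  # same seed...
print(generate(42))  # ...same output, every time
print(generate(7))   # different seed, different output
```

Fix the random input and you get the same output back every time, which is exactly why you can keep the one you like.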
Do you have a source for this? This sounds like fine-tuning a model, which doesn't prevent data from the original training set from influencing the output. The method you described would only work if the AI were trained from scratch on only images of Iron Man and cowboy hats. And I don't think that's how any of these models work.
Other than citing the entire training data set, how would this be possible?
When does that even happen? If you have nano installed, wouldn't it work too?
Why do you need so much info on Mike? Can't you just evaluate his statements/work on their own merits? The whole point of open source, federated platforms is that you don't have to trust him. If he decides to enshittify it, you can just go with a fork or another instance. A nomadic identity isn't a centralized alternative to the fediverse; it's just a way of bringing some of the features of a centralized identity to a decentralized one (at least, that's how I interpreted the article).
Quotas are not the only way to combat discrimination, nor are they a good one. Name-blind hiring would resolve name discrimination without making additional presumptions about the applicant pool. A quota presumes that the applicant pool has a particular racial mix, and that a person's qualifications and willingness to apply are independent of race. And even if those happen to be true, it can't take into account the possibility that the random distribution of applicants just happens to sway one way or another in a particular instance.
The bill itself says, more or less, "any foreign adversary controlled app is banned. Also, TikTok is a foreign adversary controlled app". So it doesn't apply exclusively to TikTok, but it does explicitly include them.
I dislike TikTok as much as the next guy, but I think there are several issues with this bill:
- It specifically mentions TikTok and ByteDance. While none of the provisions seem to apply exclusively to them, the way they are included would give them no recourse to petition this, the way other companies would be able to (i.e., other companies could argue in court that they aren't controlled by a foreign adversary, but TikTok can't, because the bill literally defines "foreign adversary controlled application" as "TikTok, or ..." (g.3.A)). It also gives the appearance that this law is only supposed to apply to them, which isn't what it says, but it might be treated that way anyway.
- It leaves the determination of whether or not a company is "controlled by a foreign adversary" entirely up to the president. He has to explain himself to Congress, but doesn't need their approval. That seems ripe for exploitation. I think it should require Congress to approve, either in addition to or instead of the president.
- According to g.2.A.ii (in the definition of "covered company"), the law only applies to social media with more than 1,000,000 monthly active users. Not sure why that's included.
- There is a specific exemption for any app that's for posting reviews (g.2.B). I'm guessing one such company paid a whole lot to just not have this apply to them.
You are misrepresenting a lot of stuff here.
This entirely depends on the quality of the AI and the task at hand. A well-made AI can be relatively predictable. However, most tasks that AI excels at are tasks which themselves do not have a predictable solution. For instance, handwriting recognition can be solved by a neural network with better-than-human accuracy. That task does not have a perfect solution, and there is no single ideal answer for each possible input (one person's 'a' could look exactly the same as another's 'o'). The same can be said for almost all games, especially those involving a human player.
Unpredictable things can be tested. That's pretty much what the entire field of statistics and probability is about. Also, testability is a fundamental requirement for any kind of machine learning. It isn't just a good practice kind of thing; if you can't test your model, you don't even have a model in the first place. The whole point is to create many candidate models and test them to find the best one.
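For example, here's roughly what that looks like in practice (hypothetical candidates on a toy dataset, using scikit-learn; not any particular project's setup):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Score several candidate models on held-out data and keep the best one.
X, y = load_digits(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=2000),
    "small_mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
}

for name, model in candidates.items():
    # Cross-validation averages accuracy over several train/test splits --
    # a statistical test of an "unpredictable" model.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```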
A neural network only knows what you tell it. If you don't tell it where the player is, it's not going to magically deduce it from nothing. Also, its output has to be interpreted to even be used. The raw output is a vector of numbers; how this is transformed into usable actions is entirely up to the developer. If that transformation allows violating the rules, that's the developer's fault, not the network's. The same can be said of human input: it is the developer's responsibility to transform that into permissible actions in the game.
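A rough sketch of that interpretation layer (the action names and the `game_state.is_legal` check are made up for illustration):

```python
import numpy as np

ACTIONS = ["move_left", "move_right", "jump", "attack"]

def choose_action(raw_output: np.ndarray, game_state) -> str:
    # The network only produces scores; the developer decides what they mean.
    # Masking out illegal actions here is where the rules get enforced.
    legal = np.array([game_state.is_legal(a) for a in ACTIONS])
    scores = np.where(legal, raw_output, -np.inf)
    return ACTIONS[int(np.argmax(scores))]
```

If cheating is possible, it's because this layer (or the equivalent one handling human input) lets it through.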
That is possible, which is why you should make a performance metric that reflects what you actually want it to do. This is a very common issue and is just part of the process of making an AI. It is not an insurmountable problem.
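As a made-up example of what I mean by a metric that reflects the actual goal (the car attributes here are invented for illustration):

```python
def naive_reward(car) -> float:
    # Easy to game: the agent can rack up speed while driving in circles.
    return car.speed

def better_reward(car, previous_track_position: float) -> float:
    # Rewards forward progress along the track and penalizes leaving the road,
    # so "fast but pointless" behaviour no longer scores well.
    progress = car.track_position - previous_track_position
    off_road_penalty = 1.0 if car.is_off_road else 0.0
    return progress - off_road_penalty
```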
Neural networks have been used to play countless games before. It's probably one of the most studied use cases simply because it is so easy to do.