wouldn't it make more sense to do a trial that tests their supposed advantages over purpose-built robots, rather than one which decidedly does not
underisk
Yeah, but the article says the only thing these ones are gonna do is deliver parts, which is probably overkill given the likely expense of the sophistication needed to imitate even a fraction of a human worker's versatility. To say nothing of the difficulty of adapting them to different tasks without reprogramming or retraining.
I cannot conceive of a task where a humanoid robot would be better suited than a robot purpose-built for that task, without trying to mimic the human form.
gambling repeatedly with other people’s money
so... a stock broker?
I mean, it's still an AI; it's not going to block everything perfectly because these models are statistical, not deterministic. I've had Bing block generated images from displaying because they presumably got classified as a banned subject, so it's not exactly unprecedented.
Image generation models are also classification models.
Gonna start a business for car wraps with integrated faraday cages
You're not entirely wrong. It's more like a series of multi-dimensional maps with hundreds or thousands of true/false pathways stacked on top of each other, then carved into by training until it takes on a shape that produces the 'correct' output from your inputs.
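A loose way to picture those "stacked true/false pathways": each ReLU unit in a network is either active (passing a signal) or off, so the trained weights effectively carve input space into on/off regions. A toy sketch with arbitrary, untrained weights (everything here is made up for illustration):

```python
# Illustration of the "carved pathways" picture: each hidden unit
# acts as an on/off gate over the input. Weights are arbitrary,
# not trained -- training is what does the "carving".

def relu(x):
    return max(0.0, x)

def tiny_net(x, w1=(1.0, -1.0), b1=(-0.5, 0.5), w2=(1.0, 1.0)):
    # Two hidden units; for a given input, typically only one is active.
    hidden = [relu(w * x + b) for w, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden))

print(tiny_net(2.0))   # first unit active  -> 1.5
print(tiny_net(-2.0))  # second unit active -> 2.5
```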
The part you're missing is the metadata. AI (neural networks, specifically) is trained on the data as well as some sort of contextual metadata related to what it's being trained to do. For example, with reddit posts they would feed in things like "this post is popular", "this post was controversial", "this post has many views", etc. in addition to the post text, if they wanted an AI that could spit out posts likely to do well on reddit.
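To make that concrete, here's a minimal sketch of pairing post text with a label derived from engagement metadata; the field names and threshold are invented for illustration, not any real pipeline:

```python
# Hypothetical sketch: the metadata (upvotes) supplies the
# supervision signal; the text is the input feature.

posts = [
    {"text": "TIL something neat", "upvotes": 5400},
    {"text": "my cat", "upvotes": 12},
]

def to_training_example(post, popularity_threshold=1000):
    features = post["text"]
    label = "popular" if post["upvotes"] >= popularity_threshold else "niche"
    return (features, label)

dataset = [to_training_example(p) for p in posts]
print(dataset)
# A real pipeline would tokenize the text and train a classifier
# to predict the label.
```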
Quantity is a concern; you need a fairly large amount of data to have any hope of training an AI well, but there are diminishing returns past a certain point. The more data you feed it, the more metadata you potentially have to add that can only be provided by humans. For instance, with sentiment analysis you need a human being to sit down and label various samples of text with different emotional responses, since computers can't really do that automatically.
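In sentiment analysis that human labeling looks something like the pairs below (labels and samples invented for illustration); the model only ever sees these pairs and can't conjure the labels itself. The "classifier" here is a crude keyword stand-in, not a trained model:

```python
# Human-labeled (text, sentiment) pairs -- the part a computer
# can't produce on its own.
labeled = [
    ("I love this phone", "positive"),
    ("battery died in a day, terrible", "negative"),
    ("absolutely love the screen", "positive"),
    ("terrible support experience", "negative"),
]

def naive_classifier(text):
    # Keyword matching as a stand-in for a trained model.
    if "love" in text:
        return "positive"
    if "terrible" in text:
        return "negative"
    return "neutral"

# Human labels are also what you measure accuracy against.
correct = sum(naive_classifier(t) == y for t, y in labeled)
print(correct / len(labeled))
```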
Quality is less of a concern. Bad-quality data, or data with poorly applied metadata, will result in an AI with less "accuracy", but a few outliers and mistakes here and there won't be too impactful. Quality here could be defined as how well your training set represents the kind of input you'll be expecting the AI to work with.
Legally, I think they'd probably be exempted from liability as a common carrier, similar to how your email provider isn't going to get sued if you email someone a link to piracy. I doubt they're interested in testing that theory, though.