[–] Rivalarrival@lemmy.today 52 points 1 month ago (10 children)

They need to advertise a legitimate use for their service.

If they can't point to a threat from public Wi-Fi or other security concerns that their service remedies, then its only purpose is to bypass region limits and block infringement notices, and they would be considered complicit in such infringement.

That their service also hinders efforts to stop pirates needs to be an "unintended" and "unavoidable" side effect.

[–] Rivalarrival@lemmy.today 2 points 1 month ago (1 children)

We have incentivized night-time consumption. Base load generation (nuclear, coal) can't ramp up and down fast enough to match the daily demand curve. Those plants can't produce more than the minimum overnight demand, but they have to keep producing that amount around the clock. To minimize the need for "peaker" plants during the day, grid operators want the overnight demand to be as high as possible.

So they put steel mills, aluminum smelters, and other heavy industry on overnight shifts by offering them extraordinarily cheap power.

That incentivized overnight load needs to be shifted to daytime, so it can be met with solar and wind. Moving forward, we need to minimize overnight demand.
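
To make that concrete, here's a toy hourly model (illustrative, made-up numbers, not real grid data) comparing today's arrangement, with the flexible industrial load parked overnight against the base load floor, to the same load shifted under the midday solar peak:

```python
# Toy illustration only -- made-up numbers, not real grid data.
# Compares keeping a flexible industrial load overnight (today's cheap-rate
# incentive) with shifting it under the midday solar peak.

BASE_LOAD = 30   # GW of always-on generation (nuclear/coal); can't ramp quickly
FLEX = 8         # GW of flexible industrial load (smelters, mills, etc.)

# Rough background demand by hour (GW): low overnight, evening peak.
background = [22, 20, 19, 19, 20, 23, 28, 33, 36, 38, 39, 40,
              40, 39, 39, 40, 42, 45, 46, 44, 40, 34, 28, 24]
# Hypothetical solar output by hour (GW): zero at night, midday peak.
solar = [0, 0, 0, 0, 0, 1, 4, 8, 12, 15, 17, 18,
         18, 17, 15, 12, 8, 4, 1, 0, 0, 0, 0, 0]

def report(label, flex_hours):
    demand = [background[h] + (FLEX if h in flex_hours else 0) for h in range(24)]
    supply = [BASE_LOAD + solar[h] for h in range(24)]
    # Daytime surplus that has to be curtailed or dumped at negative prices:
    surplus = sum(max(0, supply[h] - demand[h]) for h in range(6, 18))
    # Overnight demand floor -- what base load is sized to serve around the clock:
    night_floor = min(demand[h] for h in [22, 23, 0, 1, 2, 3, 4, 5])
    print(f"{label}: {surplus} GWh daytime surplus, {night_floor} GW overnight floor")

report("Flexible load overnight (today)", {23, 0, 1, 2, 3, 4, 5, 6})
report("Flexible load at midday        ", set(range(9, 17)))
```

With the load overnight, the demand floor rises toward the 30 GW that base load produces and the solar hours run a large surplus; move the load to midday and that surplus mostly disappears, while the overnight floor drops well below base load output, which is the point of minimizing overnight demand as base load gets replaced by solar and wind.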

[–] Rivalarrival@lemmy.today 1 points 1 month ago (3 children)

Because it is not cost effective. Simple as that.

The problem is that we don't have enough demand shaping to shift night-time loads to daytime, and we don't have enough storage to shift daytime production to overnight. The result is that daytime generation regularly runs into negative rates (you have to pay to put power on the grid), which eats into the returns on your investment in solar.

As far as problems go, it's a good one to have, as it will eventually result in lower prices for daytime generation.
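
A crude numeric sketch of how that happens (invented numbers): once must-run generation plus solar exceeds whatever demand and storage can absorb, the marginal megawatt-hour has negative value, and the clearing price drops below zero.

```python
# Crude illustration -- invented numbers.
must_run = 30    # GW of base load that can't ramp down quickly
solar = 18       # GW of midday solar at near-zero marginal cost
demand = 40      # GW of midday demand
storage = 2      # GW of storage currently able to charge

surplus = must_run + solar - demand - storage    # 6 GW with nowhere to go
# With no flexible demand left to soak that up, producers bid below zero:
# paying, say, $20/MWh to offload power beats shutting a plant down and back up.
price_per_mwh = 35 if surplus <= 0 else -20      # illustrative only
print(f"Midday surplus: {surplus} GW -> clearing price around ${price_per_mwh}/MWh")
```

Every hour spent at that negative price is revenue a solar owner planned on but never sees.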

[–] Rivalarrival@lemmy.today 0 points 2 months ago* (last edited 2 months ago)

Can't give a general answer, but we can look to history. Consider the Castrati: removing the testicles before puberty prevented a boy's voice from changing, giving them a high singing voice in adulthood.

[–] Rivalarrival@lemmy.today 1 points 2 months ago (1 children)

I'm glad you mentioned insulin pumps, because there is a community of developers working on pumps, making them available to a broader audience and providing more people with better control over their blood sugar levels than manufacturers are willing or able to provide on their own.

https://openaps.org/

What you are arguing for is a threat to systems like OpenAPS, and to the people who benefit from them.

[–] Rivalarrival@lemmy.today 2 points 2 months ago* (last edited 2 months ago)

People repair their brakes wrong all the time. It's absolutely caused accidents.

Self-repair also allows end users to install parts superior to OEM, improving braking performance and preventing accidents.

Any automotive technician can tell you that manufacturers take engineering shortcuts, resulting in a product with certain deficiencies. The manufacturer's motivation is to put out a product that widely appeals to the general public. They want nothing to do with a product specifically tailored to the needs of a particular individual.

[–] Rivalarrival@lemmy.today 9 points 2 months ago* (last edited 2 months ago) (3 children)

> We probably shouldn't let people repair their own brake pads

What kind of auth-dystopian nonsense is that?

> Repair an insulin pump the wrong way and it will absolutely kill you

You're just as dead if you can't get that insulin pump repaired or replaced because the manufacturer won't or can't support it. When they go bankrupt because other customers have sued them into non-existence, you still own the device they manufactured, and you still need it repaired.

Further, you presume the manufacturer can provide the best repairs. It is entirely possible and plausible that a competing engineer or programmer can improve upon the device, rendering it safer or providing superior operation. Car mechanics can install a better braking system than the cheap, generic calipers and pads provided by the factory. Repair technicians can replace generic parts of medical devices with better ones, allowing superior operation.

[–] Rivalarrival@lemmy.today 4 points 2 months ago

Yes, dangers exist from third party repairs.

Refusal or even simple failure to provide critical repair data to the end user or their agent denies the end user the ability to make an informed decision about repairs.

The company should be liable for all damages from a botched third-party repair unless it provides the end user with complete specifications and unrestricted access to the device, so the user can make informed decisions about repairs.

[–] Rivalarrival@lemmy.today 3 points 2 months ago* (last edited 2 months ago)

Proprietary information and corporate classified information cease to exist once they are incorporated into the device and sold to the end user. That information now belongs to the end user, who will continue to need it even if the company goes out of business or refuses service to the owner of the device.

Any attempt to conceal that information from the end user should make the company liable for any failed repair performed by any individual, including harm arising from that failed repair. The only way to avoid that liability is to release all information to the end user, so they are fully informed when making a repair decision.

[–] Rivalarrival@lemmy.today 1 points 2 months ago

The "collapse" you're talking about is a reduction in the diversity of the output, which is exactly what we should expect when we impart a bias toward obviously correct answers, and away from obviously incorrect answers.

Further, that criticism is based on closed-loop feedback, where the LLM is training only on its own outputs.

I'm talking about open-loop, where it is also evaluating the responses from the other party.

Further, the studies whence such criticism comes are based primarily on image generation AIs, not LLMs. Image generation is highly subjective; there is no definitively "right" or "wrong" output, just whether it appeals to the specific observer. An image generator would need to tailor itself to that specific observer.

LLM sessions deal with far more objective content.

A functional definition of insanity is doing the same thing over and over and expecting different results. The inability to consider its previous interactions denies it the ability to learn from its previous behavior. The idea that AIs must not be allowed to train on their own data is functionally insane.
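
To pin down the closed-loop/open-loop distinction, here's a minimal sketch (hypothetical names, not any real training pipeline): closed-loop training ingests everything the model generated, while open-loop training keeps only the replies that the other party's follow-up appears to validate.

```python
# Conceptual sketch only -- hypothetical names, not a real training pipeline.
from dataclasses import dataclass

@dataclass
class Exchange:
    prompt: str
    model_reply: str
    user_followup: str   # the other party's response to the model's reply

def looks_positive(followup: str) -> bool:
    """Deliberately dumb stand-in for evaluating the other party's feedback."""
    negative_markers = ("no,", "wrong", "that's not", "incorrect")
    return not any(m in followup.lower() for m in negative_markers)

def closed_loop(exchanges: list[Exchange]) -> list[str]:
    # Train on the model's own outputs indiscriminately -> collapse risk.
    return [e.model_reply for e in exchanges]

def open_loop(exchanges: list[Exchange]) -> list[str]:
    # Keep only replies the other party implicitly validated.
    return [e.model_reply for e in exchanges if looks_positive(e.user_followup)]

log = [
    Exchange("What's 7*8?", "56.", "Thanks!"),
    Exchange("What's 7*8?", "54.", "No, that's not right."),
]
print(f"closed-loop keeps {len(closed_loop(log))} replies, open-loop keeps {len(open_loop(log))}")
```

The filter is intentionally crude; the point is only that the selection signal comes from outside the model, which breaks the pure self-feedback loop the collapse criticism describes.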

[–] Rivalarrival@lemmy.today 1 points 2 months ago* (last edited 2 months ago) (2 children)

> Also, with llms there is no "next time" it's a completely static model.

It's only a completely static model if it is not allowed to use its own interactions as training data. If it is allowed to use the data acquired from those interactions, it stops being a static model.

Kids do learn elementary arithmetic by rote memorization. Number theory doesn't actually develop significantly until somewhere around 3rd to 5th grade, and even then, we don't place a lot of value on it at that time. We are taught to memorize the multiplication table, for example, because simply knowing that table is far more computationally valuable than being able to re-derive it at any given time. That rote memorization is mimicry: the child is simply spitting out a previously learned response.
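
As a trivial sketch of that computational point: recalling a memorized table is constant-time mimicry of a previously learned response, while re-deriving the product means redoing the underlying work on every call.

```python
# Trivial sketch: rote memorization as a lookup table vs. re-deriving each answer.

# The "memorized" times table, built once up front.
TIMES_TABLE = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

def recall(a: int, b: int) -> int:
    """Mimicry: spit out the previously learned response."""
    return TIMES_TABLE[(a, b)]

def rederive(a: int, b: int) -> int:
    """Do the underlying work (repeated addition) every single time."""
    return sum(a for _ in range(b))

assert recall(7, 8) == rederive(7, 8) == 56
```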

Remember: LLMs are currently toddlers. They are toddlers with excellent grammar, but they are toddlers.

Remember also that simple mimicry is an incredibly powerful problem solving method.
