andyburke

joined 2 years ago
[–] andyburke@fedia.io 2 points 1 month ago (1 children)

You sound like me. I hope you can find a way to flip your focus: your time outside work should be far more about you than your work life currently seems to allow.

Maybe you are one of the very few with a meaningful job. If not, consider trying to treat your job like the bullshit it is and use your best cycles outside work on stuff that will really make you happy.

[–] andyburke@fedia.io -3 points 1 month ago (9 children)

Jellyfin is open source. You could be helping out.

Best of luck with Plex, though. I would say this is even more writing on the wall, but it does not sound like that matters to you.

[–] andyburke@fedia.io 2 points 1 month ago
[–] andyburke@fedia.io 22 points 1 month ago (7 children)

My guess? Using the term "addiction."

[–] andyburke@fedia.io 0 points 1 month ago

https://fuelarc.com/tech/can-teslas-self-driving-software-detect-bus-only-lanes-not-reliably-no/

edit: it's trivial to find examples of these utterly failing at basic driving. This isn't close to human performance and it is obvious.

[–] andyburke@fedia.io 2 points 1 month ago

Get the data. Get it without putting me and my family at risk.

[–] andyburke@fedia.io 0 points 1 month ago (2 children)

This is the same anecdotal appeal we get over and over while AI cars drive into firetrucks and trees in ways even the most basic licensed driver would not. Then we are told these are safer because people text or become distracted. I am over this garbage. Get real numbers and find a way to do it that doesn't put me and my family at risk.

[–] andyburke@fedia.io 5 points 1 month ago (5 children)

Evidence, please.

I have literally been in thousands of driving situations where a human has not randomly driven into a tree.

You are making a claim here: that these AI systems are safer than humans. There is at least one clear counterexample to your claim (which I cited - https://youtu.be/frGoalySCns if anyone wants to try to figure out what this AI was doing), and there are others, including cases where they have driven into the sides of tractor trailers. I assume you will make an argument about aggregates, but the sample size we have for these AI driving systems is orders of magnitude smaller than the one we have for humans. And having now watched years of these incidents pile up, I believe much more rigorous research and testing is needed before anyone can validly claim these systems are safer.
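To make the aggregates point concrete, here is a minimal sketch of why a small exposure base leaves a huge uncertainty around a crash rate. The numbers are made up purely for illustration, and `poisson_rate_ci` is a hypothetical helper, not from any cited source:

```python
# Minimal sketch with made-up numbers (NOT real crash data): an exact
# (Garwood) confidence interval for a Poisson event rate, showing why a
# small exposure base cannot support strong safety claims.
from scipy import stats

def poisson_rate_ci(events, exposure, conf=0.95):
    """Exact two-sided CI for events/exposure, assuming a Poisson process."""
    alpha = 1 - conf
    lower = stats.chi2.ppf(alpha / 2, 2 * events) / (2 * exposure) if events else 0.0
    upper = stats.chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * exposure)
    return lower, upper

# Hypothetical: 2 crashes over 10 million AI-driven miles...
print(poisson_rate_ci(2, 10e6))     # roughly (2.4e-08, 7.2e-07): ~30x wide
# ...versus 2,000 crashes over 10 billion human-driven miles.
print(poisson_rate_ci(2000, 10e9))  # roughly (1.9e-07, 2.1e-07): well pinned down
```

With tiny exposure, the interval is so wide that "safer than humans" and "much worse than humans" are both consistent with the same data.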

[–] andyburke@fedia.io 9 points 1 month ago (10 children)

A Tesla in FSD randomly veered off the road into a tree. There is video. It makes no sense; it is very difficult to work out why the AI thought that looked like a good move.

The tools this author says we have do not work the way people claim they do.

[–] andyburke@fedia.io 3 points 1 month ago

You continue to spout things with no citations and a bad vibe. I am done here.

[–] andyburke@fedia.io 5 points 1 month ago (2 children)

That prompt modification "directed Grok to provide a specific response on a political topic" and "violated xAI's internal policies and core values," xAI wrote on social media.

Relevant quote, because one of us definitely didn't read the article.

Edit: not to mention that believing a system prompt somehow binds or constrains these systems, rather than merely influencing them, would also indicate to me that one of us doesn't understand how they work.
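For anyone unclear on that last point, here is a minimal sketch. The message format is hypothetical, loosely modeled on common chat APIs, and is not xAI's actual stack:

```python
# Minimal sketch (hypothetical message format, loosely modeled on common
# chat APIs): the "system prompt" is just more text prepended to the
# model's input. It shifts the probability of the next tokens; nothing
# architecturally forbids the model from contradicting it.
messages = [
    {"role": "system", "content": "Never mention topic X."},
    {"role": "user", "content": "What do you think about topic X?"},
]

# Before sampling, chat models see a single flat token sequence:
flat_prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
print(flat_prompt)
# system: Never mention topic X.
# user: What do you think about topic X?
#
# The model then samples a continuation of this text. The system line is
# influence over that sampling, not a hard constraint on the output.
```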

[–] andyburke@fedia.io 2 points 1 month ago (4 children)

Why was it mentioning it at all in conversations not about it?

And why does the fact that it did that not seem to bother you?
