this post was submitted on 08 Jan 2024
102 points (95.5% liked)

Technology

59534 readers
3195 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Key Points:

  • Security and privacy concerns: Increased use of AI systems raises issues like data manipulation, model vulnerabilities, and information leaks.
  • Threats at various stages: Training data, software, and deployment are all vulnerable to attacks like poisoning, data breaches, and prompt injection.
  • Attacks with broad impact: Availability, integrity, and privacy can all be compromised by evasion, poisoning, privacy, and abuse attacks.
  • Attacker knowledge varies: Threats can be carried out by actors with full, partial, or minimal knowledge of the AI system.
  • Mitigation challenges: Robust defenses are currently lacking, and the tech community needs to prioritize their development.
  • Global concern: NIST's warning echoes recent international guidelines emphasizing secure AI development.

Overall:

NIST identifies serious security and privacy risks associated with the rapid deployment of AI systems, urging the tech industry to develop better defenses and implement secure development practices.

Comment:

From the look of things, it looks like it's going to get worse before it gets better.

you are viewing a single comment's thread
view the rest of the comments
[โ€“] EpicFailGuy@lemmy.world 8 points 10 months ago (1 children)

That's a fair point, but if AI is not better or at least equivalent to a competent human driver.

why are we even allowing it?

"Bad drivers" have rights ... AI doesn't and it creates potential risks to others

[โ€“] henrikx@lemmy.dbzer0.com 1 points 10 months ago

We aren't allowing it.

No doubt that AI which is used for Level 5 autonomy should be trained to detect these situations and make the correct decision. Otherwise they wouldn't be Level 5 systems. This is one of the many reasons why self-driving cars is not a solved issue yet. The systems we use today are either used strictly as a driving aid under close supervision by a human driver or used in small areas that the AI has been already evaluated to perform well in.