this post was submitted on 14 Jun 2024
365 points (98.7% liked)

Technology

59534 readers
3199 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
[–] ricecake@sh.itjust.works 2 points 5 months ago (2 children)

Those aren't contradictory. The Feds have an enormous budget for security, even just the "traditional" security everyone else uses for their systems, not the "offensive security" we think of when we think "federal security agencies". Companies like Amazon, Microsoft, and Cisco will change products, build out large infrastructure, or even share the source code for their systems to persuade the Feds to spend that money. They do this either because they have products that are broadly valuable to the Feds, like AWS, or because they already have security products and services that are demonstrably valuable in the civilian security sector.

OpenAI does not have a security product; they have a security problem. The same security problem as everyone else, and one the NSA is in large part responsible for managing across significant parts of the government.
The government certainly has an interest in AI technology, but OpenAI has productized their models with a different focus. The government has already bought what everyone thinks OpenAI wants to build, and they bought it from Palantir.

So while it's entirely possible that this is a play to open lines of communication to government decision makers for sales purposes, it seems more likely that they're aiming for a message like: "the guy who oversaw security protocols for the military and key government services now oversees our security protocols, so aren't we secure and able to be trusted with your sensitive corporate data?"
If they were aiming for security productization and building ties on that side of things, someone like Krebs would be a better fit, since CISA is better positioned for those ties to turn into early information about product recommendations and the like.

So yeah, both of those statements are true. This is a non-event with bad optics if you're looking for it to be bad.

[–] lemmyvore@feddit.nl 1 points 5 months ago* (last edited 5 months ago) (1 children)

I've always kinda assumed that government, surveillance, and analytics would be OpenAI's main goals, and that the consumer stuff is just for marketing and a good image. There's no money and no point in enabling Jimmy Random to use GPT to find out whether Africa exists, and the commercial applications of the models they produce can be better leveraged differently (black-boxed TPU hardware, for example).

That's also what I assume Google's been doing with all the data they collect. The location data alone, collected from billions of phones, is an analyst's wet dream.

If it turns out they are NOT selling all that data to be mined by evil overlords I'm gonna be disappointed.

[–] ricecake@sh.itjust.works 3 points 5 months ago

Oh, to me it just doesn't remotely look like they're interested in surveillance type stuff or significant analytics.

We're already seeing growing commercial interest in using LLMs for things like replacing graphic designers, which is folly in my opinion, or for building better gateways and interpretive tools for existing knowledge bases or complex UIs, which could potentially have some merit.

ChatGPT isn't the type of model that's helpful for surveillance, because while it could tell you what's happening in a picture, it can't look at a billion sets of tagged GPS coordinates and tell you which one is up to some shenanigans, or look at every bit of video footage from an area and tell you which times depict certain behaviors.

Casting OpenAI, who seem to me to be very clearly making a play for business-to-business knowledge-management AI as a service, as a wannabe player for ominous government work seems like a stretch when we already have clear-cut cases of AI companies doing exactly that and more. Palantir's advertisements openly boast about how they can help your drone kill people more accurately.

I just don't think we need to make OpenAI into Palantir when we already have Palantir, and OpenAI has their own distinct brand of shit they're trying to bring into the world.

Google doesn't benefit by selling their data, they benefit by selling conclusions from their data, or by being able to use the data effectively. If they sell it, people can use the data as often as they want. If they sell the conclusions or impact, they can charge each time.
While the FBI does sometimes buy aggregated location data, they can more easily subpoena the data if they have a specific need, and the NSA can do that without it even being public, directly from the phone company.
The biggest customer doesn't need to pay, so targeting them for sales doesn't fit, whereas knowing where you are and where you go so they can charge Arby's $2 to get you to buy some cheese beef is a solid, recurring revenue stream.

It's a boring dystopia where the second largest surveillance system on the planet is largely focused on giving soap companies an incremental edge in targeted freshness.

[–] exanime@lemmy.today -1 points 5 months ago (1 children)

So you are speculating this is all good and innocent, while I'm speculating they hired this guy to aim their data harvesting in a way the government would pay tons for... Yet your speculation is apparently more valid than mine because, checks notes, reasons

[–] ricecake@sh.itjust.works 1 points 5 months ago

Yes, since neither of us is responsible for hiring people to the OpenAI board of directors, anything we think is speculation.

I suppose you could dismiss the thought and reasoning behind any argument as mere "reasons" to try to minimize it, but that's a weak position to argue from. You might consider justifying your beliefs instead, or saying why you disagree, rather than just "yeah, well, that's just, like, your opinion, man".