this post was submitted on 18 Aug 2024
53 points (87.3% liked)
Technology
you are viewing a single comment's thread
I mean, this whole "article" is an opinion piece. Some of the opinions I even agreed with, but there's a lot of "I think" in a lot of the paragraphs written here.
That's the case for all predictions about the future of AI. They are all just opinions, some more informed than others. The author clearly cites some of the things informing their opinion in this case, so I'm not sure what your problem is there. I'm also not sure why you put "article" in quotation marks; this is clearly an article. Your comment seems like a lazy attempt to discredit the piece and shut down discussion without bothering to explain any of your real problems with it.
They probably just dislike Vox.
No, I'm not a fan of Vox. But on the other hand, this particular article might as well have been a random blog post. I don't necessarily disagree that people calling generative AI a bust are jumping the gun, and I even kind of agree that using these generative models in applications that solve problems is going to take time. But I don't agree that just because a bunch of users have fixated on the new shiny thing, it will have staying power or achieve a level of usefulness that translates to long-term profitability.
But mostly I take exception to an article positing itself as factual while starting every paragraph with "I think".
Where did it do that? The author writes in first person throughout; it is clearly an opinion piece.
Because this article is posited (with its title and the little blurb at the top about the author) to be about the safety of AI. The author doesn't talk about what safety regulations exist. They don't talk about what safety apparatuses are being proposed or which ones have already been developed. There's no conclusion here.
When you read a newspaper, there is generally a section for opinion pieces and editorials. Several groups are pushing for clear and concise labeling of editorials, opinion pieces, and news pieces, specifically because there's so much misinformation going around.
But really. What is the point of posting an opinion piece to a community where we share tech news, when it's not even valuable in its opinions? What is there to discuss here? That shareholders and consumers should view AI safety legislation or safety protocols differently because they affect those two parties differently? We already knew that.
Unless the title and blurb have changed, this is just wrong.
The title says nothing about safety: "How AI’s booms and busts are a distraction - However current companies do financially, the big AI safety challenges remain."
Likewise the blurb says nothing about safety: "Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter."
What are you going on about? You're mad because you couldn't tell this was an op-ed?
(Sidenote: I didn't notice that "effective altruism" thing before. Barf.)
The blurb suggests that this person writes specifically altruist articles (a suggestion that this is for someone's benefit, which by proxy suggests that it's telling the truth). Because opinions are subjective, that conflicts with the context of the piece pretty harshly. It gives the idea that it may in some way be an opinion based on fact, when it simply isn't, because it cites no quantifiable factual data whatsoever. This is literally how misinformation is spread. It doesn't have to be outright lies in order to be damaging.
The article talks about how new safety measures could be developed. It's in the text. It just doesn't conclude anything or talk about any specifics. That's really my problem with it. What good is the opinion of the author? What are they basing this opinion on? There's no substance to this writing at all.
This is also an opinion from you. Where's your citation to support this statement? How do we know you're not contributing to misinformation here?
Possibly because you read the article. But whatever I guess. It is just my opinion, after all.
lol what?
There's no way to write an article with that title and not have it be an opinion.
When someone starts every paragraph with "I think", they're not positing themselves as factual.
And? That's not really helpful. It adds nothing to the conversation about generative AI. It is a list of opinions, and they're based on seemingly nothing. You're arguing with me about whether this is an opinion piece when it's obviously an opinion piece, because it doesn't validate itself in any way. There's literally nothing to discuss here.