Spedwell

joined 1 year ago
[–] Spedwell@lemmy.world 16 points 4 months ago (1 children)

As the article points out, TSA is using this tech to improve efficiency. Every request for manual verification breaks their flow, requires an agent to come address you, and eats more time. At the very least, you ought not to scan, in the hope that TSA's metrics look poor enough that they decide this tech isn't practical to use.

[–] Spedwell@lemmy.world 35 points 5 months ago (5 children)

I'm curious what issue you see with that? It seems like the project is only accepting unrestricted donations, but is there something suspicious about Shopify that makes its involvement concerning (I don't know much about them)?

[–] Spedwell@lemmy.world 6 points 6 months ago (1 children)

Right concept, except you're off in scale. A MULT instruction would exist in both RISC and CISC processors.

The big difference is that CISC tries to provide instructions to perform much more sophisticated subroutines. This video is a fun look at some of the most absurd ones, to give you an idea.
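To make the scale difference concrete, here's a toy sketch (in Python, since we can't exactly run assembly in a comment): a CISC-style machine might offer one memory-to-memory MULT, while a RISC machine composes the same work from loads, a register multiply, and a store. The instruction names in the comments are illustrative, not any real ISA.

```python
# Toy machine: memory and registers are just dicts.
# The instruction mnemonics in comments are made up for illustration.

def cisc_mult(mem, dst, a, b):
    """CISC style: a single instruction reads both operands from
    memory, multiplies, and writes the result back to memory."""
    mem[dst] = mem[a] * mem[b]          # MULT dst, a, b

def risc_mult(mem, regs, dst, a, b):
    """RISC style: the same work decomposed into four simple,
    register-oriented instructions."""
    regs["r1"] = mem[a]                 # LOAD  r1, a
    regs["r2"] = mem[b]                 # LOAD  r2, b
    regs["r3"] = regs["r1"] * regs["r2"]  # MULT r3, r1, r2
    mem[dst] = regs["r3"]               # STORE r3, dst

mem = {"a": 6, "b": 7, "out": 0}
cisc_mult(mem, "out", "a", "b")
print(mem["out"])  # 42

mem = {"a": 6, "b": 7, "out": 0}
risc_mult(mem, {}, "out", "a", "b")
print(mem["out"])  # 42
```

Same result either way; the difference is how much work a single instruction is allowed to do.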

[–] Spedwell@lemmy.world 2 points 6 months ago

The current assumption made by these companies is that AI training is fair use, and is therefore legal regardless of license. There are still many ongoing court cases over this, but one case was already resolved in favor of the fair use position.

[–] Spedwell@lemmy.world 7 points 6 months ago* (last edited 6 months ago)

There is an episode of Tech Won't Save Us (2024-01-25) discussing how weird the podcasting play was for Spotify. There is essentially no way to monetize podcasts at scale, primarily because podcasts do not have the same degree of platform lock-in as other media types.

Spotify spent the $100 million (or whatever the number was) to get Rogan exclusive, but for essentially every other podcast you can find a free RSS feed with skippable ads. Also their podcast player just outright sucks :/

[–] Spedwell@lemmy.world 6 points 6 months ago

Spin up c/notquitetheonion?

[–] Spedwell@lemmy.world 2 points 6 months ago (5 children)

Errrrm... No. Don't get your philosophy from LessWrong.

Here's the part of the LessWrong page that cites Simulacra and Simulation:

Like “agent”, “simulation” is a generic term referring to a deep and inevitable idea: that what we think of as the real can be run virtually on machines, “produced from miniaturized units, from matrices, memory banks and command models - and with these it can be reproduced an indefinite number of times.”

This last quote does indeed come from Simulacra (you can find it in the third paragraph here), but it appears to have been quoted solely because, when paired with the definition of simulation put forward by the article:

A simulation is the imitation of the operation of a real-world process or system over time.

it seems as though Baudrillard supports the idea that a computer can just simulate any goddamn thing we want it to.

If you are familiar with the actual arguments Baudrillard makes, or simply read the context around that quote, it is obvious that this is misappropriating the text.

[–] Spedwell@lemmy.world 12 points 6 months ago* (last edited 6 months ago)

The reason the article compares to commercial flights is that your everyday reader knows planes' emissions are large. It's a reference point so people can weigh the ecological tradeoff.

"I can emit this much by either (1) operating the global airline network, or (2) running cloud/LLMs." It's a good way to visualize the cost of cloud systems without just citing tons-of-CO2/yr.

Downplaying that by insisting we look at the transportation industry as a whole doesn't strike you as... a little silly? We know transport is expensive; it is moving tons of mass over hundreds of miles. The fact that computer systems even get close is an indication of the sheer scale of energy being poured into them.

[–] Spedwell@lemmy.world 4 points 6 months ago* (last edited 6 months ago)

> concepts embedded in them

> internal model

You used both phrases in this thread, but those are two very different things. It's a stretch to say this research supports the latter.

Yes, LLMs are still next-token generators. That is a descriptive statement about how they operate. They just have embedded knowledge that allows them to generate sometimes meaningful text.
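"Next-token generator" is just a description of the sampling loop: the model maps a context to a distribution over the next token, and generation is repeated sampling from that distribution. A minimal sketch, with a made-up bigram table standing in for a trained network (all the "embedded knowledge" lives in those probabilities):

```python
import random

# Toy bigram table standing in for a model's next-token distribution.
# The table is invented for illustration.
BIGRAMS = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 1.0)],
    "dog": [("ran", 1.0)],
    "sat": [("<end>", 1.0)],
    "ran": [("<end>", 1.0)],
}

def generate(start, max_tokens=10):
    """Autoregressive generation: repeatedly sample the next token
    conditioned on the tokens produced so far."""
    out = [start]
    for _ in range(max_tokens):
        choices = BIGRAMS.get(out[-1])
        if not choices:
            break
        tokens, weights = zip(*choices)
        nxt = random.choices(tokens, weights=weights)[0]
        if nxt == "<end>":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

A real LLM replaces the lookup table with a neural network conditioned on the whole context, but the outer loop is the same: the mechanism is next-token prediction, and whatever "knowledge" exists is whatever got baked into the distribution.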

[–] Spedwell@lemmy.world 7 points 7 months ago

It's not really stupid at all. See the matrix code example from this article: https://spectrum.ieee.org/ai-code-generation-ownership

You can't really know whether the genAI is synthesizing from thousands of inputs or just outright reciting copyrighted code. Not kosher if it's the latter.

[–] Spedwell@lemmy.world 3 points 7 months ago (1 children)

Just curious, where does the Anti Commercial-AI bit come from? The page linked does not include that term in the title or summary, and from what I understand of the legal situation it wouldn't make a difference to explicitly mention AI.

[–] Spedwell@lemmy.world 9 points 7 months ago (6 children)

I get that there are better choices now, but let's not pretend that a straw you blow into is the technological stopping point for limb-free computer control (sorry if that's not actually the best option, it's just the one I'm familiar with). There are plenty of things to trash talk Neuralink about without pretending this technology (or its future form) is meritless.
