FaceDeer

joined 8 months ago
[–] FaceDeer@fedia.io -3 points 8 months ago (15 children)

It isn't a "shadow"; its current market cap is near its historical high point right now.

If you have no use for it then by all means ignore it. But calling it a bubble that has popped is simply factually inaccurate. Other people evidently find value in it.

[–] FaceDeer@fedia.io 0 points 8 months ago

> That wasn't me.

Apologies, you're right. I took care to double check the wording but neglected to spot the different username.

> Hasn't it already been ruled that LLM outputs cannot be copyrighted, or was that just patents and I'm misremembering?

No, there have been a lot of misleading news articles with headlines like that but nothing like that has been decided in any jurisdictions that I'm aware of.

The most popular news story to get headlines like that is the Thaler v. Perlmutter case; if you do a Google search for it you'll find an endless stream of "U.S. Court holds that AI-generated works cannot be copyrighted" headlines. But that's not remotely what the case was actually about. What actually happened was that Thaler generated an image using an AI and then went to the U.S. Copyright Office to register the copyright *in the name of the program that generated it*. That is, he went to the Copyright Office and told them "the copyright for this work is solely held by the computer that generated it. Nobody else was involved in its creation." The Copyright Office responded "that's silly, copyright can only be held by a person (human or corporate). A computer is not a person." Since the list of copyright holders Thaler was claiming was therefore empty, the Copyright Office ruled that the work must be in the public domain.

Thaler sued, and in the subsequent court case he tried to add himself to the list of copyright-holders. The judge said "no, that's not what this suit is about, knock it off. You told the copyright office you didn't hold a copyright to that work, and as a result their ruling that the work was uncopyrighted was correct."

If he'd tried to claim copyright for himself from the start there wouldn't have been any problem. There have been other instances where humans have registered copyrights for works that they used an AI to generate. The only reason Thaler failed was that he specifically and explicitly said he wasn't claiming copyright over it. This has unfortunately turned into one of those "suing McDonald's for making their coffee hot" semi-urban legends.

And even if a U.S. court did make a ruling along those lines, the U.S. isn't the whole world. There are plenty of countries out there that would be happy to take the lead instead if the U.S. decided it didn't want to be supportive of local AI-driven industry.

> Ah yes, because rolling your own unreliable text generator is so much less expensive. XD

It really is. I run LLMs on my home computer myself, for fun, using a commodity graphics card. The models it can run don't quite reach ChatGPT's level of sophistication but they're close, and they have the advantage that I can control them much more precisely to perform the tasks that I want them to perform. If I wanted to use a more sophisticated open model there are cloud providers that could run it for me for pennies, I just like having the hardware completely under my control.
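As a rough illustration of how simple the local setup described above can be: many local LLM runtimes (llama.cpp's server, Ollama, and others) expose an OpenAI-compatible HTTP endpoint, so querying a model running on your own graphics card is just a small HTTP request. The endpoint URL and model name below are assumptions for illustration, not details from the comment.

```python
# Hypothetical sketch: querying a locally hosted open-weight model through an
# OpenAI-compatible chat endpoint (as served by e.g. llama.cpp or Ollama).
# The default URL and model name here are placeholders, not real settings.
import json
import urllib.request

def build_request(prompt: str,
                  model: str = "local-model",
                  url: str = "http://localhost:11434/v1/chat/completions"):
    """Construct the HTTP request a local chat-completion server expects."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With a server actually running, you would send it like this:
#   with urllib.request.urlopen(build_request("Summarize this:...")) as resp:
#       reply = json.load(resp)["choices"][0]["message"]["content"]
```

Nothing leaves your machine, which is the "completely under my control" part: the same request shape works whether the model is a small local one or a larger open-weight model on a cloud host.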

[–] FaceDeer@fedia.io 2 points 8 months ago* (last edited 8 months ago) (2 children)

You've switched from saying that a cryptocurrency's "primary purpose" is as a currency for transactions to saying that they're securities; those are not remotely similar things.

Anyway. Are you aware that, assuming the Gartner hype cycle actually does apply here (it's not universal) and AI is really in the "trough of disillusionment", beyond that phase lies the "slope of enlightenment" wherein the technology settles into long-term usage? I feel like you're tossing terminology around in this discussion without knowing a lot about what it actually means.

> No, it can't, because it isn't and cannot be made trustworthy. If you need a human to review the output for hallucinations then you might as well save yourself the licensing costs and let the human do the work in the first place.

If you think it can't replace anyone then why say "It can't replace anyone senior"?

Also, what licensing costs? Some AI providers charge service fees for using them, but as far as I'm aware none of them claim copyright over the output of LLMs. And there are open-weight LLMs you can run yourself on your own computer if you want complete independence.

[–] FaceDeer@fedia.io -5 points 8 months ago (20 children)

I'm just pointing out that they're still there. If it's a scam then at this point it's one of history's biggest and longest-running.

And whether any particular cryptocurrency qualifies as a security in any particular jurisdiction is a complicated question; some do and some don't. This discussion is about cryptocurrency as a whole, so calling them an unlicensed security would not be accurate.

[–] FaceDeer@fedia.io -3 points 8 months ago (22 children)

They are, though. The total market cap across cryptocurrencies right now is about $2.75 trillion.

[–] FaceDeer@fedia.io 0 points 8 months ago* (last edited 8 months ago) (4 children)

Why do you think that's its primary purpose? It has lots of uses. The point is that it's doing fine, it hasn't "gone away." And if you need a non-volatile cryptocurrency for some purpose there are a variety of stablecoins designed to meet that need.

> AI can't reach its promised capability of doing everything for us automatically

Your criterion for a "bubble popping" is that the technology doesn't grow to completely dominate the world and do everything that anyone has ever imagined it could do? That's a pretty extreme bar to hold it to; I don't know of any technology that's passed it.

> It's just advanced Clippy and autocomplete. It can't replace anyone senior.

So it can replace people lower than "senior"? That's still quite revolutionary.

When spreadsheet and word processing programs became commonplace whole classes of jobs ceased to exist. Low-level accountants, secretarial pools, and so forth. Those jobs never came back because the spreadsheet and word processing programs are still working fine in that role to this day. AI's going to do the same to another swath of jobs. Dismissing it as "just advanced Clippy" isn't going to stop it from taking those jobs, it's only going to make the people who were replaced by it feel a little worse about what they previously did for a living.

[–] FaceDeer@fedia.io -3 points 8 months ago* (last edited 8 months ago) (30 children)

> But this one definitely is.

Such confidence. Why do you think so?

Many of the shifts that have happened in the economy are a result of capabilities that existing AI models actually demonstrably have right now, rather than anticipation of future developments. Even if no further developments happen those existing capabilities aren't going to just "go away" again somehow.

Also worth noting, blockchains are still around and are doing just fine.

[–] FaceDeer@fedia.io 6 points 8 months ago (32 children)

Not every new technology or shift in the economy is a "bubble" that's inevitably going to "pop" someday.

[–] FaceDeer@fedia.io 4 points 8 months ago

The term "Artificial Intelligence" is an umbrella term for a wide range of algorithms and techniques that have been in use by the scientific and engineering communities for over half a century. The term was brought into use by the Dartmouth workshop in 1956. It's perfectly applicable to LLMs and other similar generative algorithms being used today, and to many less sophisticated ones as well. "Artificial general intelligence" is a subset of AI.

[–] FaceDeer@fedia.io -4 points 8 months ago

Ah, irony. It's common for people to say "AI art generators are just collage machines, copying and pasting bits of existing images together, unable to generate anything novel." I guess there's no intelligence there either, they're just parroting each other.

[–] FaceDeer@fedia.io 8 points 8 months ago* (last edited 8 months ago) (2 children)

No, the goal posts of "AI is evil and should be fought until these concerns are resolved."

The question of whether training an AI even violates copyright in the first place is still unanswered, BTW; the various court cases addressing it are still in progress. The current target is about "ethics", which are vague enough that anyone can claim they're being violated without having to go to the hassle of proving it.

[–] FaceDeer@fedia.io 8 points 8 months ago (4 children)

I wonder where the goalposts will shift to next.
