FatCrab

[–] FatCrab@lemmy.one 6 points 9 months ago

Keep in mind that this isn't creating 3D volumes at all. While immensely impressive, what this architecture produces is a series of 2D frames.
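
For anyone who wants that distinction made concrete, here is a minimal sketch in NumPy with made-up dimensions. It only illustrates the difference in shape between a stack of 2D frames and a true volumetric representation; it says nothing about the actual architecture.

```python
# Illustrative only: a generated "video" is a stack of 2D frames,
# not a 3D volume. Dimensions below are arbitrary.
import numpy as np

frames = np.zeros((16, 64, 64, 3), dtype=np.uint8)  # (time, height, width, RGB): a sequence of 2D images
volume = np.zeros((64, 64, 64), dtype=np.uint8)     # (x, y, z): an actual voxel grid with depth

print(frames[0].shape)  # (64, 64, 3): one flat RGB frame, no depth axis
print(volume.shape)     # (64, 64, 64): every point in 3D space has a value
```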

[–] FatCrab@lemmy.one 3 points 9 months ago

In the US, you have various fees tied to bringing the patent to issuance that depend on the quality of the application (many applications simply never issue), and those can rack up considerably. Then you have maintenance fees every few years after issuance that increase exponentially.

[–] FatCrab@lemmy.one 6 points 9 months ago

Filing and prosecuting a patent application is already very expensive. Moreover, different entities are charged different rates, ranging from solo inventor/micro entity (75% discount), to small entity (50%), to large/standard entity (0%, of course). I might be a little off on those discounts; it's been a minute since I've had to look directly at it.
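
Purely to illustrate the arithmetic of those entity-size tiers, here is a small sketch. The base fee amounts are hypothetical placeholders, not the real USPTO schedule, and the discount percentages are the ones recalled above rather than verified figures.

```python
# Hypothetical sketch of how entity-size discounts scale pre-issuance fees.
# Base amounts are placeholders; check the current USPTO fee schedule for real numbers.
BASE_FEES = {
    "filing": 300,
    "search": 700,
    "examination": 800,
    "issue": 1200,
}

DISCOUNTS = {
    "solo/micro entity": 0.75,  # the ~75% tier recalled above
    "small entity": 0.50,       # the ~50% tier
    "standard entity": 0.00,    # large entities pay the full amount
}

for entity, discount in DISCOUNTS.items():
    total = sum(fee * (1 - discount) for fee in BASE_FEES.values())
    print(f"{entity:18s} pays roughly ${total:,.0f} in these fees alone")
```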

[–] FatCrab@lemmy.one 4 points 9 months ago

I really hate this reduction of GPT models. Is the model probabilistic? Absolutely. But it isn't simply learning some flat, comprehensible probability of words; it is generating a massively complex conditional probability distribution over word sequences. Humans might largely be said to do the same thing: we make a best guess at the sequence of words we use based on conditional probabilities over a myriad of conditions (including the semantics of the thing we want to say).
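
To show what "conditional" buys you over a flat word frequency, here is a toy sketch in plain Python using a tiny made-up corpus. A real transformer conditions on the entire preceding context through learned parameters rather than a lookup table, but the contrast is the point.

```python
# Toy contrast: unconditional word probability vs. probability conditioned
# on the previous word. Corpus and numbers are illustrative only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Unconditional: P(word) from raw frequency across the whole corpus.
unigram = Counter(corpus)
total = sum(unigram.values())

# Conditional: P(next word | previous word).
bigram = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram[prev][nxt] += 1

def p_next(prev, nxt):
    counts = bigram[prev]
    return counts[nxt] / sum(counts.values()) if counts else 0.0

print(f"P('sat') unconditionally    = {unigram['sat'] / total:.2f}")
print(f"P('sat' | previous = 'cat') = {p_next('cat', 'sat'):.2f}")
print(f"P('sat' | previous = 'the') = {p_next('the', 'sat'):.2f}")
```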

[–] FatCrab@lemmy.one 4 points 9 months ago

The ultimate issue is that the models don't encode the training data in any way that we have historically considered infringement of copyright. This is true for both transformer architectures (GPT) and diffusion ones (most image generators). From a lay perspective, it's probably fair and accurate enough for our purposes to imagine the models themselves as enormous nets that learn vague, muddled impressions of multiple portions of multiple pieces of the training data at arbitrary locations within the net. Now, this may still have IP implications for the outputs, and here music copyright is pretty instructive, albeit very case-by-case. If a piece is too "inspired" by a particular previous work, it may still be regarded as infringing copyright even if it is not explicit copying. But, like I said, this is very case specific and precedent cuts both ways on it.
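
As a very loose analogy for that "muddled impressions spread across the net" picture (and nothing more than an analogy; this is not how a transformer or diffusion model is actually trained), here is a tiny gradient-descent fit in plain Python. Every parameter update mixes gradient signal from all of the examples, so the final parameters summarize the dataset without storing any single example verbatim.

```python
# Loose analogy only: shared parameters absorb blended influence from many
# training examples; no individual example is stored back in them.
import random

random.seed(0)

# Tiny "training set" drawn from an underlying pattern plus noise.
data = [(x, 2.0 * x + 1.0 + random.uniform(-0.1, 0.1)) for x in range(10)]

w, b = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    # Each step averages gradients over *all* examples at once.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# The two learned numbers reflect every example a little, and none exactly.
print(f"learned w={w:.2f}, b={b:.2f} from {len(data)} blended examples")
```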

[–] FatCrab@lemmy.one 1 points 10 months ago

This is actually an effective measure when you sit down and think about it from a policy perspective. Right now, the biggest issue with AI-generated content on the corporate side is that there is no IP right in the generated content. Private enterprise generally doesn't like distributing content that it doesn't have the ability to exercise complete control over. However, distributing generated content without marking it as generated potentially reduces that risk outlay enough to make the value calculus swing in favor of its use, because people will just assume there are rights in the material. Now, if you force this sort of marking, that heavily alters the calculus.

Now people will say, wah wah wah, there's no way to really enforce it, people will lie, etc. But that's true for MOST of our IP laws. Nevertheless, they prove effective at accomplishing many of their aims. The majority of private businesses are not going to intentionally violate regulatory law if they can help it, and when they do, it's more often than not because they thought they'd found a loophole but were wrong. And yes, that's even accounting for and understanding that there are many examples of illegal corporate activity.

[–] FatCrab@lemmy.one 3 points 10 months ago

Food production and transport are famously zero-emission industries.
