BetaDoggo_

joined 1 year ago
[–] BetaDoggo_@lemmy.world 58 points 1 week ago (1 children)

Of course it was political retribution and not the whole unregistered securities and gambling market thing.

[–] BetaDoggo_@lemmy.world 24 points 2 weeks ago (1 children)

More sympathy for squirrels than human beings

[–] BetaDoggo_@lemmy.world 12 points 3 weeks ago (1 children)

Anthropic released an API for the same thing last week.

[–] BetaDoggo_@lemmy.world 133 points 2 months ago

This is actually pretty smart because it switches the context of the action. Most intermediate users avoid clicking random executables by instinct but this is different enough that it doesn't immediately trigger that association and response.

[–] BetaDoggo_@lemmy.world 27 points 2 months ago* (last edited 2 months ago) (2 children)

All signs point to this being a finetune of GPT-4o with additional chain-of-thought steps before the final answer. It has exactly the same pitfalls as the existing model (the 9.11 > 9.8 tokenization error, failing simple riddles, being unable to tell the user they're wrong, etc.). It's still a transformer and it's still next-token prediction. They hide the thought steps to mask this fact and to prevent others from benefiting from all of the finetuning data they paid for.
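The 9.11 > 9.8 mistake can be mimicked with a toy comparison that treats the digits after the decimal point as a separate chunk, the way a tokenizer can split them. This is only a sketch of the failure mode, not a claim about how any model actually computes; the function name is made up for illustration:

```python
def tokenwise_compare(a: str, b: str) -> bool:
    """Naive chunk-by-chunk comparison mimicking the failure mode:
    the fractional parts are compared as standalone integers."""
    a_int, a_frac = a.split(".")
    b_int, b_frac = b.split(".")
    if int(a_int) != int(b_int):
        return int(a_int) > int(b_int)
    # "11" read as the integer 11 beats "8" read as 8 -> wrong answer
    return int(a_frac) > int(b_frac)

print(tokenwise_compare("9.11", "9.8"))  # True  (the model-style error)
print(9.11 > 9.8)                        # False (the correct comparison)
```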

[–] BetaDoggo_@lemmy.world 5 points 2 months ago

The role of biodegradable materials in the next generation of Saw traps

[–] BetaDoggo_@lemmy.world 25 points 2 months ago (7 children)

It's cool but it's more or less just a party trick.

[–] BetaDoggo_@lemmy.world 22 points 2 months ago* (last edited 2 months ago) (1 children)

How many times is this same article going to be written? Model collapse from synthetic data is not a concern at any scale when human data is in the mix. We have entire series of models now trained on mostly synthetic data: https://huggingface.co/docs/transformers/main/model_doc/phi3. When using entirely unassisted outputs, error accumulates with each generation, but this isn't a concern in any real scenario.

[–] BetaDoggo_@lemmy.world 7 points 3 months ago

Based on the pricing they're probably betting most users won't use it. The cheapest API pricing for Flux Dev is 40 images per dollar, or about 10 images a day on an $8-a-month spend. With Pro they'd get half that. This is before considering the cost of the language model.
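Back-of-envelope check of those numbers, using the figures stated above (40 images per dollar, $8/month, a 30-day month — the per-dollar rate is the comment's assumption, not an official price sheet):

```python
images_per_dollar = 40   # cheapest Flux Dev API tier (assumed figure)
monthly_spend = 8        # dollars per month
days = 30

images_per_month = images_per_dollar * monthly_spend  # 40 * 8 = 320
per_day = images_per_month / days                     # ~10.7 images/day
print(f"{per_day:.1f}")      # 10.7 -> "about 10 a day"
print(f"{per_day / 2:.1f}")  # 5.3  -> roughly half at Pro pricing
```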

[–] BetaDoggo_@lemmy.world 4 points 3 months ago

About a dozen methods they could use: https://arxiv.org/pdf/2312.07913v2

[–] BetaDoggo_@lemmy.world 3 points 3 months ago

New record for most buzz words in a headline.

[–] BetaDoggo_@lemmy.world 60 points 3 months ago (1 children)

I feel like they should at least provide them with a laptop if they're going to do unpaid promotion.


First, applicant argues that the mark is not merely descriptive because consumers will not immediately understand what the underlying wording "generative pre-trained transformer" means. The trademark examining attorney is not convinced. The previously and presently attached Internet evidence demonstrates the extensive and pervasive use in applicant's software industry of the acronym "GPT" in connection with software that features similar AI technology with ask and answer functions based on pre-trained data sets; the fact that consumers may not know the underlying words of the acronym does not alter the fact that relevant purchasers are adapted to recognizing that the term "GPT" is commonly used in connection with software to identify a particular type of software that features this AI ask and answer technology. Accordingly, this argument is not persuasive.
