masterspace

joined 1 year ago
[–] masterspace@lemmy.ca 28 points 2 months ago

Four bad CEOs is like a grade school level of analysis. You can spend half an hour on Wikipedia and come up with like 18 other patterns that connect the companies.

Do you really think Warren Buffett (or any other serious investor or business analyst) is sitting there counting out the number of bad CEOs on their fingers when making an investment decision?

[–] masterspace@lemmy.ca -1 points 2 months ago* (last edited 2 months ago)

A) An LLM does not inherently sell you anything. Some companies charge you to run and use their LLMs (OpenAI), and some companies publish their LLMs open source for anyone to use (Meta, Microsoft). With neural chips starting to pop up in PCs and phones, pretty soon anyone will be able to run an open source LLM locally on their machine, completely for free.

B) LLMs still rarely regurgitate the exact original source verbatim. This would be more like someone in the back alley putting on their own performance of the movie, morphing and adjusting it in real time based on your prompts and comments, which is a lot closer to parody and fair use than blatant piracy.

[–] masterspace@lemmy.ca 1 points 2 months ago

But we do already have that with many LLMs. Both Meta and Microsoft have been publishing their models and weights open source for anyone to use.
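
For a concrete (if rough) sketch of what "anyone can use" means in practice: with one of the openly published models and an off-the-shelf library, running it locally is a few lines. The specific model and library here are just examples, not the only way to do it.

```python
# Rough sketch: run an openly published, small chat model locally.
# The model id is just an example of an open-weight model; swap in whichever
# open model your hardware can handle.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example open-weight model
    device_map="auto",  # GPU/NPU if you have one, otherwise CPU
)

print(generator("Explain what an LLM is in one sentence.",
                max_new_tokens=60)[0]["generated_text"])
```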

[–] masterspace@lemmy.ca 0 points 2 months ago* (last edited 2 months ago)

I generally agree, but I really think people in this thread are being overly dismissive of how useful LLMs are, just because they're associated with techbros, who are in turn associated with relatively useless stuff like crypto.

I mean, most people still can't run an LLM on their local machine, which vastly limits what developers can use them for. No video game or open source software can really include them in any core feature, because most people can't run them. Give it three years, when every machine has a dedicated neural chip and devs can start using local LLMs that don't require a cloud connection and Azure credits, and you'll start seeing actually interesting and inventive uses of them.

There are still problems with attributing sources of information, but I honestly feel that if every LLM trained on copyrighted data had to be published open source, so that anyone could use it, that would get us far enough that the benefits would outweigh the costs.

[–] masterspace@lemmy.ca 1 points 2 months ago (1 children)

> Users like it because it feels more comfortable, natural and useful, and probably quicker too. And in some cases it is actually better. But it's important to appreciate how we got here ... by the internet becoming shittier, by search engines becoming shittier, all in the pursuit of ad revenue and the corresponding tolerance of SEO slop.

No, it legitimately is better. Do you know what Google could never do but that Copilot Search and Gemini Search can? Synthesize one answer from multiple different sources.

Sometimes the answer to your question inherently isn't on a single page; it's split across the old framework docs, the new framework docs, and Stack Overflow questions, and the best a traditional search engine can ever do is maybe get some of the right pieces in front of you some of the time. An LLM will give you a plain-language answer immediately, and let you ask follow-up questions and request modifications to your original example.

Yes, Google has gotten shitty, but it would never have been able to do the above without an LLM under the hood.
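
To put the "one answer from multiple sources" point in concrete terms, here's a rough sketch. The snippets and model are just placeholders (a real search product does the retrieval for you); the point is that the synthesis step is a single prompt over several sources at once.

```python
# Rough sketch: synthesize one answer from several sources by putting them
# all into a single prompt. Snippets and model are stand-ins for illustration.
from transformers import pipeline

snippets = [
    "[old docs] Set the log level with settings.LOG_LEVEL.",
    "[new docs] LOG_LEVEL now lives in the [logging] section of config.toml.",
    "[stack overflow] Restart the worker after changing the logging config.",
]

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example open-weight model
)

prompt = (
    "Using only the sources below, answer the question and note which "
    "source each part came from.\n"
    + "\n".join(snippets)
    + "\nQuestion: How do I change the log level?\nAnswer:"
)

print(generator(prompt, max_new_tokens=80)[0]["generated_text"])
```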

[–] masterspace@lemmy.ca -4 points 2 months ago

Google has to scrape and process the entire webpage and analyze its content to figure out which links to show you. The links it presents are the end result of it copying and processing all of your site's data, even when you aren't listed in the search results.

[–] masterspace@lemmy.ca 2 points 2 months ago (3 children)

I really think it's mostly about getting a big enough data set to effectively train an LLM.

[–] masterspace@lemmy.ca -3 points 2 months ago (2 children)

Depends on what the function was. If the function was to drive ad revenue to your site, then sure; if the function was to get information out to the public, then it's not replacing that function so much as altering and updating it.

[–] masterspace@lemmy.ca 1 points 2 months ago (2 children)

You'd probably end up back with AI at that point. It's a lot easier to distribute a trained model than an entire web index.

[–] masterspace@lemmy.ca -5 points 2 months ago (4 children)

I appreciate the defense against the blind downvotes, though I can't say I see how FOSS search engines would even be allowed to exist in that case?
