bamboo

joined 1 year ago
[–] bamboo@lemm.ee 19 points 5 months ago

I assume the primary market for this is insurance companies, who salivate at any data they can use to justify a rate hike. Advertisers come second, but they probably wouldn’t pay nearly as much since they have all sorts of other data sources to pick from.

[–] bamboo@lemm.ee 56 points 5 months ago (2 children)

This seems like a win for privacy. Modern cars collect a creepy amount of data, often without the user’s knowledge or any ability to opt out. This article makes it seem like some car manufacturers are no longer selling the data, but I’m not sure how true that is.

[–] bamboo@lemm.ee 1 points 5 months ago

As someone who primarily uses Unix-like systems and develops cross-platform software, I think having Windows as a weird outlier is probably best in the long term. Windows is weird and dumb, but it forces us to consider platform differences more explicitly. If a new operating system becomes popular in the future, all the checks that were implemented for Windows will make it a bit easier to port to newer systems.
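To give a concrete (made-up) example of the kind of check I mean, here’s a sketch of a config-path helper where the Windows branch is the odd one out; the function name and directory layout are just illustrative:

```python
import os
import sys
from pathlib import Path

def default_config_dir(app: str) -> Path:
    """Pick a per-user config directory, treating Windows as the outlier."""
    if sys.platform == "win32":
        # Windows keeps per-user config under %APPDATA%
        return Path(os.environ.get("APPDATA", Path.home())) / app
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Application Support" / app
    # Generic Unix-like fallback (Linux, the BSDs, and whatever comes next)
    return Path.home() / ".config" / app

print(default_config_dir("myapp"))
```

Having that explicit fallback branch is exactly what makes a hypothetical future platform cheaper to support.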

[–] bamboo@lemm.ee 3 points 5 months ago (1 children)

Sure but what about the smartphone in your pocket?

[–] bamboo@lemm.ee 0 points 5 months ago

ChatGPT isn’t gonna replace software engineers anytime soon. It can increase productivity, though, and that’s the value LLMs provide. If someone makes a shitty pull request filled with obvious ChatGPT output, that’s on them and not the technology. Blaming ChatGPT for a programmer’s bad code is like blaming the autocomplete in their editor: just because the editor suggests something doesn’t mean you have to accept it, or should, when it’s wrong.

[–] bamboo@lemm.ee -1 points 5 months ago (1 children)

OpenAI is a non-profit. Further, US tech companies usually take many years to become profitable. It’s called reinvesting revenue; more companies should be doing that instead of stock buybacks.

Let’s suppose hosted LLMs like ChatGPT aren’t financially sustainable and go bust, though. As a user, you can also just run them locally, and as smaller models improve, this is becoming more and more popular. It’s likely how Apple will be integrating LLMs into their devices, at least in part, and Microsoft is going that route with “Copilot+ PCs” that start shipping next week. Integration aside, you can run 70B models today on an overpriced $5k MacBook Pro that are maybe half as useful as ChatGPT. The cost to do so exceeds the cost of a ChatGPT subscription, but to use my numbers from before, a $5k MacBook Pro running Llama 3 70B would only have to save an engineer one hour per week to pay for itself in the first year. In subsequent years only the electricity cost would matter, which for a current-gen MacBook Pro would be about equivalent to the ChatGPT subscription in expensive energy markets like Europe, or half that or less in the US.
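To make that back-of-the-envelope math explicit, here’s a quick sketch using the same rough assumptions as above (the salary, hours, and savings are ballpark figures, not measurements):

```python
# Rough break-even estimate for a $5k laptop running a local 70B model.
# All figures are the ballpark assumptions from this thread, not real data.
hardware_cost = 5_000                 # USD, one-time
engineer_cost_per_year = 200_000      # USD, low end of fully loaded cost
working_hours_per_year = 2_080        # 52 weeks * 40 hours
hourly_cost = engineer_cost_per_year / working_hours_per_year  # ~ $96/hour

hours_saved_per_week = 1
value_saved_first_year = hours_saved_per_week * 52 * hourly_cost  # = $5,000

print(f"Value of 1 saved hour/week over a year: ${value_saved_first_year:,.0f}")
print(f"Covers the ${hardware_cost:,} laptop in year one: {value_saved_first_year >= hardware_cost}")
```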

In short, you can buy overpriced Apple hardware to run your LLMs, pay high energy prices to do it, and it’s still cheap compared to a single engineer: saving one hour per week would still pay for the hardware in the first year.

[–] bamboo@lemm.ee -1 points 5 months ago (5 children)

It can be quite profitable. A ChatGPT subscription is $20/month right now, or $240/year. A software engineer in the US costs between $200k and $1m per year with all benefits and support costs considered. If that $200k engineer can use ChatGPT to save 2.5 hours in a year, the subscription pays for itself.
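The arithmetic behind that, spelled out (using the $200k figure and a standard 2,080-hour work year as assumptions):

```python
subscription_per_year = 20 * 12               # $240
engineer_cost_per_year = 200_000              # low end, fully loaded
hourly_cost = engineer_cost_per_year / 2_080  # ~ $96/hour

break_even_hours = subscription_per_year / hourly_cost
print(f"Break-even: {break_even_hours:.1f} hours saved per year")  # ~ 2.5 hours
```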

[–] bamboo@lemm.ee 22 points 5 months ago (10 children)

I don’t think generative AI is going anywhere anytime soon. The hype will eventually die down, but it’s already proved its usefulness in many tasks.

[–] bamboo@lemm.ee 1 points 5 months ago

If something like that were to work, a lot of effort would need to be put into minimizing the UI friction. I could see something like this: uploaders add topic tags to their videos, and an AI runs in the background to generate and apply new tags based on the content (most people would not understand how to properly tag content). An AI would also be used to build a graph of related tags, where similar or closely related tags are nodes joined by an edge. Then, on first login, the user is prompted to pick some tags to start with. Over time, the client uses the tag adjacency graph to fine-tune the user’s tags, on device. The idea is that we could get a decent algorithm that recommends new stuff based on what the user watches, while keeping the processing of user-specific data local.

The client would also have an option the user could enable to contribute their tag information back to the global tag graph, improving it for everybody. That data could be combined with other users’ data at the instance level to somewhat anonymize it, assuming it is a large multi-user instance. If you were to host a single-user instance, you’d probably not want to contribute to the global tag graph unless you’re OK with your tag preferences being public.
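A toy sketch of what the tag graph and the on-device part could look like; the tags, weights, and function names are all made up for illustration:

```python
from collections import defaultdict

# Global tag graph: edges connect tags that frequently co-occur on videos.
# In practice this would be built server-side from uploader + AI-generated tags.
tag_graph = {
    "woodworking": {"diy", "hand-tools"},
    "diy": {"woodworking", "home-improvement"},
    "hand-tools": {"woodworking"},
    "home-improvement": {"diy"},
    "synthwave": {"music-production"},
    "music-production": {"synthwave"},
}

# Per-user tag weights live only on the client.
user_weights = defaultdict(float)

def record_watch(video_tags, weight=1.0):
    """Locally bump the user's interest in a video's tags and adjacent tags."""
    for tag in video_tags:
        user_weights[tag] += weight
        for neighbor in tag_graph.get(tag, ()):
            user_weights[neighbor] += weight * 0.25  # weaker signal for adjacent tags

def recommend_tags(n=3):
    """Suggest tags to browse next, ranked by locally stored weights."""
    return sorted(user_weights, key=user_weights.get, reverse=True)[:n]

record_watch({"woodworking", "diy"})
record_watch({"hand-tools"})
print(recommend_tags())  # e.g. ['woodworking', 'diy', 'hand-tools']
```

Only the aggregated tag co-occurrence would ever need to leave the device, and only if the user opts in.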

[–] bamboo@lemm.ee 3 points 5 months ago

It’s a bit tricky, because the more data an algorithm has, the better it can be, but I think a privacy-preserving algorithm is possible.

[–] bamboo@lemm.ee 3 points 5 months ago (1 children)

I think the easy discoverability on these platforms is part of what makes them so popular. On TikTok and similar apps, a user typically wants to be shown new things; it maintains a sense of novelty that keeps them constantly engaged. Having to do this manually would be a huge negative.

[–] bamboo@lemm.ee 16 points 5 months ago (8 children)

The algorithms are what make these services. Most interactions aren’t searching for and selecting something specific or intentional; users just open a fire hose and expect the algorithm to pick content they’ll find entertaining. That requires the algorithm to have a lot of information, both about the specific user and about similar users.
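As a toy illustration of why that breadth of data matters, here’s the simplest form of “recommend what similar users watched” (user-based collaborative filtering); all of the data and names are invented:

```python
# Minimal user-based collaborative filtering: recommend what similar users watched.
watch_history = {
    "alice": {"cat-videos", "woodworking", "synthwave"},
    "bob":   {"cat-videos", "woodworking", "cooking"},
    "carol": {"gaming", "speedruns"},
}

def recommend(user, history):
    seen = history[user]
    scores = {}
    for other, items in history.items():
        if other == user:
            continue
        # Jaccard similarity: how much the two watch histories overlap.
        overlap = len(seen & items) / len(seen | items)
        for item in items - seen:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", watch_history))  # ['cooking', ...] -- bob is the most similar user
```

With only a handful of users or sparse histories, the similarity scores are meaningless, which is exactly why these services hoover up so much data.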
