maegul

joined 2 years ago
[–] maegul@lemmy.ml 33 points 3 months ago (2 children)

Yea, this highlights a fundamental tension I think: sometimes, perhaps oftentimes, the point of doing something is the doing itself, not the result.

Tech is hyper-focused on removing the "doing" and reproducing the result. Now that it's trying to put itself into the "thinking" part of human work, this tension is making itself unavoidable.

I think we can all take it as a given that we don't want to hand total control to machines, simply because of accountability issues. Which means we want a human "in the loop" to ensure things stay sensible. But the ability of that human to keep things sensible requires skills, experience and insight. And all of the focus our education system now has on grades and certificates has led us astray into thinking that practice and experience don't mean that much. In a way, the labour market and employers are relevant here in their insistence on experience (to the point of absurdity sometimes).

Bottom line is that we humans are doing machines, and we learn through practice and experience, in ways that I suspect are much closer to building intuitions. Being stuck on a problem, being confused and getting things wrong are all part of this experience. Making it easier to get the right answer is not making education better. LLMs likely have no good role to play in education, and I wouldn't be surprised if an outright ban, in what may become a harshly fought battle, isn't too far away.

All that being said, I also think LLMs raise questions about what it is we're doing with our education and tests and whether the simple response to their existence is to conclude that anything an LLM can easily do well isn't worth assessing. Of course, as I've said above, that's likely manifestly rubbish ... building up an intelligent and capable human likely requires getting them to do things an LLM could easily do. But the question still stands I think about whether we need to also find a way to focus more on the less mechanical parts of human intelligence and education.

[–] maegul@lemmy.ml 3 points 3 months ago

Sure, but IME it is very far from doing the things that good, well-written and informed human content can do, especially once we're talking about forums and the like, where you can have good conversations with informed people about your problem.

IMO, whatever LLMs are doing that older systems can't isn't greater than what was lost to SEO'd, ads-driven slop and shitty search.

Moreover, the business interest of LLM companies is clearly in dominating and controlling (as that's just capitalism and the "smart" thing to do), which means the older human-driven system of information sharing and problem solving is vulnerable to being severely threatened and destroyed ... while we could just as well enjoy some hybridised system. But because profit is the focus, and the means of making profit are problematic, we're in rough waters that I don't think can be trusted to create a net positive (and haven't been trustworthy for decades now).

[–] maegul@lemmy.ml 2 points 3 months ago (2 children)

I really think it’s mostly about getting a big enough data set to effectively train an LLM.

I mean, yes of course. But I don't think there's any way in which it is just about that. Because the business model around having and providing services around LLMs is to supplant the data that's been trained on and the services that created that data. What other business model could there be?

In the case of Google's AI alongside its search engine, and even ChatGPT itself, this is clearly one of the use cases that has emerged and is actually working relatively well: replacing the internet search engine and giving users "answers" directly.

Users like it because it feels more comfortable, natural and useful, and is probably quicker too. And in some cases it is actually better. But it's important to appreciate how we got here ... by the internet becoming shitter, and by search engines becoming shitter, all in the pursuit of ads revenue and the corresponding tolerance of SEO slop.

IMO, to ignore the "carnivorous" dynamics here, which I think clearly go beyond ordinary capitalism and innovation, is to miss the forest for the trees. Somewhat sadly, this tech era (roughly Windows 95 to now) has taught people that the latest new thing must be a good idea and that we should all get on board before it's too late.

[–] maegul@lemmy.ml 4 points 3 months ago* (last edited 3 months ago) (4 children)

I mean, their goal and service is to get you to the actual web page someone else made.

What made Google so desirable when it started was that it did an excellent job of getting you to the desired web page and off of google as quickly as possible. The prevailing model at the time was to keep users on the page for as long as possible by creating big messy "everything portals".

When Google dropped, with its simple search field and high-quality results, it took off. Of course, they're now more like their original competitors than their original successful self ... but that's a lesson for us about what capitalistic success actually ends up being about.

The whole AI business model of replacing the internet by eating it up for free is the full Sith Lord version of the old portal idea. Whatever you think about copyright, the bottom line is that the deeper phenomenon isn't just about "stealing" content; it's about eating it to feed a bigger creature that no one else can defeat.

[–] maegul@lemmy.ml 7 points 3 months ago (1 children)

IMO, app developers in general are lacking imagination or ambition over ideas like this. I've even suggested it directly to a developer of a popular Mastodon app, who was entertaining the idea of making a Lemmy app … and they said they couldn't see how it would work.

[–] maegul@lemmy.ml 2 points 3 months ago

I can see this argument, at least in general. As for community mods, I feel like it'd be fruitful and useful for them to be, and feel, empowered to create their own spaces. While I totally hear your argument about the size of the "mod" layer being too large to be trustworthy, I feel like some other mitigating mechanisms might help. Maybe the idea of a "senior" mod, of which any community can have only one? Maybe "earning" seniority through being on the platform for a long time, or something; not sure. But generally, I think enabling mods to moderate effectively is a good idea.

[–] maegul@lemmy.ml 6 points 4 months ago (3 children)

Yea, which is why I think the obvious solution to the whole vote visibility question is to have private votes that are visible to admins and mods for moderation purposes. It seems like the right balance.

[–] maegul@lemmy.ml 10 points 4 months ago (2 children)

If public voting data becomes a thing across the threadiverse, as some lemmy people want.

Which is why I think the appropriate balance is private votes visible to admins/mods.

[–] maegul@lemmy.ml 10 points 4 months ago* (last edited 4 months ago) (2 children)

I feel like some software/platform features that encourage and foster more community-building and discussion could go a long way.

Some quick thoughts:

  • User-specific multi-communities
  • Being able to get notifications for certain events or activities (incl. special notifications from a community, or ongoing discussion in a thread)
  • Opt-in post visibility, such as excluding a post from the All/Local feeds (similar to the private communities feature coming to Lemmy)
  • Perhaps controversial ... but expanding in a quasi-blogging direction, where people can have their own personal communities into which only they can post; a little like microblogging, but more like an actual blog given the character limit here. Along with multi-communities, it could be quite a nice complement and allow communities to evolve around people with interesting ideas/thoughts.

[–] maegul@lemmy.ml 3 points 4 months ago

you just need a user on any of the services that show votes publicly.

Well if that sort of thing started happening, I think it’d be reasonable to have that person blocked, banned or defederated.

IMO, what’s possible doesn’t need to dictate what’s easy and common, especially when there are balances and countermeasures involved.

[–] maegul@lemmy.ml 2 points 4 months ago (1 children)

Yea ... that makes sense. Thanks!

Still ... intuitively it feels like, if the "threadiverse" platforms weren't so concerned with interoperating with the likes of microblogging platforms, they could come up with a system that involved only sharing total vote numbers from their instance, without any identifying metadata.
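Roughly the shape of what I mean, as a toy sketch (nothing like the actual Lemmy/ActivityPub vocabulary; the names here are made up for illustration): each instance tallies its own local votes and only the counts ever leave the instance.

```python
# Toy sketch of "aggregate before federating": per-user vote records stay
# local; only anonymous totals are shared with other instances.
# All names here are hypothetical, not real Lemmy or ActivityPub API.

def aggregate_votes(local_votes):
    """local_votes: list of (actor_id, score) pairs, score being +1 or -1."""
    upvotes = sum(1 for _, score in local_votes if score > 0)
    downvotes = sum(1 for _, score in local_votes if score < 0)
    # Only the counts are returned; actor_ids never leave the instance.
    return {"upvotes": upvotes, "downvotes": downvotes}

votes = [
    ("alice@example.social", 1),
    ("bob@example.social", 1),
    ("carol@example.social", -1),
]
payload = aggregate_votes(votes)
print(payload)  # {'upvotes': 2, 'downvotes': 1}
```

The trade-off, as I understand it, is that remote instances then can't deduplicate or retract individual votes, which is presumably part of why the current protocols federate them per-actor.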

[–] maegul@lemmy.ml 8 points 4 months ago (3 children)

I mean, this starts to get moot if no one is aware, doesn't it? You might dismiss the design as mere artificial obscurity, but if no one is pulling up the data, then the obscurity is working. The "curtain" you cite isn't trivial for the vast majority of users, which is what this is all about. Starting an instance and extracting the desired data is a pretty tall hurdle, where the effort alone is prohibitive and enough to give someone a chance to calm down.
