Nevoic

joined 1 year ago
[–] Nevoic@lemm.ee 11 points 5 months ago* (last edited 5 months ago)

Not all gamers are triple A gamers. I'd call myself an avid gamer (I used to easily put in 80-hour weeks gaming; it's almost always lower now, but I'll still go on gaming binges during long vacations or holidays).

The vast, vast majority of my time has been WoW and LoL. I have played other games throughout the years, but usually in the same genres (mmo/moba).

A lot of these games have entry fees below $70. Right now most of my gaming time is Cata Classic, and that requires $15 a month. Over time that will obviously add up, but everything adds up over time, and $15 a month is not prohibitively expensive for most people. Also, it's really only $15 for the first month: just by leveling to max in Cata Classic you earn enough gold to buy a WoW Token, so you can easily pay $0 a month after that by using in-game currency.

[–] Nevoic@lemm.ee 26 points 6 months ago (2 children)

Any chance you have an Nvidia card? Nvidia has long been in a worse spot on Linux than AMD, which interestingly is the inverse of the situation on Windows. A lot of AMD users complain of driver issues on Windows and swap to Nvidia as a result, and the exact opposite happens on Linux.

Nvidia is getting much better on Linux though, and Wayland + explicit sync is coming down the pipeline. With NVK, in a couple of years it's quite possible that the Nvidia and AMD experiences on Linux will be very similar.

[–] Nevoic@lemm.ee 5 points 6 months ago

"they can't learn anything" is too reductive. Try feeding GPT4 a language specification for a language that didn't exist at the time of its training, and then tell it to program in that language given a library that you give it.

It won't do well, but neither would a junior developer in raw vim/nano without compiler/linter feedback. It will roughly construct something that looks like the new language you fed it, despite never having been trained on it. This is something LLMs can in principle do well, so GPT5/6/etc. will do better, perhaps as well as any professional human programmer.

Their context windows have increased many times over. We're no longer operating in the 4/8k range, but in the 128k-1024k range. That's enough context to, from the perspective of an observer, learn an entirely new language and framework and then write something almost usable in them. And 2024 isn't the end for context window size.

With the right tooling (e.g. feeding compiler errors back in and having the LLM reflect on how to fix them), you'd get even more reliability out of modern-day LLMs alone. Make it reliable enough, and it effectively does what we do when we learn.
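To make that concrete, here's a minimal sketch of the kind of compile-and-reflect loop I mean, in Python with the official openai client. The model name, the gcc invocation, and the assumption that the model replies with bare source code (no markdown fences) are all placeholders; a real tool would also run tests, not just the compiler.

```python
import subprocess

from openai import OpenAI  # assumes the official `openai` Python package is installed

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_until_it_compiles(task: str, max_attempts: int = 5) -> str:
    """Ask the model for code, try to compile it, and feed the compiler's
    errors back to the model until it compiles or we give up."""
    messages = [
        {"role": "system",
         "content": "You are a C programmer. Reply with only the contents of a single C source file."},
        {"role": "user", "content": task},
    ]
    for _ in range(max_attempts):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)  # model name is a placeholder
        code = reply.choices[0].message.content

        with open("attempt.c", "w") as f:
            f.write(code)

        # Compile and capture stderr so the model has something to reflect on.
        result = subprocess.run(
            ["gcc", "attempt.c", "-o", "attempt"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return code  # it compiles; hand it off for tests/review

        # Append the failed attempt plus the errors, then loop and retry.
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user",
                         "content": "That didn't compile. Fix these errors:\n" + result.stderr})

    raise RuntimeError("model never produced code that compiles")
```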

So much work in programming isn't novel. You're not making something truly new; you're piecing together work other people did. Even when you write an entirely new library, you're using a language someone else wrote, libraries other people wrote, an editor someone else wrote, and an OS someone else wrote. We're all standing on the shoulders of giants.

[–] Nevoic@lemm.ee 18 points 6 months ago* (last edited 6 months ago) (3 children)

18 months ago, ChatGPT didn't exist. GPT3.5 wasn't publicly available.

At that same point 18 months ago, the iPhone 14 was available. Now we have the iPhone 15.

LLMs/AI are developing much faster than what people are used to, and you really have to keep in perspective how different this tech was 18 months ago. Comparing LLM and smartphone plateaus is just silly at the moment.

Yes, they've been refining the GPT4 model for about a year now, but we've also got major competitors in the space that didn't exist 12 months ago. We got multimodality that didn't exist 12 months ago. Sora is mind-bogglingly realistic and didn't exist 12 months ago.

GPT5 is just a few months away. If 4->5 is anything like 3->4, my career as a programmer will be over within the next 5 years. GPT4 already consistently outperforms the college students I help, and can often match junior developers in terms of reliability (though with far more confidence, which is obviously problematic). I don't think people realize how big a deal that is.

[–] Nevoic@lemm.ee 2 points 7 months ago* (last edited 7 months ago) (1 children)

825,000 chickens per year in the U.S. are accidentally boiled alive or drowned before their intended slaughter (https://animalclock.org/). This isn't prevented because prevention mechanisms cost money; they eat into profits.

It's standard practice for male pigs to have their tails and testicles ripped out without pain relief (https://www.vox.com/future-perfect/23817808/pig-farm-investigation-feedback-immunity-feces-intestines); that link also showcases how people abuse pigs for fun. Objectifying the animals you kill is a coping mechanism: engaging in that much killing is unnatural and unhealthy for humans, and it leads to vastly higher rates of domestic violence and crime because it normalizes violence as a solution.

It's normal for foxes to have their skin ripped off while they're still alive. Birds have their beaks ripped off so they can't kill each other in distress: given the lack of space, they go literally insane, abandon normal social hierarchies, and simply start trying to kill each other (http://www.nationearth.com/).

I understand that ignorance of how horrible the conditions are is a normal part of how humans justify our atrocities. What always baffles me, though, is that people who appear genuinely concerned about animal welfare can be so absurdly uninformed about the practices they directly support with their purchases, while criticizing practices they have absolutely no influence over on the other side of the planet.

[–] Nevoic@lemm.ee 1 points 7 months ago (3 children)

What's the cat abuse situation over there? Is it worse than our pig/cow/chicken abuse situation?

[–] Nevoic@lemm.ee 0 points 7 months ago* (last edited 7 months ago) (1 children)

Depends on what you're looking for. I had a high-paying tech job (layoffs op), and I wanted a fun car that accelerates fast but is also a good daily driver. I was in the ~60k price range, so I was looking at things like the Corvette Stingray, but that car comes with too many compromises for daily driving.

The Model 3 accelerates faster 0-30 and matches it 0-60. Off the line it feels way snappier and more responsive because it's electric, and the battery gives it a lower center of gravity, so it's remarkably good at cornering for a sedan, closer to a sports car than a typical sedan in that respect.

Those aren't normally considerations for people trying to find a good value commuter car, so you would literally just ignore all those advantages. Yet people don't criticize Corvette owners for not choosing a Hyundai lol

On the daily-driving front, the Tesla wins out massively over other high-performance cars in that price range: charging at home, never going to a gas station, best-in-class driving automation/assistance software, a simple interior with good control-panel software, and one-pedal driving with regen braking.

If you're in the 40k price range for a daily commuter, your criteria will be totally different, and I'm not well versed enough in the normal considerations of that price tier and category to speak confidently about the best value. Tesla does, however, at the very least have a niche in the high-performance sedan market.

[–] Nevoic@lemm.ee 0 points 7 months ago* (last edited 7 months ago)

Like sure, fuck Elon, but why do you think FSD is unsafe? They publish the accident rate, and it's lower than the national average.

There are times when it will fuck up; I've experienced this. However, there are also times when it sees something I physically can't because of blind spots or the car's pillars.

Having the car drive while you intervene is statistically safer than the national average. You could argue the inverse is better (you drive and the car intervenes), but I'd argue that system would be far worse, as you'd be relinquishing final say to the computer, and we don't have a legal system set up for that, regardless of how good the software is (e.g. you're still responsible as the driver).

You can call it a marketing term, but in reality it normally can and does drive point to point with no interventions. The places it does fuck up are consistent fuckups (e.g. bad road markings that convey the wrong thing, which you only know because you've driven that road thousands of times). It's not human, but it's far more consistent than a human in both the ways it succeeds and the ways it fails. If you learn those patterns, you can spend more time paying attention to what other drivers are doing and to novel dangers (people, animals, etc.) and less time on trivial things like mechanically staying between two lines or adjusting your speed. Looking in your blind spot or off to the side isn't nearly as dangerous, for example, so you can take in more information.

[–] Nevoic@lemm.ee 0 points 7 months ago* (last edited 7 months ago)

You also don't know what is meant by "junior-level LLM". A junior-level LLM would, by definition, require the same level of code review as a junior developer. You have this weird human bias that doesn't actually exist in capitalism. Capitalists don't prefer to watch people grow or develop; they want consistent, scalable output. They want the ability to throw money at something and get more output per dollar spent. Coding is notoriously difficult to scale: you get diminishing returns as you coordinate more and more people.

LLMs and AI in general are different in that they scale vertically (from a consumer's perspective). Buy more API credits and you can make more requests to a model with a larger context window and more accuracy. It's a capitalist's wet dream. Division of labor and reducing the complexity of specific jobs has been the goal of capitalists forever.

This is why we went from tailoring as a profession to repetitive factory work. Anyone can do a factory job with no ramp-up time, meaning there's a massive labor market to drive down the cost of the labor. It's worse from the worker's perspective, but better for the capitalist, and that's all that matters.

Capitalists don't understand and don't care about code quality. If you've ever worked a corporate job you've felt this friction, the constant battle between developers who care about quality output and capitalist stakeholders who care about quantity and speed. AIs already blow humans out of the water in terms of quantity and speed.

Even if 2024 is the end-all be-all of AI, even if we literally never have another breakthrough and it doesn't improve at all (nobody in the world actually believes this, including you), the current LLMs will still radically transform the way capitalists interact with coders. A ton of simple, junior-level contracting work will be gone: the super small businesses that barely had enough to hire a developer to begin with will strongly prefer 20x the output at 75% of the quality of a junior developer for 1/100th of the money.

It just takes time for capitalists to react to technological changes, and for easier ways to interact with AI to become mainstream (e.g. Dave vs the GPT API). I have used AutoGPT before, which is the same idea: it can build a simple web app on its own from a single English prompt. Often that's all small contracts need, some dumb/simple CRUD app. It takes about $1.00 of GPT4 API tokens to put together a blogging site with SSO login, a Postgres DB, a React frontend, a backend, and a basic Bootstrap UI. That would've been something like 10 hours for a junior at $20 an hour, so roughly $200 for a human versus $1 for an AI to do it in 30 minutes. With modern-day 2024 technology.

[–] Nevoic@lemm.ee 0 points 7 months ago* (last edited 7 months ago)

You're missing the entire point I was making by attacking the veracity of some of the tests and metrics we use to measure aptitude. By every measure we have, LLMs perform at a college level (or higher). Maybe you disagree with those measures; cool. That's completely irrelevant to the actual point I was making.

YOU made the claim. You said "there will likely always be a need for coders". You have no idea how the burden of proof works. You made a claim, I said there was a lack of evidence for it. You can't then turn around and say "you lack evidence to reject the claim I made without evidence!"

The irony of this conversation is that any top-level LLM (Opus, GPT4, Gemini Advanced) wouldn't have made such a rudimentary error in logic. Without even getting into the discussion of whether it "understands" what it's saying, functionally it wouldn't have strung together the same incoherent message you put together.

[–] Nevoic@lemm.ee -1 points 7 months ago (3 children)

This is a non-sequitur. You can't go from "as it stands right now AI is nowhere near good enough [...]" (true statement) to "there will likely always be a need for coders in some form or fashion" (citation needed).

To get to the latter, you need the claim "AI will never perform at a human level with respect to programming or general intelligence". We have no evidence for that claim. Maybe LLMs aren't the right architecture for achieving AGI, but we already have LLMs performing at a human college level on most tasks.

A lot of people have this perspective that AI needs to be perfect to replace humans, but that's not true. It doesn't even necessarily need to match the best of us: if you can hire a fleet of 128 junior-dev-level AIs for the cost of one human junior dev, why would you ever hire a junior dev?

Once juniors can't find positions, only hobbyists will grow into senior devs, so there'll be a massive reduction in senior developers worldwide. Plus, there's no sign that junior dev is the ceiling for LLM-based AIs, nor that LLMs are the absolute end of what AI has to offer.

[–] Nevoic@lemm.ee 1 points 7 months ago

Socialists use "work" and "labor" to describe different things. Work is the set of actions a worker is coerced into by capitalists to align with the interests of capital. Labor can be something you engage in as part of work, but that's not always the case; sometimes people have jobs that are so inefficient or bullshit that they literally don't labor at all at work (read Bullshit Jobs).

Labor is necessary (currently); work is not. Aligning with the interests of capital is not synonymous with the interests of humanity (think ad work, which literally encourages greater consumption, especially of harmful products like tobacco/alcohol/sugar; most Western countries now ban tobacco advertising but still let advertising in general flourish).

On the topic of feeding everyone: it would no doubt have been logistically difficult in the 1600s. Now we have a massive international trade system; I can easily get huge quantities of goods shipped from the other side of the world in weeks, or months at worst. We also currently produce enough food to feed 12 billion people, and that's with our incredibly inefficient system of converting edible plant matter (mostly soy) into animals.

The issue is that under capitalism, poor people don't deserve to eat. If they lack money, they're better off dead than alive and consuming resources without paying for them, and that's how the global capitalist system behaves: it already moves more than enough food over great enough distances to feed everyone. It just moves it to the rich countries, where obesity is a massive issue, instead of to the global south, because people in rich countries have the money to pay for food and so deserve to live (and overeat/waste food), while people born in Africa deserve death.

Capitalists often lose sight of what an economy is for. An economy isn't valuable in and of itself; it's about setting up incentives and systems to benefit humanity. Capitalism fails to do this in every way that is uniquely capitalist. Anything it does right is attributable to the general functioning of markets, which existed before capitalism and can exist after capitalism (market socialism is a real thing). There are problems with markets, no doubt, but capitalism really has no redeeming qualities when compared to market socialism. Compared to feudalism, it does do better at mobilizing productive forces, though at massive detriment to workers.
