This is allegedly it: https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba9c
“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.
This tracks with what I have seen regarding AI. It looks superficially awesome, but when you start to analyze its output it has a lot of holes that require someone trained in the art to fix. You know, someone with years of experience, and who got that experience without the benefit of AI shortcuts.
What happens 10 or 15 years from now, when all the current crop of experts are retired and all the experts who could have curated the AI output had to spend all that time as baristas instead because the AI took all of their entry level jobs?
Lol, capitalism & CEO rule 1: only think about the next quarter profits, fuck the future, I've already made my money
It's already happening. I'm looking to "retire" this year (which essentially means I'm just quitting this bullshit; I can't deal with it anymore). I've been doing consulting/contracting dev work for the past several years, and about 2 years ago I pivoted from that to essentially doing code review of AI slop for my various clients. It's always the same song and dance of "this is why your fancy new AI-produced crap doesn't scale, this is why there are exploits, this is how you fix it with real devs, yadda yadda yadda". I was naive and hoped I could make a difference, that these startups and small tech houses would get the picture and pivot back to utilizing actual devs, hire people back, and what have you. But none of them have. So I'm getting paid and wasting my time talking to CTOs and upper managers, and I might as well be talking to a brick wall. They're all going to continue to ride this AI train until the wheels come off, and even when the carriage is missing all the wheels they'll try to push it along down the tracks.
It's hopeless. I've given up. I know it's not something unemployed or under-employed devs want to hear, but this is the conclusion I've come to within the past couple of months. I hate this industry now. Absolutely hate it. I might just focus on FOSS stuff, contribute to random projects or start maintaining some, and call it a day. But the passion for coding and anything tech related has been sucked dry from me thanks to LLMs and AI.
I would just keep cashing those checks....
Do you see viable career paths for people without a formal degree who love technology and computer science and are interested in getting into it, or is that just a dead-end pipe dream?
There is a distinct possibility that when people like the guy you were talking to "retire" or get forced out, these corps will hire you (a person without the degree but with the passion to do the job). In pretty much all cases you should assume that they will be taking advantage of you in any way they can, including by looking to use your much cheaper labor to fill the holes the other guy left when they retired.
They will pay you a fraction of what they paid him regardless of your skill. They will avoid any and all training so they don't have to increase your pay. They will try to force you to use AI rather than building the skills you will need to progress in a career like this. And when you give pushback they will force you out, either by outright firing you or by making things so miserable you abandon your job. They will continue to do this to anyone they can get on the hook, probably even after the bubble pops (using local models instead). They don't want knowledgeable humans, because those people can ask for what they want and will advocate for it. Those people know what they are worth. They want patsies.
It's not a dead-end pipe dream, but you need to know what to expect. I have been in the industry for about a decade and have watched it evolve in recent years. The way I see it now, the developer's job is completely different than it was before. Many corporations (or at least those I worked for) try to embrace AI as much as possible and think it will take over many domains, but usually it boils down to generating more code. It's expected of me to deliver more, so most days I generate stuff just like any developer in the company, and that doesn't require much skill. But when the shit hits the fan (and it does constantly with so much "vibe coding"), my expertise is necessary, as I am able to pinpoint issues, quickly investigate, and ensure the hole is actually filled (and not just covered). In day-to-day work, though, I think we've lost the most fun part (coding) while turning up all the bullshit (more meetings, shitty documentation, more code reviews where some devs don't even self-review). The project managers I had the displeasure to work for were the biggest AI embracers, using it to generate superfluous and bloated plans, docs, and acceptance criteria which are unnecessarily verbose, filled with errors, misleading info, and straight-up garbage. And now devs need to untangle all this mess.
Tl;dr - I vibe for work and code for passion. I hope this passion will keep me employable.
I wish you good luck in your journey.
Depends on who is hiring.
I'm in the role you are describing. I can't code, but I'm good at troubleshooting, and if required I can read code.
I would much prefer to work alongside someone who spent high school tinkering with game mods than someone with a CS degree, as troubleshooting requires a specific skillset that is developed better by breaking and fixing things than by learning the fundamentals of how computers work or best practices for coding.
That said, if you wanna work for an OEM doing actual chip design or engineering and stuff, you're prolly gonna need that degree.
Appreciate the feedback. "Learn to code" was pushed for so long, and now "coding is dead" is the new vibe, but I'm glad to hear there may still be options for people like me out there.
Going to continue to hone my skills and work my unrelated job as long as it lasts.
Find a niche where you are appreciated. If you're brought on as one of an army of thousands for "the next big thing", you're much more likely to be part of the next wave of layoffs too.
I didn't learn these skills for a job; they simply suited the job I found. If you enjoy what you're doing, and it builds problem-solving skills, you will be hard pressed to regret learning the skill.
That said, I started out answering phones and built from there. Fix people's problems and keep your eyes open for a job that lets you fix the kinds of problems you find interesting.
Cyber security
Honestly one of the most interesting parts to me as I enjoy the concept but it can be tricky to filter out bad information from good. Do you have any recommended readings on the subject, any books or info you would consider to be biblical in their importance or fundamental?
Just start with the free CC cert from ISC2. It's basically just an introduction to Infosec theories and terminology.
From there you have to decide if you want to work in analytics or GRC (governance, risk, compliance). First is more tech oriented and second is more policy and documentation, although many roles combine the two.
If you want to go the tech route, get your A+, Network+, and Security+ from CompTIA; then you can pick one of many fields like network security, systems security, and dev security.
For the GRC route, if you're in the US the NIST 800-53r5 publication is a great place to start, although it can be difficult to translate their vague wording into what work needs to be done.
This is way more in-depth and informative than I could have possibly hoped for, thank you so much. I'll get started with the CC from ISC2.
You're welcome! I just kind of fell into an Infosec role, so I had to do a lot of catching up on my own.
Why didn't they just ask ChatGPT to summarize it for them? /s
If you have your steak a little burnt already, then you can't fix that with more heat.
I see you too have eaten my father in law's steaks.
That's when you ask ChatGPT how to un-burn the steak! It probably involves glue, or perhaps sunblock.
"A little bleach will take that char right off and gives the steak a bold, vibrant flavor as well!"
but when you start to analyze its output it has a lot of holes that require someone trained in the art to fix.
I don't disagree, but that's not really what the article is saying.
The article is saying: GPT found a novel approach resulting in a solution where none existed before, presented it poorly - though still technically correctly - and they polished the output to make it more human friendly.
I have used the new LLMs for various things over the past few months, and the one constant is: for anything longer than a paragraph of output, you can get better results by reading the output yourself and feeding back "notes" on things to improve.
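That review loop can be sketched in a few lines. Note that `generate` and `notes_for` below are hypothetical stand-ins for an LLM call and a human review pass, not any real API:

```python
def refine(generate, notes_for, draft, max_rounds=3):
    """Iteratively improve a long LLM output by feeding reviewer notes back in.

    `generate` and `notes_for` are hypothetical callables: an LLM call and a
    human (or scripted) review pass. Stops early once the reviewer has no notes.
    """
    for _ in range(max_rounds):
        notes = notes_for(draft)   # read the output yourself, write down issues
        if not notes:              # reviewer is satisfied
            break
        # feed the notes back as part of the next prompt
        draft = generate(f"Revise the draft below.\n\nDraft:\n{draft}\n\nNotes:\n{notes}")
    return draft
```

The key design point is simply that the human stays in the loop between generations, which matches the experience above: unreviewed single-shot output is consistently worse.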
What happens 10 or 15 years from now, when all the current crop of experts are retired and all the experts who could have curated the AI output had to spend all that time as baristas instead because the AI took all of their entry level jobs?
Presumably, that next crop of experts will be curating AI output for 10-15 years before the current crop expires. Hopefully they learn what they're doing in that time.
It's not just that the next generation of experts will hypothetically be employed as baristas; I also don't think people take the risk of deskilling seriously enough. The next generation of would-be experts won't be as good at whatever they do, because they've learned to rely on AI. We risk effectively transferring valuable skills from humans to Musk- or Altman-owned chatbots. That should horrify everyone.
Ok, maybe not literally baristas. But my point is that the next generation of experts simply will not exist, because all the entry level jobs are evaporating. All of them. Just ask any group of college graduates with a tech degree about how hard the job market is right now.
Not disagreeing at all. The mass unemployment of a bunch of industries is terrible. I'm just saying the other side of the coin is also terrible, that we're heading towards a world where humans have lost the ability to perform important skills to (potentially hostile) chatbots (owned by billionaires) that we won't be able to properly manage or oversee. That's the flip side of most 'positive' AI stories: 'AI is better at detecting early breast cancer... And the doctors that use AI have gotten worse because of it.'
Also, there's a "thousand monkeys at a thousand typewriters" effect going on, but what people neglect to notice is that each of the thousand AI monkeys is (either out of necessity or mere curiosity) currently being supervised and edited by a brilliant mathematician who would otherwise be working on their own proofs and discoveries right now. And sure, one team might actually come up with a genuine Shakespeare-quality draft eventually, but even then you have to consider the opportunity cost of having 1,000 brilliant mathematicians reviewing monkey-typewriter output instead of each working on their own groundbreaking work more slowly and "traditionally". The work being delegated to AI isn't replacing human work; it's overriding it.
I don't know if all this AI work is a completely net-unproductive and worthless endeavour or not, but I do know we're not doing an honest accounting and AI companies have a huge incentive to cook the books to make it look way more productive than it actually is.
Also this:
"What’s beginning to emerge is that the problem was maybe easier than expected..."
My grandpa said using a calculator would spoil my math abilities.
Actually it spoiled my arithmetic tricks. Instead I had more time to learn things like vector calculus.
Yeah, but your calculator does math the same way every time, and doesn't hallucinate wrong answers seemingly at random.
This reminds me of a story my graph theory professor told me (long before LLMs). One of their grad students discovered that a subset of graphs that are of type A and type B at once has fantastic properties, such as fast searching and a few others useful in communication networks, etc.
Excited about their potential thesis, the student asked the professor to take a look. After working out which graphs actually are types A and B at the same time, the professor found that the intersection of those graph types is the null set. So the theoretically nice graphs the student "discovered" simply do not exist.
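The professor's sanity check can even be brute-forced on small cases. The real properties A and B from the story aren't named, so the two predicates below are deliberately incompatible toy stand-ins ("complete" and "edgeless"); the point is only that enumerating small graphs can reveal an empty intersection before anyone builds a thesis on it:

```python
from itertools import combinations

def all_graphs(n):
    """Yield every simple undirected graph on vertices 0..n-1 as a set of edges."""
    possible = list(combinations(range(n), 2))
    for bits in range(2 ** len(possible)):
        yield {e for i, e in enumerate(possible) if bits >> i & 1}

# Toy stand-ins for the story's unnamed "type A" and "type B":
def is_type_a(edges, n):
    return len(edges) == n * (n - 1) // 2   # complete: every pair connected

def is_type_b(edges, n):
    return len(edges) == 0                  # edgeless: no pair connected

def intersection_nonempty(n):
    """Does any graph on n vertices satisfy both properties at once?"""
    return any(is_type_a(g, n) and is_type_b(g, n) for g in all_graphs(n))
```

For any n of at least 2 this reports an empty intersection, which is the student's situation: the combined class sounded rich, but no graph actually belongs to it. (Brute force only scales to a handful of vertices; the professor presumably proved it directly.)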
Easy to be surprised when you don't know how the magic box works. Basically magic.