Yaztromo

joined 2 years ago
[–] Yaztromo@lemmy.world 1 points 1 month ago

Apple recently discontinued the USB-A-only SuperDrive, and replaced the Magic Mouse with a USB-C variant.

Other than perhaps old stock, the only Lightning device Apple is still selling anywhere is the iPhone SE, which appears due for replacement soon.

[–] Yaztromo@lemmy.world 3 points 1 month ago

Agreed — this is overall a really, really good thing for consumers. Now that my MacBook Pro, iPad Pro, and iPhone Pro all use USB-C it’s trivial to swap devices between them, and generally they all just work. The USB-C Ethernet adaptor I have for my MBP works with my iPad Pro and iPhone Pro. As do Apple’s USB-A/USB-C/HDMI adaptors. And my USB-C external drives and USB sticks. And my PS5 DualSense controllers. And the 100W lithium battery pack with 60W USB-PD output. Heck, even the latest Apple TV remote is USB-C.

AFAIK, this is the first time ever that there is one single connector that works across their entire lineup of devices. Even the original Apple I (back when it was the only device they sold) had several different connector types. Now we have one connector to rule them all, and while the standard has its issues, it’s quite a bit better than the old days when everything had a different connector.

[–] Yaztromo@lemmy.world 4 points 1 month ago

It’s worth remembering, however, that there weren’t a lot of options for a standardized connector back when Apple made the first switch in 2012. The USB-C connector wasn’t published until two years after Lightning was released to the public. Lightning was much better than the then-available standard, micro USB-B: it allowed for thinner phones and devices, and it could carry video and audio (which Android phones of the time achieved over micro USB-B only by violating the USB standard).

Also worth noting here is that the various Macs made the switch to USB-C before most PCs did, and the iPad Pro made the switch all the way back in 2018 — long before the EU started making noise about forcing everyone to use USB-C. So Apple has a history of pushing USB-C; at least for devices where there wasn’t a mass market of bespoke docks that people were going to be pissed off at having to scrap and replace.

I’ll readily agree we’re in a better place today — I’m now nearly 100% USB-C for all my modern devices, with the one big holdout being my car: even though it’s an expensive 2024 EV model, it still came with USB-A. I have several USB-A to USB-C cables in the car for charging small devices, but can’t take advantage of USB-PD to charge and run my MacBook Pro. But I suspect Apple isn’t as bothered by this change as everyone thinks they are. They finally get to standardize on one connector across their entire lineup of devices for the first time ever, and don’t have to take the blame for it. Sounds win-win to me.

[–] Yaztromo@lemmy.world 9 points 1 month ago (7 children)

I’m still of the opinion that Apple benefitted from this legislation, and that they know it. They never fought this decision particularly hard — and ultimately, it’s only going to help Apple move forward.

I’m more than old enough to remember the last time Apple tried changing connectors from the 30-pin connector to the Lightning connector. People (and the press) were apoplectic that Apple changed the connector. Everything from cables to external speakers to alarm clocks and other accessories became useless as soon as you upgraded your iPod/iPhone — the 30-pin connector had been the standard connector since the original iPod, and millions of devices used it. Apple took a ton of flak for changing it — even though Lightning was a pretty significant improvement.

That’s not happening this time, as Apple (and everyone else) can point to and blame the EU instead. If Apple had made this change on their own, they would likely have been pilloried in the press (again) for making so many devices and cables obsolete nearly overnight — but at least this way they can point at the EU and say “they’re the ones making us do this” and escape criticism.

[–] Yaztromo@lemmy.world 1 points 4 months ago

> The Fediverse by design prevents this, while the internet of the old age had little if any guardrails against this specially since the platforms never really federated with another.

I see someone is too young to remember USENET.

[–] Yaztromo@lemmy.world 7 points 5 months ago (6 children)

The Fediverse is a bit more like the old USENET days in some regards, but ultimately if it ever becomes more popular the same assholes that ruin other online experiences will also wind up here.

What made the Internet more exciting 30 years ago was that it was mostly made up of the well educated and dedicated hobbyists, who had an interest in generally keeping things decent. We didn’t have the uber-lock-in of a handful of massive companies running everything.

It’s all Eternal September. There’s no going back at this point — any new medium that becomes popular will attract the same forces making the current Internet worse.

[–] Yaztromo@lemmy.world 4 points 5 months ago

Depends on what you mean by “back in the day”. So far as I know you could be ~30, and “back in the day” for you is the 2005 era.

For some of us “back in the day” is more like the early 90’s (and even earlier than that if we want to include other online services, like BBS’s) — and the difference since Eternal September is pretty stark (in both good and bad ways).

[–] Yaztromo@lemmy.world 4 points 5 months ago

There are a lot of manufacturer-agnostic smart home devices out there, and with just a tiny bit of research online it’s not difficult to avoid anything that is overly tied to a cloud service. Z-Wave, Zigbee, and Thread/Matter devices are all locally controlled and don’t require a specific company’s app or ecosystem — it’s really only the cheapest, bottom-of-the-barrel Wi-Fi-based devices relying on cloud services that you have to be careful of. As with anything, you get what you pay for.
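To make the “locally controlled” point concrete, here’s a minimal sketch of how a Zigbee device is driven when bridged through something like zigbee2mqtt: commands are just JSON payloads published to a topic on your own LAN broker, with no cloud round-trip. The device name `front_door_lock` is hypothetical, used purely for illustration.

```python
import json

def set_topic(device: str, base: str = "zigbee2mqtt") -> str:
    """Build the local command topic zigbee2mqtt listens on for a device."""
    return f"{base}/{device}/set"

def lock_payload(lock: bool) -> str:
    """JSON payload telling a lock-type device to lock or unlock."""
    return json.dumps({"state": "LOCK" if lock else "UNLOCK"})

# Any MQTT client (e.g. paho-mqtt) pointed at the LAN broker can then do:
#   client.publish(set_topic("front_door_lock"), lock_payload(True))
print(set_topic("front_door_lock"))   # zigbee2mqtt/front_door_lock/set
print(lock_payload(True))             # {"state": "LOCK"}
```

If the internet (or the manufacturer) disappears, nothing in this path changes — the broker, the bridge, and the radio are all in your house.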

Even if the Internet were destroyed tomorrow, my smart door locks would continue to function — not only are they Z-wave based (so local control using a documented protocol which has Open Source drivers available), but they work even if not “connected”. I can even add new door codes via the touchscreen interface if I wanted to.

The garage door scenario is a bit trickier, as there aren’t a lot of good “open” options out there. However, AFAIK all of them continue to work as a traditional garage door opener if the online service becomes unavailable. I have a smart Liftmaster garage door opener (which came with the house when we bought it), and while its manufacturer has done some shenanigans with their API to force everyone to use their app (which doesn’t integrate with anything), it still works as a traditional non-smart garage door opener. The button in the garage still works, as does the keypad on the outside of the garage, the remotes it came with, and the Homelink integration in both of our vehicles.

With my IONIQ 5, the online features, while nice, are mostly just a bonus. The car still drives without them, and the climate control still works without being online — most of what I lose are “nice-to-have” features like remote door lock/unlock, live weather forecasts, calendar integration, and remote climate control. But it isn’t as if the car stops being drivable if the online service goes down. And besides, so long as CarPlay and Android Auto are supported, I can always rely on them for many of the same functions.

Some cars have much more integration than mine — and the loss of those services may be more annoying.

[–] Yaztromo@lemmy.world 11 points 7 months ago (1 children)

…until the CrowdStrike agent updated, and you wind up dead in the water again.

The whole point of CrowdStrike is to be able to detect and prevent security vulnerabilities, including zero-days. As such, they can release updates multiple times per day. Rebooting in a known-safe state is great, but unless you follow that up by preventing the agent from redownloading the sensor configuration update, you’re just going to wind up in a BSOD loop again.

A better architectural solution would have been to have Windows drivers run in Ring 1, giving the kernel the ability to isolate those that misbehave. But that risks a small decrease in performance, and Microsoft didn’t want that, so we’re stuck with a Ring 0/Ring 3-only architecture in Windows that can cause issues like this.

[–] Yaztromo@lemmy.world 7 points 7 months ago

> That company had the power to destroy our businesses, cripple travel and medicine and our courts, and delay daily work that could include some timely and critical tasks.

Unless you have the ability and capacity to develop your own ISA/CPU architecture, firmware, OS, and every tool you use from the ground up, you will always, at some point, be “relying on others’ stuff”, which can break on you at a moment’s notice.

That could be Intel, or Microsoft, or OpenSSH, or CrowdStrike^0. Very, very, very few organizations can exist in the modern computing world without relying on others’ code/hardware (with the main two that come to mind, outside smaller embedded systems, being IBM and Apple).

I do wish that consumers had held Microsoft more to account over the last few decades to properly use the Intel protection rings (if the CrowdStrike driver were able to run in Ring 1, it’s possible the OS could have isolated it and prevented a BSOD, but instead it runs in Ring 0 with the kernel and is able to damage anything and everything) — but that horse appears to be long out of the barn (enough so that X86S proposes only having Ring 0 and Ring 3 for future processors).

But back to my basic thesis: saying “it’s your fault for relying on other people’s code” is unhelpful and overly reductive, as in the modern day it’s virtually impossible to avoid. Even fully auditing your stacks is prohibitive. There is a good argument to be made about not living in a compute monoculture^1, and lots of good arguments against ever using Windows^2 (especially in the cloud) — but those aren’t the arguments you’re making. Saying “this is your fault for relying on other people’s stuff” is unhelpful — and I somehow doubt you designed your own ISA, CPU architecture, firmware, OS, network stack, and application code to post your comment.

---
^0 — Indeed, all four of these organizations/projects have let us down like this: Intel with Spectre/Meltdown, Microsoft with the 28-day 32-bit Windows reboot bug, OpenSSH with the just-announced regreSSHion, and CrowdStrike with the Falcon Sensor outage itself.
^1 — My organization was hit by the Falcon Sensor outage — our app tier layers running on Linux and developer machines running on macOS were unaffected, but our DBMS is still a legacy MS SQL box, so the outage hammered our stack pretty badly. We’ve fortunately been well funded to remove our dependency on MS SQL (and Windows in general), but that’s a multi-year effort that won’t pay off for some time yet.
^2 — my Windows hate is well documented elsewhere.

[–] Yaztromo@lemmy.world 3 points 9 months ago

I certainly wouldn’t run to HR right away — but unfortunately, it’s sometimes true that people just aren’t a good fit, for whatever reason. Deadweight that can’t accomplish the tasks that need to be done doesn’t do you any favours: if you’re doing your job and their jobs because they can’t handle the work, that’s hardly fair to you, and it isn’t doing the organization any good. Eventually you’ll burn out, nobody will pick up the slack, and everyone will suffer for it.

My first instinct in your situation, however, would be that everyone has got used to the status quo, including the staff you have to constantly mentor. Hopefully, coaching them into doing the work for themselves and keeping them accountable to tasks and completion dates will help change the dynamic.

[–] Yaztromo@lemmy.world 18 points 9 months ago (2 children)

I’m a tech manager with a 100% remote team of seven employees. We’re a very high performing team overall, and I give minimal hand-holding while still fostering a collaborative working environment.

First off, you need to make outcomes clear. Assign tasks, and expect them to get done in a reasonable timeframe. But beyond that, there should be no reason to micro-manage actual working hours. If a developer needs some time during the day to run an errand and wants to catch up in the evening, fine by me. I don’t need them to be glued to their desk 9-5/10-6 or for some set part of the day — so long as the tasks are getting done in reasonable time, I let my employees structure their working hours as they see fit.

Three times a week we have regular whole-team checkins (MWF), where everyone can give a status update on their tasks. This helps keep up accountability.

Once a month I reserve an hour for each employee to just have a general sync-up. I allow the employee to guide how this time is used — whether they want to talk about issues with outstanding tasks, problems they’re encountering, their personal lives, or just “shoot the shit”. I generally keep these meetings light and employee-directed, and it gives me a chance to stay connected with them on both a social level and understand what challenges they might be facing.

And that’s it. I’ve actually gone as far as having certain employees who were being threatened with back-to-office mandates converted to “remote employee” in the HR database, so HR would have to lay off threatening them. Only 2 of my 7 employees are even in the same general area of the globe (they’re spread across 3 different countries at the moment), and I don’t live somewhere with an office, so having some employees forced to report to an office doesn’t help me in the slightest — I can’t be in 6 places at once, and I live far enough away that I can’t be in any of those places on a regular basis!

Your employees may have got used to you micro-managing them. Changing this won’t happen overnight. Change from a micro-manager into a coach, and set them free. And if they fail…then it’s time to talk to HR and to see about making some changes. HTH!
