tal

joined 2 years ago
[–] tal@lemmy.today 1 points 1 month ago* (last edited 1 month ago)

We still get everything compressed

I don't know whether sound engineers are still doing it, but the streaming services removed the volume benefit of doing so. If you use dynamic range compression (DRC), your music gets cut in volume. DRC reduces audio quality on both CDs and streaming services, but before, there was at least a volume edge to gain, and now that's gone.

[–] tal@lemmy.today 6 points 1 month ago* (last edited 1 month ago) (2 children)

physical media CDs for music

My understanding is that the streaming services basically ended the loudness war by imposing ReplayGain-style volume normalization. I'm not sure that I want to restart it.
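To get a feel for what that normalization is doing, here's a rough sketch using ffmpeg's loudnorm filter; the -14 LUFS target is just a commonly-quoted streaming ballpark, not any particular service's actual figure:

# Measure a track's integrated loudness; the summary prints to stderr.
ffmpeg -i track.flac -af loudnorm=print_format=summary -f null -

# Normalize toward a -14 LUFS target with a -1.5 dBTP true-peak ceiling.
ffmpeg -i track.flac -af loudnorm=I=-14:TP=-1.5:LRA=11 normalized.flac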

[–] tal@lemmy.today 5 points 1 month ago

I'd mostly be interested for E2E encryption.

[–] tal@lemmy.today 3 points 1 month ago (1 children)

The horns made me sorely miss the inoffensive chaos of Mexico.

And yet!

https://www.youtube.com/watch?v=6tD9MouLKB8

Relaxing Life Ambiance

2 Hours of Mumbai Traffic Sounds | ASMR City Noise & Honking Horns for Sleep, Relaxation & Focus

[–] tal@lemmy.today 5 points 1 month ago

Third, it has the network effect going for it. Nobody is going to watch videos on your platform if there are only a couple dozen of them total. The sheer size and scope of YouTube mean that no matter what you're looking for, you can find something to watch.

Yeah, though I think that you could avoid some of that with a good cross-video-hosting-service search engine, as I don't think that most people are engaging with the social media aspect of YouTube. YouTube doesn't have a monopoly on indexing YouTube videos.

But the scale doesn't hurt them, that's for sure.

[–] tal@lemmy.today 2 points 1 month ago

I did see some depth=1 or something like that to get only a certain depth of git commits, but that's about it.

Yeah, that's a shallow clone. It reduces what gets pulled down, and I did try that (you most likely want a bit more, probably also asking to only pull down data from a single branch), but back when I was crashing into this, it wasn't enough for the Cataclysm repo.
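For reference, a sketch of the fuller incantation, using the Cataclysm repo as the example (--depth implies --single-branch on current git, but being explicit doesn't hurt):

# Pull only the latest commit, and only from one branch.
git clone --depth=1 --single-branch --branch master https://github.com/CleverRaven/Cataclysm-DDA.git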

It looks like it's fixed as of early this year; I updated my comment above.

[–] tal@lemmy.today 1 points 1 month ago* (last edited 1 month ago) (2 children)

Thanks. Yeah, I'm pretty sure that that was what I was hitting. Hmm. Okay, that's actually good...so it's not a git bug, then, but something problematic in GitHub's infrastructure.

EDIT: On that bug, they say that they fixed it a couple months ago:

This seems to have been fixed at some point during the last days leading up to today (2025-03-21), thanks in part to @MarinoJurisic 's tireless efforts to convince Github support to revisit this problem!!! 🎉

So hopefully it's dead even for GitHub specifically. Excellent. Man, that was obnoxious.

[–] tal@lemmy.today 3 points 1 month ago* (last edited 1 month ago) (4 children)

A bit of banging away later...I haven't touched Linux traffic shaping in some years...I've got a quick-and-dirty script to set a machine up to temporarily simulate a slow inbound interface for testing.

slow.sh test script

#!/bin/bash
# Linux traffic-shaping occurs on the outbound traffic.  This script
# sets up a virtual interface and places inbound traffic on that virtual
# interface so that it may be rate-limited to simulate a network with a slow inbound connection.
# Removes induced slow-down prior to exiting.  Needs to run as root.

# Physical interface to slow; set as appropriate
oif="wlp2s0"

# Redirect all inbound traffic on the physical interface to a virtual IFB
# device; shaping rules attached to ifb0's egress then effectively
# rate-limit our ingress.
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev $oif handle ffff: ingress
tc filter add dev $oif parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0

# HTB queueing discipline on ifb0: everything falls into class 1:10,
# which is capped at 1mbit.
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbit
tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 1mbit

echo "Rate-limiting active.  Hit Control-D to exit."
# Block until the user closes stdin with Control-D.
cat

# shut down rate-limiting
tc qdisc delete dev $oif ingress
tc qdisc delete dev ifb0 root
ip link set dev ifb0 down
rmmod ifb

I'm going to see whether I can still reproduce that git failure for Cataclysm on git 2.47.2, which is what's in Debian trixie. As I recall, it got a fair bit of the way into the download before bailing out. I'm including the script here, since I think the article makes a good point that there should probably be more slow-network testing, and maybe someone else wants to test something of their own on a slow network.
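Usage is just (assuming it's saved as slow.sh):

# Terminal 1: bring the rate limit up; Control-D tears it back down.
sudo ./slow.sh

# Terminal 2: run whatever you want to test over the simulated slow link.
git clone https://github.com/CleverRaven/Cataclysm-DDA.git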

Probably be better to have something a little fancier to only slow traffic for one particular application...maybe create a "slow Podman container" and match on traffic going to that?...but this is good enough for a quick-and-dirty test.
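A rough sketch of that fancier version, assuming the container sits at 10.88.0.5 on Podman's default bridge (a made-up address; podman inspect would give the real one): flip the default to a full-speed class and send only the container's traffic to the slow one.

# Default class 1:20 runs at effectively full speed; only packets
# addressed to the container's IP land in the 1mbit class.
tc qdisc add dev ifb0 root handle 1: htb default 20
tc class add dev ifb0 parent 1: classid 1:10 htb rate 1mbit
tc class add dev ifb0 parent 1: classid 1:20 htb rate 1000mbit
tc filter add dev ifb0 parent 1: protocol ip u32 match ip dst 10.88.0.5/32 flowid 1:10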

[–] tal@lemmy.today 11 points 1 month ago* (last edited 1 month ago) (8 children)

This low bandwidth scenario led to highly aggravating scenarios, such as when a web app would time out on [Paul] while downloading a 20 MB JavaScript file, simply because things were going too slow.

Two major applications I've used that don't deal well with slow cell links:

  • Lemmyverse.net runs an index of all Threadiverse instances and all communities on all instances, and presently is an irreplaceable resource for a user on here who wants to search for a given community. It loads an enormous amount of data for the communities page, and has some sort of short timeout. Whatever it's pulling down internally...I didn't look...either isn't cached or is a single file, so reloading the page restarts from the start. The net result is that it won't work over a slow connection.

  • This may have been fixed, but git had a serious stretch where it would smash into timeouts and not work on slow links, at least to GitHub. This made it impossible to clone larger repositories; I remember failing to clone the Cataclysm: Dark Days Ahead repository, where one couldn't even manage a shallow clone. This was greatly exacerbated by the fact that git does not presently have the ability to resume downloads if a download is interrupted. I've generally wound up working around this by git cloning to a machine on a fast connection, then using rsync to pull the repository over to the machine on the slow link (sketched below), which, frankly, is a little embarrassing when one considers that git really is the premier distributed VCS out there in 2025, and really shouldn't need that sort of workaround.
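That workaround, sketched; fastbox stands in for whatever well-connected machine you have shell access to:

# On fastbox, which has the fast connection: clone normally.
git clone https://github.com/CleverRaven/Cataclysm-DDA.git

# Then, from the machine behind the slow link: rsync, unlike git, can
# resume where it left off if the transfer gets interrupted.
rsync -az --partial --info=progress2 fastbox:Cataclysm-DDA/ Cataclysm-DDA/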
[–] tal@lemmy.today 8 points 1 month ago* (last edited 1 month ago) (2 children)

sabotage

Microsoft's interest in Nokia was in being able to compete with what is now a Google-Apple duopoly in phones. They wanted to own a mobile platform. I am very confident that they did not want their project to flop. That being said, they'll have had their own concerns and interests. Maybe Nokia would have done better going down the Apple or Google path, but for Microsoft, the whole point was to get Microsoft-platform hardware out there.

[–] tal@lemmy.today 20 points 1 month ago (6 children)

And Amazon says it will help train 4 million people in AI skills and “enable AI curricula” for 10,000 educators in the US by 2028, while offering $30 million in AWS credits for organizations using cloud and AI tech in education.

So, at some point, we do have to move on policy, but frankly, I have a really hard time trying to predict what skillset will be particularly relevant to AI in ten years. I have a hard time knowing exactly what the state of AI itself will be in ten years.

Like, sure, in 2025, it's useful to learn the quirks and characteristics of LLMs or diffusion models to do things with them. I could sit down and tell people some of the things that I've run into. But...that knowledge also becomes obsolete very quickly. A lot of the issues and useful knowledge for working with, say, Stable Diffusion 1.5 are essentially irrelevant as regards Flux. For LLMs, I strongly suspect that there are going to be dramatic changes surrounding reasoning and retaining context. Like, if you put education time into training people on that, you run the risk that they don't learn stuff that's relevant over the longer haul.

There have been major changes in how all of this works over the past few years, and I think that it is very likely that there will be continuing major changes.

[–] tal@lemmy.today 3 points 1 month ago* (last edited 1 month ago)

Here's an example from the BBC:

https://www.youtube.com/watch?v=qeUM1WDoOGY

Why Do Cats Miaow? | Cats Uncovered | BBC

Though, just to mix things up, the BBC Earth YouTube channel appears to use title-case capitalization in that title, which is typically an American English style. The main BBC YouTube channel appears to use sentence case, the more usual British English convention.

So there's always the possibility that people aren't super-religious about which form of English they use.
