this post was submitted on 24 Feb 2024
217 points (95.8% liked)

With my 14.5TB hard drive nearly full, and wanting to hold off a bit longer before shelling out for a 60TB RAID array, I've been trying to replace as many x264 releases in my collection as I can with x265 releases of equivalent quality. While popular movies are usually available in x265, less popular titles and TV shows tend to have fewer x265 options, with low-quality MeGusta encodes often being the only x265 choice.

While x265 playback is more demanding than x264 playback, its compatibility is much closer to x264's than that of the newer x266 codec. Is there a reason many release groups still opt for x264 over x265?
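As a rough back-of-envelope check on the motivation here, this sketch estimates the space reclaimed by re-encoding a collection. The 0.6 bitrate ratio is purely an assumption for illustration (x265 is often cited as reaching similar quality at roughly 40-50% smaller sizes); real savings depend heavily on content and settings.

```python
# Back-of-envelope estimate of space reclaimed by re-encoding x264 to x265.
# The 0.6 ratio is an assumed illustrative figure, not a measured one;
# actual savings vary a lot with content, resolution, and encoder settings.

def estimated_savings_tb(collection_tb: float, x265_ratio: float = 0.6) -> float:
    """Return TB freed if the whole collection were re-encoded at the given size ratio."""
    return collection_tb * (1.0 - x265_ratio)

print(estimated_savings_tb(14.5))  # roughly 5.8 TB freed on a full 14.5 TB drive
```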

[–] Shimitar@feddit.it 31 points 9 months ago (3 children)

Some notes: don't use the GPU to re-encode; you will lose quality.

Don't worry about long encoding times, especially if the objective is long-term storage.

Power consumption might be significant. I run mine while the sun shines and my photovoltaic system picks up the tab.

And go AV1: it's open source, and the big players seem pretty committed to it, much more than to H.265.

[–] pastermil@sh.itjust.works 14 points 9 months ago (5 children)

Why is the GPU reencoding bad for the quality? Any source for this?

[–] bhamlin@lemmy.world 8 points 9 months ago* (last edited 9 months ago)

I have some comments based on personal experiences with GPU av1 encoding: you will always end up with either larger or worse output with GPU encoding because currently all the encoders have a frame deadline. It will only try for so long to build frame data. This is excellent when you are transcoding live. You can ensure that you hit generation framerate goals that way. If you disable the frame deadline, it's much much slower.

CPU encoders, meanwhile, don't have this deadline, because the CPU is almost never used for live transcoding. And even with a frame deadline, the CPU still wouldn't match the GPU's speed. But CPU encoders will get frames as small as you ask for.

So if you need a fast transcode of anything, GPU is your friend. If you're looking for the smallest highest quality for archival, CPU reference encoders are what's needed.
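The fast-transcode versus archival split above maps neatly onto two different ffmpeg invocations. This sketch builds both; `hevc_nvenc` and `libx265` are real ffmpeg encoder names, but the file names and the specific preset/CRF values are illustrative choices, not recommendations.

```python
# Sketch of the two workflows described above: GPU for speed, CPU for archival.
# "hevc_nvenc" (NVIDIA hardware) and "libx265" (software) are real ffmpeg
# encoders; the preset and CRF values here are example settings only.

def transcode_cmd(src: str, dst: str, archival: bool) -> list[str]:
    if archival:
        # CPU reference encoder: slow preset, quality-targeted CRF.
        codec_args = ["-c:v", "libx265", "-preset", "slow", "-crf", "20"]
    else:
        # GPU encoder: fast and real-time friendly, but larger/worse output.
        codec_args = ["-c:v", "hevc_nvenc", "-preset", "fast"]
    return ["ffmpeg", "-i", src] + codec_args + ["-c:a", "copy", dst]

print(" ".join(transcode_cmd("in.mkv", "out.mkv", archival=True)))
```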

[–] cuppaconcrete@aussie.zone 6 points 9 months ago (2 children)

Yeah, that caught my eye too; it seems odd. Most compression/encoding schemes benefit from a large dictionary, but I don't think the process would be constrained by a GPU sometimes having less RAM than the main system: in most cases even the smaller amount would make the dictionary larger than the video file itself. I'm curious.

[–] icedterminal@lemmy.world 21 points 9 months ago (1 children)

It's not odd at all; it's well established. Ask any video editor in the professional field, or search the Internet yourself. Better yet, do a test run with ffmpeg, the software that does the encoding and decoding; it's open source and anyone can download it.

Hardware-accelerated processing is faster because it takes shortcuts. It's handled by dedicated hardware found in GPUs. There are parameters defined at the firmware level of the GPU that are out of your control and cannot be changed; they are what allow hardware-accelerated encoding to be so fast. The tradeoff is quality and file size (larger) in exchange for faster processing and lower power consumption. If quality is your concern, you never use a GPU. No matter which one you use (AMD AMF, Intel QSV, or Nvidia NVENC/NVDEC/CUDA), you're going to end up with a video that appears more blocky or grainy at the same bitrate. These flaws are called "artifacts" and make videos look bad.

Software processing uses the CPU entirely. You have granular control over the entire process. There are preset parameters used if you don't define your own, but every single one of them can be overridden. Because it's inherently limited by the power of your CPU, it's slower and consumes more power.

I can go a lot more in depth, but I'm choosing to stop here because this comment could get absurdly long.
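To make the "granular control" point concrete: x265 accepts dozens of key=value options that ffmpeg passes through via its `-x265-params` flag (a real mechanism). This sketch just builds that colon-separated string; the specific option values are hypothetical examples, not tuning advice.

```python
# Illustration of the fine-grained control a software encoder exposes:
# x265 takes many key=value options, joined with ":" for ffmpeg's
# -x265-params flag. The values below are examples, not recommendations.

def x265_params(opts: dict[str, str]) -> str:
    """Join x265 options into the colon-separated string ffmpeg expects."""
    return ":".join(f"{k}={v}" for k, v in opts.items())

params = x265_params({"bframes": "8", "ref": "6", "aq-mode": "3"})
print(params)  # bframes=8:ref=6:aq-mode=3
```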

[–] cuppaconcrete@aussie.zone 2 points 9 months ago (4 children)

My understanding is that all of the codecs we are discussing are deterministic. If you have evidence to the contrary I'd love to see it.

[–] RvTV95XBeo@sh.itjust.works 11 points 9 months ago

GPU encoders like NVENC run their own algorithms that are optimized for the graphics card. The output is compatible with what x265 produces (standard HEVC), but the encoder is not identical, and there are far fewer options to tweak to optimize your video.

The encode is orders of magnitude faster, but (in my experience) objectively worse, introducing lots of artifacts.

[–] icedterminal@lemmy.world 10 points 9 months ago

The evidence you want is something you can gather or search the Internet for yourself; there are thousands of results. CPU encoding beats GPU encoding no matter which codec you use, and this hasn't changed in decades. Here's one of many sources, direct from a software developer:

https://handbrake.fr/docs/en/latest/technical/performance.html

[–] Randomgal@lemmy.ca 4 points 9 months ago (3 children)

This. It sounds really odd to me that the GPU would make what are pretty much just math calculations somehow "different" from what the CPU would do.

[–] entropicdrift@lemmy.sdf.org 11 points 9 months ago (1 children)

GPU encoders basically all run at the equivalent of "fast" or "veryfast" CPU encoder settings.

Most high quality, low size encodes are run at "slow" or "veryslow" or "placebo" CPU encoder settings, with a lot of the parameters that aren't tunable on GPU encoders set to specific tunings depending on the content type.
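The rough equivalence described above can be sketched as a mapping against the real x264/x265 preset ladder. This is an approximation for intuition only, per the claim in the comment, and not a documented equivalence between any GPU encoder and these presets.

```python
# Rough mental model from the comment above: GPU encoders behave like the
# fastest software presets. An approximation for intuition, not a
# documented equivalence.

X264_X265_PRESETS = [  # real x264/x265 preset names, fastest to slowest
    "ultrafast", "superfast", "veryfast", "faster", "fast",
    "medium", "slow", "slower", "veryslow", "placebo",
]

def gpu_equivalent_presets() -> list[str]:
    """Presets a GPU encoder roughly corresponds to (per the claim above)."""
    return [p for p in X264_X265_PRESETS if p in ("fast", "veryfast")]

print(gpu_equivalent_presets())  # ['veryfast', 'fast']
```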

[–] effward@lemmy.world 2 points 9 months ago

NVENC has a slow preset:

https://docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html#command-line-for-latency-tolerant-high-quality-transcoding

As they expand the NVENC options that are exposed on the command line, is it getting closer to CPU-encoding level of quality?

[–] conciselyverbose@sh.itjust.works 6 points 9 months ago* (last edited 9 months ago)

So the GPU encoding isn't using the GPU cores. It's using separate fixed hardware. It supports way less operations than a CPU does. They're not running the same code.

But even if you did compare GPU cores to CPU cores, they're not the same. GPUs also have a different set of operations from a CPU, because they're designed for different things. GPUs have a bunch of "cores" bundled under one control unit. They all do the exact same operation at the same time, and have significantly less capability beyond that. Code that diverges a lot, especially if there's not an easy way to restructure data so all 32 cores under a control unit* branch the same way, can pretty easily not benefit from that capability.

As architectures get more complex, GPUs are adding things that there aren't great analogues for in a CPU yet, and CPUs have more options to work with (smaller) sets of the same operation on multiple data points, but at the end of the day, the answer to your question is that they aren't doing the same math, and because of the limitations of the kind of math GPUs are best at, no one is super incentivized to try to get a software solution that leverages GPU core acceleration.

*last I checked, that's what a warp on nvidia cards was. It could change if there's a reason to.

Every encoder does different math calculations. Different software and different software profiles do different math calculations too.

Decoding is deterministic. Encoding depends on the encoder.

[–] db2@lemmy.world 7 points 9 months ago

The way it was explained to me once is that the ASIC in the GPU makes assumptions that are baked into the chip. It made sense, because they can't reasonably "hardcode" for every possible variation of input the chip will get.

The great thing, though, is that if you're transcoding you can use the GPU for the decoding half, which works fine and frees up more CPU for the encoding half.
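That hybrid pipeline can be sketched as a single ffmpeg invocation: hardware-accelerated decode (which is quality-neutral) feeding a software encode. `-hwaccel cuda` and `libx265` are real ffmpeg options; the file names and the preset/CRF values are illustrative.

```python
# Hybrid pipeline from the comment above: GPU decodes, CPU encodes.
# "-hwaccel cuda" and "libx265" are real ffmpeg options; file names,
# preset, and CRF are placeholder/example values.

def hybrid_transcode_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg",
        "-hwaccel", "cuda",           # GPU handles decoding, freeing the CPU
        "-i", src,
        "-c:v", "libx265",            # encoding stays on the CPU for quality
        "-preset", "slow", "-crf", "20",
        "-c:a", "copy",
        dst,
    ]

print(" ".join(hybrid_transcode_cmd("in.mkv", "out.mkv")))
```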

[–] BehindTheBarrier@programming.dev 4 points 9 months ago* (last edited 9 months ago)

Already been explained a few times, but GPU encoders are hardware with fixed options, with some leeway in presets and such. They are specialized to handle a set of profiles.

They use methods that work well in the specialized hardware. They don't have the memory a software encoder can use, for example, to comb through a large number of frames, but they can specialize the encoding flow and hardware to the calculations. Hardware encoders cannot do everything software encoders do, nor can they be as thorough, because of those constraints.

Even the decoders are like that: for example, my player will crash trying to hardware-decode AV1 encoded with super-resolution frames, i.e. frames with a lower resolution that are supposed to be upscaled by the decoder (a feature of AV1 that hardware decoder profiles do not support, afaik).

[–] Kissaki@feddit.de 2 points 9 months ago

GPU encoding means you're using whatever encoder the GPU and its driver provide, which can be worse than software encoders. Software encoders exist solely to encode; on a GPU, encoding is one feature of many, and it doesn't necessarily aim for the same high bar.

[–] Shimitar@feddit.it -3 points 9 months ago

Not really; I don't do GPU encoding anyway, so I can't speak first-hand.

But everybody says so on all the forums, so maybe it's true.

[–] MonkderZweite@feddit.ch 11 points 9 months ago* (last edited 9 months ago)

Yep, GPU de- and encoding is high-speed, but often lower quality and limited to older codec versions. It's a common mistake to think that GPU = better.

[–] Zedstrian@lemmy.dbzer0.com 4 points 9 months ago (3 children)

In order to encode to a specific format without unintentionally losing quality, doesn't the initial file have to be a remux?

[–] kamiheku@sopuli.xyz 7 points 9 months ago

Yes, that's right. But the point stands: you indeed shouldn't do such encoding on the GPU; it's a tradeoff of (fast) speed versus (poor) quality and (big) size. Good for when you need realtime encoding.

[–] zeluko@kbin.social 1 points 9 months ago

You can downsample from Blu-ray, which would give you the least loss.
But if you only have a good h264 version and want space savings, you can also re-encode that, while probably losing some small amount of quality, depending on your settings.

[–] Shimitar@feddit.it 1 points 9 months ago

Indeed, but YMMV; to me the quality is still good if the source was not a remux but a top-quality encoding.