this post was submitted on 21 Jan 2024
36 points (97.4% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ


Hello! I'd like to write a script to download videos from streamingcommunity.estate given a video URL, and to do that I need the m3u8 file URL. Currently I find it manually in the browser's network tab, but I'd like the script to do this automatically. Do you know of a way to achieve this? Bash or Python if possible, otherwise any other method will do. Thanks in advance!
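
One possible approach (a sketch, not a tested solution for this particular site): automate what the network tab does by driving a headless browser, recording every request the page makes, and keeping the URLs that contain `.m3u8`. The snippet below uses Playwright; whether the playlist is requested on page load or only after clicking the player is an assumption, so a real script may need an extra interaction step or a longer wait.

```python
# Minimal sketch using Playwright (pip install playwright; playwright install chromium).
# Assumption: the player requests the .m3u8 playlist on page load; some sites only do so
# after the play button is clicked, in which case a page.click(...) step would be needed.
from playwright.sync_api import sync_playwright


def find_m3u8_urls(page_url: str) -> list[str]:
    """Open the page in a headless browser and collect request URLs containing '.m3u8'."""
    found: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Record every network request the page makes, like the browser's network tab.
        page.on("request", lambda req: found.append(req.url) if ".m3u8" in req.url else None)
        page.goto(page_url, wait_until="networkidle")
        browser.close()
    return found


if __name__ == "__main__":
    import sys
    for url in find_m3u8_urls(sys.argv[1]):
        print(url)
```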

[–] ISOmorph@feddit.de 7 points 10 months ago* (last edited 10 months ago) (2 children)

youtube-dl does something similar (it works on a lot more than just YouTube). AFAIK each site needs its own slightly different extractor code. You could have a look at the source on their GitHub.

It might be as easy as forking the project and creating your own extractor.py.
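
For illustration, a rough sketch of what a minimal extractor module in the yt-dlp style can look like. This is not an extractor from the real repo: the `_VALID_URL` pattern, the regex used to find the playlist URL, and the class name are all assumptions, since every site embeds the playlist differently.

```python
# Hypothetical extractor sketch in the yt-dlp style (not part of the real project).
# The URL pattern and the regex below are assumptions about how the site embeds the
# playlist; a real extractor would be written against the actual page markup.
from yt_dlp.extractor.common import InfoExtractor


class StreamingCommunityIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?streamingcommunity\.example/watch/(?P<id>\d+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        # Assumption: the .m3u8 URL appears somewhere in the page source or player config.
        m3u8_url = self._search_regex(
            r'(https?://[^"\']+\.m3u8[^"\']*)', webpage, 'm3u8 url')
        formats = self._extract_m3u8_formats(m3u8_url, video_id, 'mp4')
        return {
            'id': video_id,
            'title': self._og_search_title(webpage, default=video_id),
            'formats': formats,
        }
```

In a fork you would also need to register the class (yt-dlp keeps its extractor list in `yt_dlp/extractor/_extractors.py`) so the site gets picked up automatically.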

[–] tubbadu@lemmy.kde.social 1 points 10 months ago

I actually use yt-dlp to download m3u8 playlists, but what I'm looking for is a way to extract the m3u8 file URL, so that I can give it to yt-dlp to download the actual video.

I'll look into the extractor docs, seems interesting! Thanks!
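
A small sketch of that hand-off, assuming the m3u8 URL has already been extracted (for example by the browser-automation snippet above). It uses yt-dlp's Python API rather than the CLI; the output template and referer header are illustrative choices, not requirements.

```python
# Sketch: pass an already-extracted m3u8 URL to yt-dlp for the actual download.
# The output template and referer header are assumptions; some hosts refuse the
# playlist request unless the original page URL is sent as the referer.
import yt_dlp


def download_from_m3u8(m3u8_url: str, referer: str) -> None:
    opts = {
        "outtmpl": "%(title)s.%(ext)s",          # output filename template
        "http_headers": {"Referer": referer},    # some CDNs require the page as referer
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        ydl.download([m3u8_url])
```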

[–] 7Sea_Sailor@lemmy.dbzer0.com 0 points 10 months ago (1 children)

I sure hope they commit the work back into the main repo...

[–] ISOmorph@feddit.de 7 points 10 months ago

I doubt they would be able to. I was interested and looked around a bit: there's a whole chapter in their documentation dedicated to adding extractors, and the first paragraph of that chapter is basically "do not add piracy websites".