Why not use what the client requested?
I was hoping to avoid going full AI. Unfortunately these are the YT-type videos that AI is completely taking over, so unless I want to spend a week on this I think I'll have to. Was just trying to exhaust every avenue before going that route.
Yeah, I get that. But it seems it's pretty well suited for the task.
You can probably create a similar workflow using ComfyUI, though it will require time and effort.
Honestly it's looking that way. I just needed to try everything else first for my own principles. Appreciate the kindness
There’s no getting around using AI for some of this, like subtitle generation
Eh... yes there is: you can pay actual humans to do that. In fact, if you do "subtitle generation" (whatever that might mean) without any editing, you are taking a huge risk. Sure, it might get 99% of the words right, but if it fucks up on the main topic... well, good luck.
Anyway, if you do still want to go down that road, you could try:
- ffmpeg with whisper.cpp (though honestly I'm not convinced hardcoding subtitles is a good practice; why not package them as soft subs in e.g. .mkv? Depends on context obviously, see the sketch below)
- Kdenlive with Vosk
- Kdenlive with whatever else via *.srt, *.ass, *.vtt or *.sbv formats
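If it helps, here's roughly what that first option looks like on the command line. This is a rough sketch, not a recipe: filenames and the model path are placeholders, and depending on your whisper.cpp build the binary may be called main or whisper-cli.

```sh
# extract mono 16 kHz audio, which whisper.cpp expects
ffmpeg -i input.mp4 -ar 16000 -ac 1 -c:a pcm_s16le audio.wav

# transcribe to subs.srt (model path depends on your whisper.cpp setup)
./main -m models/ggml-base.en.bin -f audio.wav -osrt -of subs

# mux the .srt into an .mkv as a soft subtitle track instead of hardcoding it
ffmpeg -i input.mp4 -i subs.srt -c copy output.mkv
```

That keeps the subtitles toggleable, and you can still burn them in later with ffmpeg's subtitles filter if the platform demands it.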
I'd love to pay someone, or I'd just transcribe it myself if it wouldn't take so long. I'm new to VE so I'm learning as I go. I do audio and the editing process seems fairly transferable; it's the barrage of movement and transitions in these that I'm struggling not to spend a week on. I'm doing this as a favour so outsourcing isn't an option. I'll be checking over the subtitles anyway; generating just saves a bunch of time before a full pass over it.
I'd rather not have hardcoded subs at all, but these are the "no attention span" style videos for YT (constant zooms, transitions, big subtitles, etc) that I have to mimic. Honestly I hate the style haha, but it is what it is. The style "gets traction" on social media.
I'm quickly realizing why these videos use AI, it's a tonne of work without it for very little pay. I was just hoping to use as little of it as possible and trying to avoid going with Descript.
Anyway, appreciate you taking the time. I got some sub generation working with Kdenlive, but it's looking like I either have to bite the bullet with Descript or just transcribe it myself. Cleaning up the generated subs looks to be as much work as just transcribing a handful of frames at a time.
I think Kdenlive has subtitle transcription, which you can enable and configure in settings.
Finally got that working; had to run the AppImage instead of the Flatpak for it to work. Now I just gotta see if I can mimic the font haha. Thanks
I know that's not a ready-to-use solution, but Blender has a very powerful Python API which should allow you to automate everything, including calls to an AI backend of your choice if needed.
Interesting. I'm struggling to get transcription add-ons to work in Blender. I've never installed Python script stuff, so I don't know if I screwed something up. Every time I try transcription it either just stops around 95% or crashes with:
Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
Aborted (core dumped)
Do you have a suggestion of where I can get started learning about what you're talking about?
I think this libcudnn is an Nvidia CUDA thing. I guess you have checked that the correct CUDA libs are installed and Blender has permission and knows where to look for them?
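If not, a couple of quick checks from a terminal (assuming the usual Nvidia driver stack; the cuDNN path below is a placeholder):

```sh
# what driver and CUDA version the system reports
nvidia-smi

# is any libcudnn visible to the dynamic linker?
ldconfig -p | grep libcudnn

# if cuDNN lives somewhere non-standard, point the linker at it before launching Blender
export LD_LIBRARY_PATH=/path/to/cudnn/lib:$LD_LIBRARY_PATH
```

If ldconfig finds nothing, the add-on is probably bundling a CUDA build of some transcription backend that expects cuDNN 9 to be installed system-wide.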
A first start for learning the Blender Python API would be its documentation: https://docs.blender.org/api/current/index.html
In general you can skip anything you can already do through the user interface. But video editing is just a very small part of the API, and if you don't have any programming experience yet, this could be overkill for what you are looking for.
Perhaps someone has had the same problem as you before and implemented something. Maybe searching explicitly for Blender video editing automation or the Python API will turn up some results.
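To give a feel for it, here's a minimal sketch of driving Blender's sequencer from Python. It assumes Blender 3.x and placeholder file paths; run it from the Scripting tab or with blender --background --python script.py.

```python
import bpy

scene = bpy.context.scene
scene.sequence_editor_create()  # make sure the scene has a sequencer
seqs = scene.sequence_editor.sequences

# drop a clip onto channel 1
seqs.new_movie(name="clip", filepath="/path/to/input.mp4",
               channel=1, frame_start=1)

# overlay a big subtitle-style text strip on channel 2
text = seqs.new_effect(name="sub1", type='TEXT', channel=2,
                       frame_start=1, frame_end=120)
text.text = "Hello from the Python API"
text.font_size = 96
text.location = (0.5, 0.1)  # normalized x/y, near the bottom centre

# render the result
scene.frame_end = 120
scene.render.filepath = "/path/to/output.mp4"
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'
bpy.ops.render.render(animation=True)
```

From there it's basically a loop over your .srt timestamps to spawn one text strip per subtitle line.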
Honestly I'm new to Linux as of about 3 months ago, so it's been a bit of a learning curve on top of learning VE haha. I didn't realize CUDA had versions, let alone was anything other than an acronym for using the GPU (Nvidia for me), and I now figure CUDA is probably why DaVinci Resolve isn't working right. Kdenlive's search for GPU over CPU had CUDA versions listed (mine's 12.0; it was searching for 12.3, 12.4, 12.5, etc.), which made me realize CUDA and the Nvidia drivers are versioned separately.
So long story short, no, I haven't checked that beyond looking for how to update CUDA haha. I really appreciate you taking the time, I'll look into the Python route next. One thing I love about Linux: I'm constantly learning.