Shimitar

joined 1 year ago
[–] Shimitar@downonthestreet.eu 35 points 5 days ago

LLMs are just a tool, and their usage will only increase over time.

There is nothing intrinsically bad in that. As with any tool, it's not bad per se; what matters is how we use it.

So push for ethical and proper usage of AI instead.

Projects will use AI more and more, and there's nothing bad in that, provided it's used properly: vetted, tested, verified and so on.

[–] Shimitar@downonthestreet.eu 3 points 6 days ago

I also wrote mine: a bit of JavaScript that loads a JSON file and populates the page.
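For flavour, a hypothetical sketch of that approach (the file name, element id and item fields are all made up for illustration): a pure render() function turns the JSON into HTML, and fetch() wires it to the page.

```javascript
// Turn an array of {url, title} entries into an HTML list.
// The item fields are assumptions, not the actual site's schema.
function render(items) {
  return "<ul>" +
    items.map(i => `<li><a href="${i.url}">${i.title}</a></li>`).join("") +
    "</ul>";
}

// In the browser (commented out here, since it needs a DOM):
// fetch("links.json")
//   .then(r => r.json())
//   .then(items => {
//     document.getElementById("content").innerHTML = render(items);
//   });
```

Keeping render() pure makes the page trivially regenerable whenever the JSON changes.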

[–] Shimitar@downonthestreet.eu 5 points 6 days ago (1 children)

AFAIK Joplin is FOSS, but be aware that its Markdown format is not compatible with... Markdown. Funnily enough.

[–] Shimitar@downonthestreet.eu 6 points 6 days ago

I think you are just ragebaiting anyway.

[–] Shimitar@downonthestreet.eu 17 points 6 days ago (3 children)

It's like putting an Audi engine inside a Ford and complaining that making it run is hard.

Complain to the Cyberpunk developers, who developed only for Windows and not for Linux.

I find it incredible that you can even try to run a Windows game on Linux.

Try running a native Android app on an iPhone, or vice versa...

[–] Shimitar@downonthestreet.eu 2 points 1 week ago* (last edited 1 week ago)

Yes, more redundancy is good and indeed worth having. Still, five 12 TB drives are probably more energy and heat efficient than ten 4 TB ones.

Even if I got ten 4 TB drives for free I wouldn't use them. Maybe a couple for backup or cold storage, but not active 24/7 in a domestic RAID environment.

I actually have four 6 TB HDDs that I retired in favour of the four 8 TB SSDs; I use two for local backup and keep two as spares to replace them when they fail.

Four 8 TB drives in RAID5 provide 24 TB of usable space, which is far more than I need, and the risk of a double failure is mitigated by a proper 3-2-1 backup strategy.

As for the higher I/O, frankly I never felt the need. A 1 Gbps home network is always the bottleneck anyway, and if you require that kind of disk throughput over your network, you are probably doing something wrong.

Even many simultaneous 4K video streams would saturate your LAN before saturating your disks, unless you store uncompressed video.
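A quick back-of-the-envelope check of that claim, with assumed figures (a high-quality 4K stream taken as roughly 50 Mbit/s, a single SATA SSD as roughly 550 MByte/s sequential):

```javascript
// All figures are rough assumptions, just to show the orders of magnitude.
const lanMbps = 1000;          // 1 Gbps home network
const streamMbps = 50;         // assumed bitrate of one high-quality 4K stream
const ssdMbps = 550 * 8;       // one SATA SSD (~550 MB/s) expressed in Mbit/s

const streamsLanCarries = Math.floor(lanMbps / streamMbps);  // streams that fill the LAN
const streamsSsdFeeds = Math.floor(ssdMbps / streamMbps);    // streams one SSD could feed

console.log(streamsLanCarries, streamsSsdFeeds); // 20 vs 88
```

So under these assumptions the network tops out at roughly a quarter of what even a single SSD can feed it.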

[–] Shimitar@downonthestreet.eu 2 points 1 week ago (2 children)

10 x 4 TB = 40 TB can also be achieved with four 12 TB drives (actually 36 TB usable in RAID5).

I doubt those 12 TB drives use much more power than the 4 TB ones, each, so the €28/month probably drops to around €14/month, generously estimated.

Over 120 months (10 years) of uptime, you should save enough to justify cutting down from 10 to 4 drives.
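The arithmetic behind that, spelled out (both monthly figures are the assumptions from the comments above, not measurements):

```javascript
const costTenDrives = 28;   // EUR/month for the 10-drive setup (from the thread)
const costFourDrives = 14;  // EUR/month assumed for 4 larger drives
const months = 120;         // 10 years of 24/7 uptime

const saved = (costTenDrives - costFourDrives) * months;
console.log(saved);         // EUR saved over the decade
```

That headroom is what can pay for the larger (more expensive per unit) drives.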

[–] Shimitar@downonthestreet.eu 7 points 1 week ago* (last edited 1 week ago) (4 children)

I wouldn't use more than 4 or 6 disks in a home environment. Especially with mechanical drives, 24/7 power consumption would get me very worried.

I run 4 x 8 TB SSDs: not cheap, but solid, low power AND low heat (even more important).

Consider heat dissipation too: at home you most likely don't have constant temperature and humidity, so many spinning disks can suffer from heat, and that will kill them faster.

Longevity... with so much space I would expect to keep it running a decade or more, so factor in 10 x 365 x 24 hours of operation: energy consumed, heat dissipated and failure rate.

On top of that, whatever GPU and RAM you throw at it is almost meaningless; anything will work, even an Intel N100 NUC. Having enough cables and ports instead... well.

[–] Shimitar@downonthestreet.eu 2 points 1 week ago (1 children)

Yes, I was trying to be funny.... Don't worry... ;)

[–] Shimitar@downonthestreet.eu 2 points 1 week ago (3 children)

Was it.... Made with AI?

[–] Shimitar@downonthestreet.eu 2 points 1 week ago (5 children)

Thanks! Tomorrow I will upload it to my wiki...

[–] Shimitar@downonthestreet.eu 2 points 1 week ago (7 children)

Ahahah, I like the "zero AI" logo idea; maybe I will use AI to create one... :)

Yes, I am that bad with graphics.

Anyway, check the main page of the wiki, where I explain why I did it.

 

Hi all, I am quite an old fart, so I only recently got excited about self-hosting an AI, some LLM...

What I want to do is:

  • chat with it
  • eventually integrate it into other services, where needed

I read about Ollama, but it's all still unclear to me.

Where do I start, preferably with containers (but bare metal is also fine)?

(I already have a Linux server rig with all the good stuff on it, from Immich to Forgejo to the *arrs and more, reverse proxy, WireGuard and the works. I am looking for input on AI/LLMs, what to self-host and such, not general self-hosting hints.)
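Since containers are preferred, here is a minimal Docker Compose sketch of the kind of setup I'm after (the ollama/ollama image name and port 11434 come from the Ollama docs; the host volume path is an assumption):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"                 # HTTP API: chat here, and point other services at it
    volumes:
      - ./ollama-data:/root/.ollama   # downloaded models persist here
    restart: unless-stopped
```

Then something like `docker compose exec ollama ollama run llama3` should pull a model and drop into a chat, and other services can later integrate against the API on port 11434.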

15
Spotify sync web gui (downonthestreet.eu)
submitted 10 months ago* (last edited 10 months ago) by Shimitar@downonthestreet.eu to c/selfhosted@lemmy.world
 

Hi fellow selfhosters!

I pay (I know, I know) for Spotify Premium, and I would like to progressively build my self-hosted music collection, leveraging the fact that I am a paying customer; I would hate it if they pulled songs out from under me over time.

Any good self-hostable approach here? Ideally, the flow would be:

  • I listen to Spotify on my mobile devices, adding songs to playlists and such
  • my self-hosted setup syncs those playlists
  • ... and downloads those songs from Spotify itself, using my paid-for Premium account
  • It doesn't really need to be web-based; I can access my server and run anything CLI-based, or even a plain old GUI (Linux).

I don't want fake solutions that use Google Music or Deezer to download; I pay Spotify and expect to somehow be able to download 320 kbps music from it.

The overall process can be manual, but better automated.

I already have Lidarr, but it's basically impossible to download the same music with it, at least not the music I listen to.

A viable workaround could be something that rebuilds my Spotify playlists from the music I have downloaded with Lidarr, maybe notifying me of what is missing...

EDIT: somebody pointed out this is against Spotify's TOS. Anyway, I found a solution using Spotizerr, a self-hosted web app that does exactly what I was looking for. You still need a paid Spotify account unless you want to download low-res tracks from Deezer.

 

As the title says, conduwuit has been forked as Tuwunel, which is labelled as the "successor with stable governance".

Love open source! Glad to see real Matrix server alternatives keep pushing forward.

I will switch to it as soon as it's available. It will be, of course, 100% upgradeable from conduwuit.

 

I host a Minecraft Bedrock server used by the family to play, from PS4 and Android.

Adding a Windows client, do I need to pay again to play? I mean, the price of the Windows Minecraft client is... unbelievable. And we already purchased the Android client and the PS4 client...

I tried to look around for a cracked Windows client, but with no luck.

Is it possible? Is anybody running a cracked Minecraft client on Windows? No need for online play, except connecting to our self-hosted server...

111
Self-hosting minecraft (downonthestreet.eu)
submitted 1 year ago* (last edited 1 year ago) by Shimitar@downonthestreet.eu to c/selfhosted@lemmy.world
 

Hi! I want to self-host a Minecraft server for my kid and his friends. I haven't played Minecraft in quite a few years...

Where do I start to self host one?

I am already self-hosting lots of stuff, from the *Arrs to Jellyfin and Immich and more, so I am not asking how to do it technically, but where to look and what to host for a proper Minecraft server!

Edit: I chose to set up https://github.com/itzg/docker-minecraft-bedrock-server and so far it's been super smooth and easy peasy!
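For reference, the setup boils down to a Compose file along these lines (the image name, EULA variable and UDP port are from that project's README; the host path is an assumption):

```yaml
services:
  bedrock:
    image: itzg/minecraft-bedrock-server
    environment:
      EULA: "TRUE"              # must be set to accept the Minecraft EULA
    ports:
      - "19132:19132/udp"       # Bedrock clients connect over UDP
    volumes:
      - ./bedrock-data:/data    # world data and server config persist here
    restart: unless-stopped
```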

 

Hi fellow self-hosters.

Almost one year ago I experimented with Immich and found, at the time, that it was not up to par with what I was expecting from it. Basically, my use case was slightly different from the Immich user experience.

After all this time I decided to give it another go, and I am amazed! It has grown a lot; it now has all the features I need that were lacking at the time.

So, in just a few hours I set it up and configured my external libraries, backups, storage template and OIDC authentication with Authelia. Everything works.

Great kudos to the devs, who are doing amazing work.

I have documented all the steps of the process at the link on top of this post; I hope it can be useful for someone.

16
submitted 1 year ago* (last edited 1 year ago) by Shimitar@downonthestreet.eu to c/selfhosted@lemmy.world
 

I have a remote VPS that acts as a WireGuard server (keys omitted):

[Interface]
Address = 10.0.0.2/24
[Peer] # self host server
AllowedIPs = 10.0.0.1/32

(The VPS is configured to be a router from wg0 to its WAN via nft masquerading.)

And I have another server, my self-host server, which connects to the VPS through WireGuard. I use the WireGuard tunnel as a port-forwarder, with some nft glue on the VPS side to "port forward" my port 443:

[Interface]
Address = 10.0.0.1/24
[Peer]
AllowedIPs = 10.0.0.2/24

(omitted the nft glue)

My self-hosted server's default route goes through my home ISP, and that must remain the case.

Now, on the self-host server I have one specific user whose outgoing traffic I need to route through the WireGuard tunnel, because I need to make sure its traffic appears to originate from the VPS.

The way I usually handle this is with a couple of iproute2 commands that create a user-specific routing table and assign a different default route to it (uid=1070):

ip rule add uidrange 1070-1070 lookup 1070
ip route add default via 192.168.0.1 dev eno1 table 1070

(This is the case that works, using eno1 as the default gateway for user 1070: traceroute 8.8.8.8 shows user 1070 going through eno1, while every other user goes through the default gateway.)

If I try the same using the wg0 interface, it doesn't work:

ip rule add uidrange 1070-1070 lookup 1070
ip route add default via 10.0.0.2 dev wg0 table 1070

This doesn't work: WireGuard refuses to let packets through, with an error like:

ping 8.8.8.8
From 10.0.0.1 icmp_seq=3 Destination Host Unreachable                                            
ping: sendmsg: Required key not available 

I tried changing my self-host server's AllowedIPs like this:

[Interface]
Address = 10.0.0.1/24
[Peer]
AllowedIPs = 10.0.0.2/24, 0.0.0.0/0

and it works! User 1070 can route through WireGuard. BUT... now this works too well, because ALL my self-host server traffic goes through wg0, which is not what I want.

So I tried to stop WireGuard from messing with the routing tables:

[Interface]
Address = 10.0.0.1/24
Table = off
[Peer]
AllowedIPs = 10.0.0.2/24, 0.0.0.0/0

and manually added the routes for user 1070 as above (repeated for clarity):

ip rule add uidrange 1070-1070 lookup 1070
ip route add default via 10.0.0.2 dev wg0 table 1070

The default route no longer gets replaced, but now, without any error, the packets for user 1070 simply don't get routed: ping 8.8.8.8 as user 1070 just hangs.

I am at a loss... any suggestions?

(Edited for clarity and a few small errors.)

 

Hi all.

I have been hosting my own mail (not "self" as in at home, but on a rented server on the 'net) for the last 20 years, the good old way: Postfix + Dovecot + OpenDKIM/DMARC/SpamAssassin and all the glue and bells.

Having the opportunity to rethink the entire approach (which works fine, but is pretty cumbersome and complex to replicate), I was looking at the Stalwart mail server, which looks promising and nice, being written in Rust following modern principles and such.

To anybody who has been using Stalwart: is it good? Does it deliver as a solid mail server?

To people hosting their own mail: is there a better solution out there?

To people commenting against hosting one's own mail server: please refrain from doing so, as I have been doing it successfully for the past 20 years, and that's what I will keep doing for the foreseeable future as well.

 

UPDATE: after many comments, let me be clear that I have nothing against systemd at a technical level. It indeed solves issues that people had, and it found its way into most mainstream distros for good reasons, besides being pushed by Red Hat and Debian, which left basically every other mainstream distro without much choice. I never used it long enough to judge it, and I don't intend to judge it from a technical point of view. I am worried that such a central piece of technology, deeply intertwined with Linux, is under the direct control of IBM and Microsoft (the employer of the systemd lead). This might mean nothing, or it could be important for the long-term future of Linux's freedom.

I have recently been exposed to a lot of stuff against systemd.

I know it's an old debate that has inflamed people for a long time; I am not looking to restart it, and I never took a stance on it in the past anyway.

I am myself an almost-30-year power user of Linux, and I have never used systemd much, since it never fixed any issues I had with the previous approaches; being a happy Gentoo user, I always loved the freedom to just keep using OpenRC and never bothered with systemd.

I like the Unix approach, and at the same time "if it is not broken, don't fix it" is my basic idea. So my approach to systemd has not been one of dislike, but rather "I don't care, I don't need it". And I never needed it anyway.

After reading through most of the links below, I am starting to think that maybe my stance should be more than simply technical.

What are other Lemmy users' ideas on all this?

I didn't know about Microsoft taking over the Linux Foundation either, and I am getting concerned about the real freedom behind my beloved Linux.

TLDR: I don't dislike systemd, I never cared about systemd. Do I need to start caring now, due to all these non-technical issues?

Note: I am copying the following article verbatim to stress that these are not my personal opinions and that I didn't do proper research on the topic, except reading (most of) the links below.


(The following is a post on the #libreware telegram channel on the 7th/8th of February 2025)

Lennart Poettering intends to replace "sudo" with #systemd's run0. Here's a quick PoC to demonstrate root permission hijacking by exploiting the fact "systemd-run" (the basis of uid0/run0, the sudo replacer) creates a user owned pty for communication with the new "root" process.

This isn't the only bug of course, it's not possible on Linux to read the environment of a root owned process but as systemd creates a service in the system slice, you can query D-BUS and learn sensitive information passed to the process env, such as API keys or other secrets.

https://fixupx.com/hackerfantastic/status/1785495587514638559

Nitter mirror: https://xcancel.com/hackerfantastic/status/1785495587514638559

Here are some links about #systemd #alternatives for #Linux in no particular order. Which are your favorite alternatives and distros?

https://suckless.org/sucks/systemd/

https://unixsheikh.com/articles/the-real-motivation-behind-systemd.html

https://sysdfree.wordpress.com/

https://nosystemd.org/

https://skarnet.org/software/systemd.html

https://the-world-after-systemd.ungleich.ch/

https://ewontfix.com/14/

https://forums.debian.net/viewtopic.php?t=120652

https://www.devuan.org/os/announce/

https://www.devuan.org/os/init-freedom

https://thehackernews.com/2019/01/linux-systemd-exploit.html

https://judecnelson.blogspot.com/2014/09/systemd-biggest-fallacies.html

https://chiefio.wordpress.com/2016/05/18/systemd-it-keeps-getting-worse/

https://systemd-free.artixlinux.org/why.php

Some more added here too: https://start.me/p/Kg8keE/priv-sec

#systemd #Linux

 

Hi all!

This is my first post from my self-hosted Lemmy instance!

Thanks all you guys who gave me suggestions and help!

Hope you can see it, BTW :)
