IsoKiero

joined 1 year ago
[–] IsoKiero@sopuli.xyz 9 points 6 days ago

Not that it's really relevant for the discussion, but yes. You can do that, with or without chroot.

That's obviously not the point, but we're already comparing apples and oranges with chroot and containers.

[–] IsoKiero@sopuli.xyz 2 points 1 week ago

All of those are still standing on Firefox's shoulders, and the actual rendering engine of a browser isn't really a trivial thing to build. Sure, they're not going away, and Firefox will likely be around for quite a while too, but the world wide web as we currently know it is changing, and Google and Microsoft are a few of the bigger players pushing the change.

If you're old enough you'll remember the 'Best viewed with…' banners, and it's not too far off from the future we'll have if the big players get their wishes. Things like Google's suite, whatever Meta is offering, and pretty much "the internet" as your Joe Average understands it want to implement technology where it's not possible to block ads or modify the content you're shown in any other way. And it's not too far off before your online banking and other services with very real-life consequences start putting boundaries in place that require a certain level of 'security' from your browser, and you can bet that anything which allows content modification, like an adblocker, won't qualify for the new standards.

In many places it's already illegal to modify or tamper with DRM-protected content in any way (does anyone remember libdvdcss?), and the plan is to apply similar (more or less) restrictions to the whole world wide web. That would mean we'd have things like the fediverse, which allow browsers like Firefox, and then 'the rest', like banking, flight/ticket/hotel/whatever booking sites, big news outlets and so on, which only allow the 'secure' version of a browser. And that of course has very little to do with actual security; they just want control over your device and over what content is fed to you, whether you like it or not.

[–] IsoKiero@sopuli.xyz 2 points 1 week ago

I have no idea about cozy.io, but just to offer another option, I've been running Seafile for years and it's a pretty solid piece of software. And while it does offer more than just file storage/sharing, it's mostly about files and nothing else. The Android client isn't the best one around, but it gets the job done (background tasks, at least on mine, tend to freeze now and then); on desktop it just works.

[–] IsoKiero@sopuli.xyz 4 points 2 weeks ago

I have absolutely zero insight into how the foundation and their financing work, but in general it tends to be easier to green-light a one-time expense than a recurring monthly payment. So it might be just that: a year's salary up front to get the gears turning again, while buying some time to fit the 'infinite' running cost into plans/forecasts/everything.

[–] IsoKiero@sopuli.xyz 5 points 3 weeks ago

It depends. I've run small websites and other services on an old laptop at home. It can be done. But you need to realize the risks that come with it. If the thing I'm running for fun goes down, someone might be slightly annoyed that it isn't accessible all the time, but it doesn't harm anyone's business. If someone's livelihood depends on the thing, the stakes are a lot higher and you need to take suitable precautions.

You could of course offload the whole hardware side to amazon/hetzner/microsoft/whoever and run your services on leased hardware, which simplifies things a lot, but you still run into the problem that you need to meet more or less arbitrary specs for an email server before Microsoft or Google will even accept what you're sending, you need monitoring and staff available to keep things running all the time, you have to plan for backups and other disaster recovery, and so on. So it's "a bit" more than just 'apt install dovecot postfix apache2' on a Debian box.
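To give an idea of the 'arbitrary specs': before the big providers will take your mail, your DNS generally needs at least SPF, DKIM and DMARC records, plus a PTR record matching your HELO name. A minimal sketch, where example.com, the 'mail' selector and the key are placeholders, not a real setup:

```
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<your public key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

And even with all of that in place they can still throttle or junk you on reputation alone.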

[–] IsoKiero@sopuli.xyz 16 points 3 weeks ago (4 children)

Others have already mentioned the challenges on the software/management side, but you also need to take into account hardware failures, power outages, network outages, acceptable downtime and so on. So even if you could technically shoehorn all of that into a Raspberry Pi and run it on a windowsill (and I suppose it would run pretty well), you risk losing all of the data the moment someone spills coffee on the thing.

So, if you really insist on doing this on your own hardware and maintenance (and want to do it properly), you'd be looking at (at least):

  • 2 servers for redundancy, preferably a 3rd one lying around for a quick swap
  • A pretty decent UPS setup, again with multiple units for redundancy
  • Routers, network hardware, internet uplinks and everything else at least duplicated and configured correctly to keep things running
  • A separate backup solution in at least two different physical locations, so a few more servers with their network, power and other needs taken care of
  • Monitoring and an alerting system in case of failures, plus someone on call 24/7 (see the sketch after this list)

And likely a ton of other stuff I can't think of right now. So: 10k for hardware, two physical locations, and maintenance personnel available all the time. Or you can buy website hosting (a VPS even, if you like) for a few bucks a month and an email service for 10/month (give or take), and have the services running, backed up and taken care of for far longer than your own hardware's lifetime, at a lot less than the cost of that hardware alone.
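On the monitoring point: even a minimal self-hosted setup needs something that notices when it's down. A bare-bones sketch of the kind of check you'd run from cron every few minutes; the URL and the alert address are placeholders, not a real setup:

```
#!/bin/sh
# Minimal availability check: send mail when the site stops answering.
# URL and recipient are placeholders; a real setup would also track state
# so it doesn't re-alert on every run.
URL="https://example.com/health"
if ! curl -fsS --max-time 10 "$URL" >/dev/null 2>&1; then
    echo "$URL failed at $(date -u)" | mail -s "ALERT: site down" oncall@example.com
fi
```

And even this assumes your outbound mail still works while the site is down, which is exactly why real monitoring lives on separate infrastructure.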

[–] IsoKiero@sopuli.xyz 3 points 3 weeks ago (1 children)

I live in Europe. No unpaid overtime here, and productivity requirements are reasonable, so there's no way to blame my tools for that. And even if my laptop's OS broke itself completely, I'd still be productive while reinstalling, as keeping my tools in running shape is also in my job description. So, as long as I'm not just scratching my balls and scrolling Instagram reels all day long, that's not a concern.

[–] IsoKiero@sopuli.xyz 6 points 3 weeks ago (3 children)

I'm currently more of a generic sysadmin than a Linux admin, as I do both. But the 'other stuff' at work revolves around Teams, Office, Outlook and things like that, so I'm running Win11 with WSL and it's good enough for what I need from a workstation. There's technically a policy in place that only Windows workstations are supported, but I suppose I could run Linux (and I have a separate laptop for Linux-only stuff). In the current environment it's just not worth the hassle, specifically since I need to maintain Windows servers too.

So I have my terminals, Firefox and whatever else I need, plus the mandated office suite and malware protection/IDR/IDS by the book, and in my mindset I'm using company tools for company jobs. If they take longer or could be more efficient, it's not my problem; I'll just browse my (personal) cellphone while the throbber spins on the screen, and I get paid to do that.

If I switched to Linux I'd need to personally keep my system up to spec, and I wouldn't have any kind of helpdesk available should I ever need one. So it's just simpler to stick with what the company provides; if it's slow, it's not my headache, and I've accepted that mindset.

[–] IsoKiero@sopuli.xyz 1 points 3 weeks ago

The package file, no matter if it's rpm, deb or something else, contains a few things: files for the software itself (executables, libraries, documentation, default configuration), dependencies on other packages (as in: to install software A you also need to install library B) and installation scripts for the package. There's also some metadata, info for uninstallation and things like that, but that's mostly irrelevant for the end user.

And then you need a suitable package manager: dpkg for deb packages, rpm (the program) for rpm packages and so on. That's why you mostly can't run Debian packages on Fedora or the other way around. With derivative distributions, like Kubuntu and Lubuntu, it's different: they use Ubuntu's packages but ship a different default package selection and default configuration. Technically it would be possible to build a Kubuntu package which depends on some library version that isn't in Lubuntu, making the packages incompatible, but I'm almost certain that isn't the case with those two specifically.
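If you want to poke around yourself, the package tools will show all of this; for example (foo.deb/foo.rpm are just placeholder names):

```
dpkg -I foo.deb    # control info: version, dependencies, maintainer scripts
dpkg -c foo.deb    # files the package would install
rpm -qpR foo.rpm   # dependencies of an rpm package
rpm -qpl foo.rpm   # files inside an rpm package
```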

And then there are things like Linux Mint, which was originally based on Ubuntu, but at least at some point they had builds on top of both Debian and Ubuntu and thus different package selections. So there are a ton of nuances here, but for the most part you can ignore them; just follow the documentation for your specific distribution and you're good to go.

[–] IsoKiero@sopuli.xyz 2 points 3 weeks ago

Filtering incoming spam, while never 100% accurate, is a pretty straightforward thing to do. Use DNSBLs and other lists from Spamhaus and that takes care of 90+% of the problem. Incoming spam has not been a huge issue for me. But when people try to send mail to someone in the M365 cloud or on Gsuite and those providers just decide your server isn't important enough, they block you and that's it. Trying to circumvent that takes a ton of time and effort, and while it can be done, it's a huge pain in the rear. And fighting your way through 1st-tier support to someone who actually understands the problem and will attempt to fix it, while your customers are complaining that the "problem with email" is actually affecting their income, is the part I'll happily leave behind.
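The DNSBL part really is that simple. Assuming Postfix, a minimal sketch of the relevant main.cf bit would be something like:

```
# Reject clients listed on Spamhaus ZEN at SMTP time
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org
```

Getting your outgoing mail accepted is the hard direction, not this.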

I'll set up a couple of new VPS servers to host my own and my friends' email, but if they complain that the service I'm paying for out of my personal pocket isn't what they're after, they're free to switch to whatever they like. And as the infrastructure for that is something like 100€/year, I'll happily pay it myself so that no one gets to say 'I paid for this so you need to fix it' anymore. In a commercial setting that's obviously not an option, and I've had my share of running a business in a very hostile environment.

[–] IsoKiero@sopuli.xyz 6 points 3 weeks ago (2 children)

Also, if you're running an email server for others, it takes very little from a single source, like a small webshop newsletter that enough people manually mark as junk, and you're on a block list again. The latest round with Microsoft took several days to clear, even though all of their tools and 1st-tier support claimed that my IP wasn't on a blacklist.

I've jumped through all the hoops and done everything by the book, but that still doesn't mean any of the big players won't just screw you over because some automaton of theirs happens to decide so. That's why I'm shutting my small ISP business down: there's no more money to be made, and a ton of customers have moved to the cloud anyway, mostly to Microsoft due to their office-suite pricing. It was kind of fun while it lasted, but that ship has sailed.

[–] IsoKiero@sopuli.xyz 29 points 4 weeks ago (17 children)

A phobia, by definition, is an uncontrollable, irrational and lasting fear of something. In the current geopolitical situation I'd say this is not uncontrollable and very much not irrational. Fear, as a fellow Finn, might be a bit strong a word, but it's definitely a concern.

When I first read that, I thought the response was a bit harsh, as Russian (and Soviet) individuals have traditionally been a big part of the open source community and their achievements in computing are pretty significant. But when you dig a bit deeper, a majority of the Soviet-era achievements were actually built by Ukrainians in Kyiv (obviously Ukraine as a country wasn't a thing back then).

Also, based on my very limited view of the matter, Russians are not banned from contributing; this is more of a statement that anyone working for the Russian government can't be part of the kernel development team. There are of course legal reasons for that, very much including the trade sanctions against Russia, but also the moral side of it, which Linus seems to take a stand on.

Personally, I've seen individuals in Russia pull off quite amazing feats with both hardware and software, but as none of us exists in a void free of external influence, I think that, while harsh, the "sanctions" (for lack of a better word) aren't overshooting anything; they're leveling the playing field. Any Joe Anonymous could write code which compromises the kernel as a whole, but should that Joe live in Russia, behind him might stand a government-backed team whose resources let it hide its tracks on quite a different level than any individual could ever dream of.

So, while that decision might slow down some implementations and might exclude some very capable developers, the fear that one of them could corrupt the whole project isn't unreasonable, and with the ongoing sanctions in place (and the legal requirements that follow) the core dev team might not even have a choice here.

In the current global environment, I'd rather have slightly too careful management than one which doesn't take things seriously enough. We already have Canonical and others breaking stuff way too often; we don't need a malicious government expanding on that for nefarious purposes, which could compromise a shit-ton of stuff on a very fundamental level if left unattended.

 

I've spent far longer than expected setting up a VLAN on my network for IoT devices which I don't want to have access to the internet. I'm running an RB4011iGS+ router with RouterOS 6.48.4, and what I thought was a simple change took the whole network down for a while.

Granted, I'm not the most skilled network admin around, but I have built networks in the past and I'm (partly) maintaining them at work, yet apparently I'm approaching this from the wrong angle somehow.

The current setup is a single subnet (172.17.0.0/24) where the Mikrotik handles firewalling and DHCP, without VLANs. The WAN side has an SFP module for the uplink, a couple of bridged ports off that to provide raw internet to my server, some static mappings in DHCP and things like that; pretty basic stuff. Other hardware includes Unifi access points, a managed switch and various devices which just connect to the network.

Now I'd like to add a VLAN (id 20, not that it matters) to the setup so I can have another /24 subnet for IoT devices. What I thought would be enough: take a couple of ports off the existing LAN bridge, create a new bridge, set up a VLAN interface with an IP and a DHCP server, then connect a tagged port to my switch and a laptop to an untagged port for testing, and configure the switch so that I can put another SSID on the access points in that VLAN and connect a couple of other things directly to the switch.
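Roughly the config I tried, as a simplified sketch (interface names and addresses are placeholders):

```
# new bridge for the IoT ports, VLAN interface with an IP on top of it
/interface bridge add name=bridge-iot
/interface bridge port add bridge=bridge-iot interface=ether5
/interface vlan add name=vlan20 vlan-id=20 interface=bridge-iot
/ip address add address=172.17.20.1/24 interface=vlan20
# DHCP for the new subnet
/ip pool add name=pool-iot ranges=172.17.20.10-172.17.20.250
/ip dhcp-server add name=dhcp-iot interface=vlan20 address-pool=pool-iot disabled=no
/ip dhcp-server network add address=172.17.20.0/24 gateway=172.17.20.1
```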

There are plenty of guides around the net, but when I followed them I ended up in a situation where the untagged port just would not work with ARP. I can dump traffic on my laptop with Wireshark and see the ARP 'who-has' requests going out, but the Mikrotik won't reply to them no matter what I do. The same of course goes for DHCP requests and all traffic in general. When I ping the laptop from the router, the laptop receives the ARP query and responds, but sniffing traffic on the Mikrotik port, the reply just disappears somewhere. It doesn't matter if I have the switch in between untagging the VLAN for the port, or connect the cable directly to the Mikrotik, or even move the laptop onto VLAN 20 and use that as a test setup.
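For what it's worth, I've been watching the Mikrotik side with the built-in sniffer while pinging from the router (the port name here is a placeholder):

```
/tool sniffer quick interface=ether5
```

That's how I can see the queries going out and the replies going nowhere.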

What I'm currently assuming is that the problem lies with the untagged "general" network I'm running, as if VLAN 20 and no-VLAN traffic were somehow fundamentally incompatible on RouterOS. But that seems kind of backwards.

The end goal is to have a trunk port on the router and on the switch and distribute VLANs to ports as needed. Or even one port for general use and another for the VLAN networks. Maybe someone here is more experienced with RouterOS and could point me in the right direction?

 

This question has already come around a couple of times, but I haven't found an option which would allow multiple users on multiple OSes (Linux and Windows mostly; mobile support, both Android and iOS, would be nice at least for viewing) to conveniently share the same storage.

This has been an issue on my network for quite some time, and now that I've rebuilt my home server, installed TrueNAS in a VM and started organizing my collections there with Shotwell, the question has become acute again.

Digikam seems promising for everything except organizing the actual files (which I can live with; either Shotwell or a shell script can sort them by EXIF dates), but I haven't tried it with Windows yet, and my Kubuntu desktop seems to only have a snap package of it, without support for an external SQL database.

On "editing" part it would be pretty much sufficient to tag photos/folders to contain different events, locations and stuff like that, but it would be nice to have access to actual file in case some actual editing needs to be done, but I suppose SMB-share on truenas will accomplish that close enough.

The other need-to-have feature is managing RAW and JPG versions of the same image, at least somehow. Even just removing the JPGs and keeping only the RAW images would be sufficient.
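For that last part even a dumb shell loop would do. A rough sketch, assuming Canon .CR2 files and that each RAW/JPG pair shares a base name (both assumptions are mine; adjust to your camera):

```
#!/bin/sh
# Delete the JPG wherever a RAW version of the same shot exists.
# .CR2 and the shared-basename convention are assumptions; adjust as needed.
for raw in *.CR2; do
    [ -e "$raw" ] || continue          # directory has no RAW files at all
    jpg="${raw%.CR2}.JPG"
    [ -e "$jpg" ] && rm -v -- "$jpg"
done
```

But a proper tool that understands the pairing would obviously be nicer.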

And finally, I'd really like to have the actual files lying on a network share (or somewhere similar) so that they're easy to back up and to copy to an external Nextcloud for sharing, and so that I have more flexibility in the future in case something better comes along or my environment changes.
