submitted on 25 Jan 2024 by Kalcifer@sh.itjust.works to c/linux@lemmy.ml

I've spent some time searching this question, but I have yet to find a satisfying answer. The majority of answers that I have seen state something along the lines of the following:

  1. "It's just good security practice."
  2. "You need it if you are running a server."
  3. "You need it if you don't trust the other devices on the network."
  4. "You need it if you are not behind a NAT."
  5. "You need it if you don't trust the software running on your computer."

The only answer that makes any sense to me is #5. #1 leaves a lot to be desired, as it advocates for doing something without thinking about why you're doing it -- it is essentially a non-answer. #2 is strange -- why does it matter? If one is hosting a webserver on port 80, for example, they are going to forward port 80 on their router to open that server's port to the public. What difference does it make to then have another firewall on the host in which the same port has to be opened? #3 is a strange one -- what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there's nothing to access. #4 feels like an extension of #3 -- only, in this case, the device is most likely exposed to a larger group. #5 is the only one that makes some sense: if you install a program that you do not trust (you don't know how it works), you don't want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be the door into your device, or a spy on your device's actions.

If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in. People seem to treat it as if it's acting like the front door to a house, but this analogy doesn't make much sense to me -- without a house (a service listening on a port), what good is a door?
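
(For what it's worth, this is how I check whether anything is listening at all; ss ships with iproute2 on most distros:)

```
# list all listening TCP/UDP sockets and the processes that own them
# (sudo is needed to see process names for services run by other users)
sudo ss -tulnp
```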

[–] h3ndrik@feddit.de 7 points 10 months ago* (last edited 10 months ago) (16 children)

You're right. If you don't open up ports on the machines, you don't need a firewall to drop packets sent to closed ports; the system rejects them anyway. So you only need it if your software opens ports that shouldn't be reachable from the internet. Or you don't trust the software to handle things correctly. Or things might change and you or your users install additional software and forget about the consequences.

However, a firewall does other things, too. For example, forwarding traffic. Or, in conjunction with fail2ban, blocking people who try to guess SSH passwords and connect to your server multiple times a second.
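
To illustrate the fail2ban part, a minimal jail for SSH looks roughly like this (a sketch; the values and paths are examples, check your distro's defaults):

```
# a minimal sketch of /etc/fail2ban/jail.local, written via a heredoc
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled = true
# ban a client for 1 hour after 5 failed logins within 10 minutes
maxretry = 5
findtime = 10m
bantime = 1h
EOF
sudo systemctl restart fail2ban
```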

Edit:

  1. “It’s just good security practice.” => Nearly every time I've heard that, people followed it up with silly recommendations or were selling snake oil.
  2. “You [just] need it if you are running a server.” => I'd say it's more like the opposite. A server is much more of a controlled environment than, let's say, a home network with random devices and people installing random stuff.
  3. “You need it if you don’t trust the other devices on the network.” => True. I could, for example, switch your smart-home lights on and off, disable the alarm and burgle your home, or print 500 pages.
  4. “You need it if you are not behind a NAT.” => Common fallacy: "if A then B" doesn't mean "if B then A". Truth is, if you are behind a NAT, it already does some of the jobs a firewall does (dropping unsolicited incoming traffic); see the sketch below this list.
  5. “You need it if you don’t trust the software running on your computer.” => True
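
To illustrate point 4: this is roughly the part of the job that a NAT does for you incidentally, written out as nftables rules (just a sketch with arbitrary table/chain names; don't paste it blindly over SSH, since the drop policy takes effect before the accept rules exist):

```
# default-drop on input: unsolicited incoming packets are discarded,
# replies to connections this host initiated are still allowed
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input iif lo accept
nft add rule inet filter input ct state established,related accept
```
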
[–] wolf@lemmy.zip 2 points 10 months ago (3 children)

You're right. If you don't open up ports on the machines, you don't need a firewall to drop packets sent to closed ports; the system rejects them anyway.

Sorry, hard disagree.

I assume you are assuming:

  1. You know about all open ports at all times, which is usually not the case.
  2. There are no bugs/errors in the network stacks or in the services with open ports (e.g. you assume a port is only reachable from localhost).
  3. There are no timing attacks, which can easily be mitigated by a firewall.
  4. The software one uses does not transitively trigger/start other services which then open ports you are not even aware of without constant port scanning.

I agree with your point that a server is a more controlled environment. Even then, as you pointed out, you want to rate-limit bad login attempts via firewall/fail2ban etc., for the simple reason that even a fully updated SSH server might use a weak key (because of errors/bugs in software/hardware during key generation), and to prevent timing attacks.

In summary: IMHO it is bad advice to tell people they don't need a firewall, because it is demonstrably wrong and just confuses people like OP.

[–] h3ndrik@feddit.de 7 points 10 months ago* (last edited 10 months ago) (1 children)

Sure, maybe I've worded things too factually and not differentiated between theory and practice. But,

  1. "you know everything": I've said that. Configurations might change or you you don't pay enough attention: A firewall adds an extra layer of security. In practice people make mistakes and things are complex. In theory where everything is perfect, blocking an already closed port doesn't add anything.
  2. "There are no bugs in the network stack": Same applies to the firewall. It also has a network stack and an operating system and it's connected to your private network. Depends on how crappy network stacks you're running and how the network stack of the firewall compares against that. Might even be the same as on my VPS where Linux runs a firewall and the services. So this isn't an argument alone, it depends.
  3. Who migitates for timing attacks? I don't think this is included in the default setup of any of the commonly used firewalls.
  4. "open ports you are not even aware of": You open ports then. And your software isn't doing what you think it does. We agree that this is a use-case for a firewall. that is what I was trying to convey with the previous argument no 5.

Regarding the summary: I don't think I want to advise people not to use a firewall. I thought this was a theoretical discussion about the individual arguments. And it's complicated and confusing anyway. Which firewall do you run? The default Windows firewall is a completely different thing and setup than nftables on a Linux server that closes everything and only opens ports you specifically allow. Next question: how do you configure it? And where do you even run it? On a separate host? Do you always rent two VPSes? Do you only do perimeter security for your LAN and run a single firewall? Do you additionally run firewalls on all the connected computers in the network? Does that replace the firewall in front of them?

What other means of security protection did you implement? As we said, a firewall won't necessarily protect against weak passwords and keys. And it might have no connection to the software that gets brute-forced and thus just forward the attack. In practice it's really complicated and it always depends on the exact context. It is good practice not to allow everything by default, but to take the approach of blocking everything and explicitly configuring exceptions, like a firewall does. It's not the firewall but the concept behind it that helps.
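
With something like ufw, that block-everything-and-allow-explicitly approach is only a handful of commands (a sketch; the opened ports are just examples for a typical web host):

```
# deny all incoming traffic by default, allow outgoing,
# then open only the ports you consciously decided to expose
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
```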

[–] wolf@lemmy.zip 0 points 10 months ago (1 children)

I think I get you and the 'theory vs. practice' point you make is very valid. ;-) I mean, in theory my OS has software w/o bugs, is always up-to-date and 0-days do not exist. (Might even be true in practice for a default OpenBSD installation regarding remote vulnerabilities. :-P)

Who mitigates timing attacks? I don't think this is included in the default setup of any of the commonly used firewalls.

fail2ban absolutely mitigates a subset of timing attacks in its default setup. ;-)

LIMIT is a high-level concept which can easily be applied with ufw; I don't know about the default setups of other commonly used firewalls.

If someone exposes something like SSH or anything else without fail2ban/LIMIT, IMHO that is grossly incompetent.
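
For the record, the ufw version of LIMIT for SSH is a single rule; IIRC the default threshold is six or more connection attempts from one address within thirty seconds, but check man ufw for your version:

```
# rate-limit new connections to the SSH port instead of allowing them unconditionally
sudo ufw limit 22/tcp
```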

You are totally right, of course firewalls have bugs/errors/misconfigurations... BUT... if you are using a Linux firewall, chances are good that the firewall has been reviewed/attacked/pen-tested more often and more thoroughly than almost all other services reachable from the internet. So, if I have to choose between a potential attacker first hitting well-tested and well-maintained firewall software or a MySQL server which got no love from Oracle and lives in my distribution as an outdated package, I'll put my money on the firewall every single time. ;-)

[–] h3ndrik@feddit.de 5 points 10 months ago* (last edited 10 months ago)

Thank you for pointing out that my arguments don't necessarily apply to reality. Sometimes I answer questions too directly. And the question wasn't "should I use a firewall", or I would have answered with "probably yes."

I think I have to make a few slight corrections: I think we use the word "timing attack" differently. To me, a timing attack is something that relies on the exact order or the timing/spacing of the packets that arrive. I was thinking of something like what TOR does, where it shuffles packets around, waits a few milliseconds, merges them, or pads them so they all have the same size. Brute-forcing something isn't exploiting the exact time at which a certain packet arrives; it's just sending many of them, and the other side lets the attacker try an indefinite number of passwords. I wouldn't put that in the same category as timing attacks.

Firewall vs MySQL: I don't think that is a valid comparison. The firewall doesn't necessarily look into the packets and detect that someone is running a SQL injection; the two do very different jobs. And if the firewall doesn't do deep packet inspection or rate limiting or something, it just forwards the attack to the service and it passes through anyway. MySQL probably isn't a good example either, since it rarely should be exposed to the internet in the first place. I've configured MariaDB to listen only on the internal interface and not respond to packets from other computers. Additionally, I didn't open the port in the firewall, but MariaDB isn't listening on the external interface anyway.

Maybe a better comparison would be a webserver with HTTPS. The firewall can't look into the packets because the traffic is encrypted. It can't tell an attack apart from a legitimate request and just forwards them both to the webserver, so it's the same with or without a firewall. Or you terminate the encrypted traffic at the firewall and do packet inspection or complicated heuristics. But that shifts the complexity (including potential security vulnerabilities in complex code) from the webserver to the firewall. And it's a niche setup that also isn't well tested. And you need to predict the attacks: if your software has known vulnerabilities that won't get fixed, this is a valid approach, but you can't know future attacks.
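
For reference, one common way to do that MariaDB part is a single option that binds it to loopback; the drop-in path below is the Debian/Ubuntu layout, and the service may be called mysql instead of mariadb depending on the distro:

```
# make MariaDB listen on the loopback interface only
sudo tee /etc/mysql/mariadb.conf.d/99-local-only.cnf > /dev/null <<'EOF'
[mysqld]
bind-address = 127.0.0.1
EOF
sudo systemctl restart mariadb
```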

Having a return channel from the webserver/software to the firewall so the application can report an attack and order the firewall to block the traffic is a good thing. That's what fail2ban is for. I think it should be included by default wherever possible.

I think there is no way around using well-written software if you expose it to the internet (like a webserver or a service that is used by other people). If it doesn't need to be exposed to the internet, don't expose it; any means of ensuring that is alright. For crappy software that is exposed and needs to be exposed, a firewall doesn't do much. The correct tools for that are virtualization, containers, VPNs, and replacing that software... Maybe also the firewall, if it can tell good and bad actors apart by some means. But most of the time that's impossible for the firewall to tell.

I agree. You absolutely need to do something about security if you run services on the internet. I do, and I have run a few services. Webservers (especially with a WordPress install or some other commonly attacked CMS), SSH, and Voice-over-IP servers get bombarded with automated attacks; it's all over the logs. Same for Remote Desktop, Windows network shares, and IoT devices. If I disable fail2ban, the attackers ramp up the traffic and I can see attacks scroll through the logfiles all day.

I think a good approach is:

  1. Choose safe passwords and keys.
  2. Don't allow people to brute-force your login credentials.
  3. If you don't need a service, deactivate it entirely and remove the software.
  4. If you just need a service internally, don't expose it to the internet. A firewall will help, and most software I use can be configured to either listen for external requests or not. Also configure your software to listen only on localhost (127.0.0.1), or only on the LAN that contains the other things that tie into it. Doing it at two distinct layers helps if you make a mistake, something happens by accident or through complexity, or a security vulnerability arises (or you're not in complete control of everything and every possibility). You can verify what is actually reachable from outside; see the sketch after this list.
  5. If only some people need a service, either make it as secure as a public service or hide it behind a VPN.
  6. Perimeter security isn't the answer to everything. The subject is complex and we have to look at the context. Generally it adds to security, though.
  7. If you run a public service, do it right. Follow state of the art security practices. It's always complicated and depends on your setup and your attackers. There are entire books written about it, people dedicate their whole career to it. For every specific piece of software and combination, there are best practices and specific methods to follow and implement. Lots of things aren't obvious.
  8. Do updates and backups.
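
And, as mentioned in point 4, it's worth verifying from a different machine what is actually reachable; something like the following, where the address is a documentation-range placeholder you'd replace with your own host (and only scan machines you're allowed to scan):

```
# scan all TCP ports from the outside and compare the result
# with what you intended to expose
nmap -p- 203.0.113.10
```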