submitted 10 months ago* (last edited 10 months ago) by Kalcifer@sh.itjust.works to c/linux@lemmy.ml
 

I've spent some time searching this question, but I have yet to find a satisfying answer. The majority of answers that I have seen state something along the lines of the following:

  1. "It's just good security practice."
  2. "You need it if you are running a server."
  3. "You need it if you don't trust the other devices on the network."
  4. "You need it if you are not behind a NAT."
  5. "You need it if you don't trust the software running on your computer."

The only answer that makes any sense to me is #5.

#1 leaves a lot to be desired, as it advocates for doing something without thinking about why you're doing it -- it is essentially a non-answer.

#2 is strange -- why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router's NAT at port 80 to open that server's port to the public. What difference does it make to then have another firewall that needs to be port forwarded?

#3 is a strange one -- what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there's nothing to access.

#4 feels like an extension of #3 -- only, in this case, it is most likely a larger group that the device is exposed to.

#5 is the only one that makes some sense; if you install a program that you do not trust (you don't know how it works), you don't want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be the door to get into your device, or a spy on your device's actions.

If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in. People seem to treat it as if it's acting like the front door to a house, but this analogy doesn't make much sense to me -- without a house (a service listening on a port), what good is a door?
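As a concrete check of the "nothing listening, nothing to access" point: listing the sockets that are actually in a listening state shows what an outside scan could even talk to (the flags below are the common iproute2 ones; the exact output differs per distro):

ss -tlnp    # TCP sockets in LISTEN state, numeric addresses, owning process
ss -ulnp    # the same for UDP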

[–] Kalcifer@sh.itjust.works 1 points 9 months ago (6 children)

for c d and e one might also want to filter some outgoing connection…

Is there any way to reliably do this in practice? There's no way of really knowing which outgoing source ports are being used, as they are chosen at random when the connection is made, and, if the device is to be practically used at all, some outgoing destination ports must be allowed as well, e.g. DNS, HTTP, HTTPS, etc. What other methods are there to filter malicious connections originating from the device using a packet filtering firewall? There is the option of using a layer 7 firewall like OpenSnitch, but, for the purpose of this post, I'm mostly curious about packet filtering firewalls.

one could also use an ip filtering firewall to keep logs small by disallowing those who obviously have intentions you dislike (fail2ban i.e.)

This is a fair point! I hadn't considered that.

[–] smb@lemmy.ml 1 points 9 months ago (5 children)

you do not need to know the source ports for filtering outgoing connections.

(i usually use "shorewall" as a nice and handy wrapper around iptables, with a "reject everything else" policy once i have configured everything as i want. so i only occasionally use iptables directly; if my examples don't work, i might simply have the exact syntax wrong)

something like:

iptables -I OUTPUT -p tcp --dport 22 -j REJECT

should prevent all new tcp connections TO ssh ports on other servers when initiated locally (the forward chain is again another story)
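(for completeness, a rough sketch of such a "reject everything else" OUTPUT policy, again better tried on a VM first; which destination ports you actually need and whether the conntrack match is available are assumptions about your setup:)

iptables -A OUTPUT -o lo -j ACCEPT                                        # local traffic stays allowed
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT   # replies to already accepted connections
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT                            # dns
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT                           # https
iptables -A OUTPUT -j REJECT                                              # everything else gets rejected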

so ... one could run an http/s proxy (e.g. squid) under a specific user account and block all outgoing connections except those of that proxy; then every program that wants to connect somewhere using direct ip connections would have to use that proxy instead.

better try this first on a VM on your workstation, not your server in a datacenter:

iptables -I OUTPUT -j REJECT
iptables -I OUTPUT -p tcp -m owner --uid-owner squiduser -j ACCEPT

"-I" inserts at the beginning, so that the second -I actually becomes the first rule in that chain allowing tcp for the linux user named "squiduser" while the very next would be the reject everything rule.

here i also assume "squiduser" exists, and hope i recall the syntax for owner match correctly.

then create user accounts within squid for all applications (that support using proxies), with precise acls for the fqdns these squid-users are allowed to connect to.

there are possibilities to intercept regular tcp/http connections and "force" them to go through the http proxy, but when it comes to https and not-already-known domains the programs would connect to, things become way more complicated (search for "ssl interception"): the client program/system needs to trust "your own" CA first, for example.

so the concept is to disallow everything with iptables, then allow things in a more fine-grained way via the http proxy, where the proxy users have to authenticate first. this way your weather desktop applet may connect to w.foreca.st if configured, but not to e.vili.sh, as that would not be included in its user's acl.
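(a very rough squid.conf sketch of that idea; the user name, password file, helper path and domain are only made-up examples, and the auth helper path differs between distros:)

cat >> /etc/squid/squid.conf <<'EOF'
# basic auth against a local password file
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
# one squid user per application, each pinned to the fqdns it may reach
acl weatherapp proxy_auth weatherapp
acl weatherdst dstdomain w.foreca.st
http_access allow weatherapp weatherdst
http_access deny all
EOF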

this setup would not prevent everything applications could do to connect to the outside world: a locally configured email server could probably be abused, or even DNS would still be available to evil applications to "transmit" data to their home servers, but that's a different story and an abuse of your resolver or forwarder, not of the tcp stack. there exists a library to tunnel tcp streams through dns requests and their answers, a bit creepy, but possible and already prepared. and only using a http-only proxy does not prevent tcp streams like ssh; i think a simple tcp-through-http-proxy-tunnel software was called "corkscrew" or similar and would go straight through a http proxy, but it would need the other end of the tunnel software to be up and running.
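(if you want to at least narrow that dns escape route, the same owner-match trick can be reused for port 53, assuming a local resolver such as unbound runs under its own user; the user name and the udp-only focus are just assumptions here:)

iptables -I OUTPUT -p udp --dport 53 -j REJECT                               # nobody talks dns directly ...
iptables -I OUTPUT -p udp --dport 53 -m owner --uid-owner unbound -j ACCEPT  # ... except the local resolver (tcp/53 analogous)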

much could be abused by malicious software if it gets executed on your computer, but in general, preventing simple outgoing connections is possible and more or less easy, depending on what you want to achieve.

[–] Kalcifer@sh.itjust.works 1 points 9 months ago (4 children)

should prevent all new tcp connections TO ssh ports on other servers when initiated locally (the forward chain is again another story)

But the point that I was trying to make was that this would then also block you from using SSH. If you want to connect to any external service, you need to open a port for it, and if there's an open port, then there's an opening for unintended escape.

so … one could run an http/s proxy (e.g. squid) under a specific user account and block all outgoing connections except those of that proxy; then every program that wants to connect somewhere using direct ip connections would have to use that proxy instead.

I don't fully understand what this is trying to accomplish.

[–] smb@lemmy.ml 1 points 9 months ago (1 children)

But the point that I was trying to make was that this would then also block you from using SSH. If you want to connect to any external service, you need to open a port for it, and if there's an open port, then there's an opening for unintended escape.

now i have the feeling there might be a misunderstanding of what "ports" are and what an "open" port actually is. Or i just don't get what you want. i am not on your server/workstation, thus i cannot even try to connect TO an external service "from" your machine. i can do so from MY machine to other machines as i like, and if those allow me, but you cannot do anything against that unless that other machine happens to be actually yours (or you own a router that happens to be on my path to where i connect to).

let's try something. your machine A has an ssh service running, my machine B has ssh, and another machine C has ssh.

users on the machines are a, b and c: the machine letters, but lowercase. what should be possible and what not? like:

"a can connect to B using ssh"
"a can not connect to C using ssh (forbidden by A)"
"a can not connect to C using ssh (forbidden by C)"
[...]
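(just to make one of those lines concrete: "a can not connect to C using ssh (forbidden by A)" could be a single OUTPUT rule on A, with 192.0.2.30 standing in as a made-up address for C:)

iptables -I OUTPUT -p tcp -d 192.0.2.30 --dport 22 -j REJECT   # A refuses to open ssh connections to C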

so what is your scenario? what do you want to prevent?

I don’t fully understand what this is trying to accomplish.

accomplish control (allow/block/report) over who or what on my machine can connect to the outside world (using http/s) and to exactly where, independent of ip addresses: domains are used to allow or deny per user/application + domain combination, without having to update ip-based rules that could quickly become outdated anyway.

[–] Kalcifer@sh.itjust.works 1 points 9 months ago (2 children)

now i have the feeling there might be a misunderstanding of what "ports" are and what an "open" port actually is. Or i just don't get what you want. i am not on your server/workstation, thus i cannot even try to connect TO an external service "from" your machine.

This is most likely a result of my original post being too vague -- which is, of course, entirely my fault. I was intending it to refer to a firewall running on a specific device. For example, a desktop computer with a firewall, which is behind a NAT router.

so what is your scenario? what do you want to prevent?

What is your example in response to? Or perhaps I don't understand what it is attempting to clarify. I don't necessarily have any confusion regarding setting up rules for known and discrete connections like SSH.

accomplish control (allow/block/report) over who or what on my machine can connect to the outside world (using http/s) and to exactly where, independent of ip addresses: domains are used to allow or deny per user/application + domain combination, without having to update ip-based rules that could quickly become outdated anyway.

Are you referring to an application layer firewall like, for example, OpenSnitch?

[–] smb@lemmy.ml 2 points 9 months ago

This is most likely a result of my original post being too vague – which is, of course, entirely my fault.

Never mind; i got distracted and carried away a bit from your question by the course the messages had taken.

What is your example in response to?

i thought it could possibly help clarify something; sort of it did, i guess.

Are you referring to an application layer firewall like, for example, OpenSnitch?

no, i do not consider a proxy like squid to be an "application level firewall" (but i don't know opensnitch, however); i would just limit outbound connections to some fqdns per authenticated client and ensure the connection only goes to where the fqdns actually point to. like, an attacker could create a weather applet that "needs" https access to f.oreca.st, but implements a backdoor that silently connects to a static ip using https. with such a proxy, f.oreca.st would be available to the applet, but the other ip would not, as it is not included in the acl, neither as fqdn nor as an ip. if you like to say this is an application layer firewall, ok, but i don't think so; to me it's just a proxy with acls, one that only checks for the allowed destination and whether the response has some http headers (like 200 ok), but not really more. yet it can make it harder for some attackers to gain the control they are after ;-)

[–] smb@lemmy.ml 2 points 9 months ago* (last edited 9 months ago)

so here are some reasons for having a firewall on a computer that i did not read in the thread (could have missed them). i had already written this but then lost the text again before it was saved :( so here is a compact version:

  • having a second layer of defence, to prevent some of the direct impact of e.g. supply chain attacks like "upgrading" to a maliciously manipulated version.
  • control things tightly and report strange behaviour as an early warning sign 'if' something happens, no matter if attacks or bugs.
  • learn how to tighten security and know better what to do in case you need it some day.
  • sleep more comfortably, knowing what you have done or prevented.
  • compliance with some laws, or with customers' buzzword-matching wishes.
  • the fun of doing it because you can.
  • getting in touch with real-life side quests that you would never be aware of if you did not actively practice by hardening your system.

one side quest example i stumbled upon: imagine an attacker has compromised the vendor of a software you use on your machine. this software connects to some port eventually, but pings the target first before doing so (whatever! you say). from time to time the ping does not go to the correct 11.22.33.44 of the service (weather app maybe) but to 0.11.22.33. looks like a bug, you say, never mind.

could be something different. pinging an IP that does not exist ensures that the connection tracking of your router keeps the entry until it expires, opening a time window that is much easier to hit even if clocks are a bit out of sync.

also, the attacker knows the IP that gets pinged (but it's an outbound connection to an unreachable IP, you say, what could go wrong?)

let's assume the attacker knows the external IP of your router by other means (e.g. you've sent an email to the attacker and your freemail provider hands over your external router address to him inside a received header of the email, or the manipulated software updates a dyndns address, or the attacker just guesses that your router has an address from your provider's dial-up range, no matter what.)

so the attacker knows when and from where (or what range) you will ping an unreachable IP address, and in exactly what timeframe (the software runs from cron, or in user space, and pings the "buggy" IP address at exact times). then, within that timeframe, the attacker sends an icmp unreachable packet to your router's external address and puts the known buggy IP in the payload as the address that is unreachable. the router matches the payload of the packet, recognizes it is related to the known connection tracking entry and forwards the icmp unreachable to your workstation, which in turn gives your application the information that the attacker's IP address reports the buggy IP 0.11.22.33 as unreachable.

as the source IP of that packet is the IP of the attacker, the software can then open a TCP connection to that IP on port 443 and follow the instructions the attacker sends to it. sure, the attacker needs that backdoor to already exist and run on your workstation, and to know or guess your external IP address, but the actual behaviour of the software looks normal, a bit buggy maybe, and there is exactly no information within the software about where the command and control server would be, only that it would respond to the icmp unreachable packet it eventually receives.

all connections are outgoing, but the attacker "connects" to his backdoor on your workstation through your NAT "firewall" as if it did not exist, while hiding the backdoor behind an occasional ping to an address that does not respond, either because the IP does not exist, or because it cannot respond due to a DDoS attack on the 100% sane IP that actually belongs to the service the app legitimately connects to, or due to a maintenance window the provider of the manipulated software officially announces. the attacker just needs the IP to not respond, or to respond slooowly, to increase the timeframe for connecting to his backdoor on your workstation before your router deletes the connection tracking entry of that unlucky ping.

if you don't understand how that example works, that is absolutely normal, and i might be bad at explaining, too. thinking out of the box, around corners that only sometimes are corners to think around, and only under very specific circumstances that could happen by chance or could be directly or indirectly under the control of the attacker, while only revealing the attacker's location in the exact moment of connection, is not an easy task and can really destroy the feeling of achievable security (aka the belief to have some "control"). but this is not a common attack vector, only maybe an advanced one.
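(one small thing a host firewall can still contribute against tricks like that is visibility: logging unexpected inbound icmp errors at least leaves a trace to look at. the rule below is only a sketch, and the limit values are made-up examples:)

iptables -I INPUT -p icmp --icmp-type destination-unreachable -m limit --limit 6/minute -j LOG --log-prefix "icmp-unreach: "   # log (not block) icmp unreachables as an early-warning signal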

sometimes side quests can be more "informative" than the main course ;-) so i would put that ("learn more", not the example above) as the main good reason to install a firewall and other security measures on your pc, even if you'd think you're okay without it.
