[–] 486@kbin.social 2 points 6 months ago (1 children)

> Another option is subpaths: xyz.ddns.net/portainer

While you can do that, you should be aware of the security implications (every application can see and modify every other application's cookies). If at all possible, I would try to avoid this setup.
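To make the cookie issue concrete, here is a minimal sketch using Python's stdlib CookieJar (which applies the same host-scoping rules a browser does); the hostname, app paths and cookie value are made up for illustration:

```python
from email.message import Message
from http.cookiejar import CookieJar
from urllib.request import Request

class FakeResponse:
    """Just enough of a response object for CookieJar.extract_cookies()."""
    def __init__(self, headers: Message):
        self._headers = headers
    def info(self):
        return self._headers

# Hypothetical cookie set by the app served under /portainer
headers = Message()
headers["Set-Cookie"] = "session=secret-token; Path=/"

jar = CookieJar()
jar.extract_cookies(FakeResponse(headers), Request("http://xyz.ddns.net/portainer/"))

# The very same cookie gets attached to a request for a different app on the same host
other = Request("http://xyz.ddns.net/nextcloud/")
jar.add_cookie_header(other)
print(other.get_header("Cookie"))  # -> session=secret-token
```

Cookies are scoped to the host, not to the path an application happens to live under, and the Path attribute is chosen by whoever sets the cookie, so it is not a security boundary between the applications.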

[–] 486@kbin.social 3 points 6 months ago

Oh, I didn't want to suggest that there is no value in using a reverse-proxy - there certainly is. Just don't expect it to do anything for you in terms of application security. The application behind it is just as exposed as it would be without a proxy, so if there is a security flaw in that application, the reverse-proxy does not help at all.

[–] 486@kbin.social 8 points 6 months ago (2 children)

I am not sure where this idea comes from, but putting a service behind a reverse-proxy does not increase its security in any way, unless you do authentication right at the reverse-proxy.
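To make the "authentication right at the reverse-proxy" part concrete, here is a minimal sketch of the idea in Python (stdlib only): the proxy rejects requests that don't carry valid credentials before anything is forwarded to the application. The backend address and credentials are made up, and in practice you would of course use the auth features of whatever reverse proxy you already run rather than hand-rolling one.

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://127.0.0.1:9000"                            # hypothetical upstream app
CREDENTIALS = base64.b64encode(b"admin:changeme").decode()   # hypothetical user:password

class AuthProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Authorization", "") != "Basic " + CREDENTIALS:
            # Rejected here - the request never reaches the application.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="proxy"')
            self.end_headers()
            return
        # Only authenticated requests are forwarded upstream.
        with urlopen(BACKEND + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AuthProxy).serve_forever()
```

With something like this in front, a flaw in the application is only reachable by whoever can authenticate at the proxy; without it, the proxy just passes everything straight through.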

[–] 486@kbin.social 2 points 6 months ago

No, even the earliest Ryzens support ECC reporting just fine, given the motherboard used supports it, which many boards do. Only the non-Pro APUs do not support ECC.

[–] 486@kbin.social 1 points 7 months ago (1 children)

You were talking about adversaries discovering the backdoor. That's something entirely different from compromised keys, so your sarcasm is quite misplaced here.

[–] 486@kbin.social 1 points 7 months ago (3 children)

In order to successfully implement a backdoor, you need to ensure that you are more clever than your adversaries, because those same backdoors can be used against you.

In this instance, that's not the case. Only those in possession of the right key can use the backdoor. Also, discovering infected systems from the outside appears to be impossible - the backdoor simply does not do anything to reveal itself if you don't have the key.

[–] 486@kbin.social 2 points 8 months ago (1 children)

Sure, cloud services can get quite expensive and I agree that using used hardware for self-hosting - if it is at least somewhat modern - is a viable option.

I just wanted to make sure the actual cost is understood. I find it rather helpful to calculate this for the systems I have in use. Sometimes it can actually make sense to replace some old hardware with newer stuff, simply because of the electricity cost savings of the newer hardware.

[–] 486@kbin.social 2 points 8 months ago* (last edited 8 months ago) (3 children)

Well, what they are stating is obviously wrong then. No need to use some website for that anyway, since it is so easy to calculate yourself.

[–] 486@kbin.social 2 points 8 months ago (5 children)

> Before anyone loses their minds, imagine you get the i3-8300T model that will peak at 25W, that’s about 0.375$ a month to run the thing assuming a constant 100% load that you’ll never have.

Not sure how you came to that conclusion, but even in places with very cheap electricity, it does not even come close to your claimed $0.375 per month. At 25 W you would obviously consume about 18 kWh per month. Assuming $0.10/kWh you'd pay $1.80/month. In Europe you can easily pay $0.30/kWh, so you would already pay more than $5 per month or $60 per year.
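If you want to redo this arithmetic for your own hardware, a few lines of Python are enough (the wattage and prices below are simply the numbers from this comment):

```python
def monthly_cost(watts: float, price_per_kwh: float, hours: float = 24 * 30) -> float:
    """Electricity cost of a device drawing `watts` continuously for a 30-day month."""
    kwh = watts * hours / 1000          # 25 W * 720 h = 18 kWh
    return kwh * price_per_kwh

print(f"{monthly_cost(25, 0.10):.2f}")  # 1.80 -> $1.80/month at $0.10/kWh
print(f"{monthly_cost(25, 0.30):.2f}")  # 5.40 -> $5.40/month, about $65/year
```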

[–] 486@kbin.social 2 points 8 months ago* (last edited 8 months ago)

Lots of answers about use-cases of additional wifi networks, so I won't go into that. I haven't seen the downsides mentioned here, though. While technically you can run lots of wifi networks off of the same wifi router/AP, each SSID takes a bit of airtime to broadcast. This might sound insignificant, since only a tiny bit of information is transmitted, but it is more significant than one might expect: the SSIDs are broadcast quite often, and for compatibility reasons they are always transmitted at the lowest possible rate, meaning they require a lot more airtime than normal wifi traffic would for the same amount of data. This is also the reason why it is a good idea to disable older wifi standards (such as 54 Mbit/s 802.11g) if no legacy clients need them.

Having two networks is usually fine and doesn't cause noticeable performance degradation; having four or more networks usually does, particularly in an already crowded area with lots of wifi networks.
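To put a rough number on it (these are typical ballpark figures - roughly 300-byte beacons, the default 102.4 ms beacon interval, 1 Mbit/s lowest rate - not measurements of any particular AP):

```python
BEACON_BYTES = 300          # typical beacon frame size, grows with SSID length and extra IEs
BEACON_RATE_MBPS = 1.0      # beacons go out at the lowest basic rate for compatibility
BEACON_INTERVAL_MS = 102.4  # default beacon interval (100 time units)

def beacon_airtime_share(num_ssids: int) -> float:
    """Fraction of airtime spent only on beacons for `num_ssids` on a single AP and band."""
    tx_time_ms = BEACON_BYTES * 8 / (BEACON_RATE_MBPS * 1000)  # ~2.4 ms per beacon
    return num_ssids * tx_time_ms / BEACON_INTERVAL_MS

for n in (1, 2, 4, 8):
    print(f"{n} SSID(s): ~{beacon_airtime_share(n):.1%} of airtime")
```

A couple of percent per SSID doesn't sound like much, but it applies per AP and per band, and every neighbouring network in range adds its own beacons on top, which is why it adds up quickly in crowded areas.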

[–] 486@kbin.social 2 points 8 months ago* (last edited 8 months ago)

For many li-ion laptop batteries, the manufacturer's configuration of a 100 % charge is pretty much equivalent to overcharging. I've seen many laptops over the years with swollen batteries, and almost all of them had been plugged in all the time, with the battery kept at 100 % charge.

As an electrical engineer you should know that technically there is no 100 % charge for batteries. A battery can more or less safely be charged up to a certain voltage. The 100 % charge point is something the manufacturer can choose (within limits depending on cell chemistry, of course). One manufacturer can choose a higher cell voltage than another to gain a little more capacity, at the cost of long-term reliability. There are manufacturers that choose a cell voltage of 4250 mV, and while that works okay if the battery is only charged up occasionally, keeping it plugged in at that voltage all the time pretty much ensures killing it rather quickly. I would certainly call that overcharging.

Since you already mentioned charging thresholds: anyone considering using a laptop as a server should absolutely make use of this feature and limit the maximum charge.
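On Linux, many (though not all) laptops expose this as a standard sysfs attribute; the battery name and the exact attributes supported vary by vendor, so take this as an illustration rather than a universal recipe:

```python
from pathlib import Path

THRESHOLD = 80  # stop charging at 80 %, a common compromise between capacity and longevity
node = Path("/sys/class/power_supply/BAT0/charge_control_end_threshold")

if node.exists():
    node.write_text(str(THRESHOLD))  # needs root
    print(f"Charge limit set to {THRESHOLD} %")
else:
    print("This battery/driver does not expose a charge threshold via sysfs")
```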
