tofubl

joined 2 years ago
[–] tofubl@discuss.tchncs.de 1 points 2 weeks ago

Airtable or nocodb might be suitable for this. Or Nextcloud Forms. But it's hard to advise since it's not clear whether your focus is on data entry or visualization.

[–] tofubl@discuss.tchncs.de 1 points 2 weeks ago

I'm low key on the lookout for something like this as well, to gain independence from mail providers, and I've had a browser tab for Mail Archiver open for a few months now but never got around to trying it out. Maybe this would solve your problem?

[–] tofubl@discuss.tchncs.de 4 points 3 weeks ago (2 children)

This looks friendly. I gave up setting up Authelia after my last attempt, but I might give it another go with this when motivation hits me. Some documentation for Traefik integration would be nice.

[–] tofubl@discuss.tchncs.de 1 points 1 month ago

I wasn't advocating getting a J4125 in 2025, I was sharing my experience with it. I can't confirm that it chokes on Jellyfin.

[–] tofubl@discuss.tchncs.de 3 points 1 month ago (4 children)

I'm doing everything you list and quite a bit more on a QNAP with a Celeron J4125. It has a fraction of the CPU performance you'll have, yet it's very capable of all the tasks I ask of it. 16 GB of memory is a good starting point, I think.

What does your build come out at?

[–] tofubl@discuss.tchncs.de 1 points 1 month ago* (last edited 1 month ago) (1 children)

"Just" some highly specific VM settings, in the end. I don't know much about that, and terms like qemu don't mean anything to me so I followed blog posts until it worked. (This one and maybe this one, I think.) It's possible that it is actually trivial.

It's been a while, but I can look up what I have when you need it. Feel free to ping me!

Yes, it was exactly that: once I got the NICs set up the way I wanted them, it was a breeze and everything just works. And I really like that I made every part work myself, no magic. I learned a lot, and wouldn't have, had I relied on Proxmox fiddling with the right parts for me.

[–] tofubl@discuss.tchncs.de 4 points 1 month ago* (last edited 1 month ago) (6 children)

I was in a similar spot not too long ago, setting up a firewall and general network box. I was going to go with Proxmox, but a fellow Lemmy guy strongly advocated for Incus on top of vanilla Debian. I was intrigued and ended up going for it. I learned a lot about networking with systemd (bridging, IP assignment and so on) for things I could have gotten for free in Proxmox (literally a few clicks), and had to fight Incus to work with a FreeBSD VM for OPNsense, but I love the setup now. Pure Debian with a few Incus VMs and Docker inside of those as needed. So clean!
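
If anyone wants a rough idea of what the systemd-networkd side looks like, here's a minimal sketch of a bridge setup (the NIC name `enp1s0`, the addresses and the file names are placeholders, not my exact config):

```ini
# /etc/systemd/network/10-br0.netdev: define the bridge device
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/20-uplink.network: enslave the physical NIC to the bridge
[Match]
Name=enp1s0

[Network]
Bridge=br0

# /etc/systemd/network/30-br0.network: give the bridge itself an address
[Match]
Name=br0

[Network]
Address=192.168.1.2/24
Gateway=192.168.1.1
DNS=192.168.1.1
```

Incus instances can then attach to the bridge with something like `incus config device add <instance> eth0 nic nictype=bridged parent=br0`.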

[–] tofubl@discuss.tchncs.de 1 points 2 months ago

As a first step, why don't you try to trigger a rescan: `sudo -u www-data php occ files:scan --all`

If that doesn't improve things, try to find and delete the image file the log complains about: `misc/m-t0627-01511-00434 (2).jpg`.

If there's still nothing after that, I'd try to hunt down the individual contacts in the DB.
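
If it comes to that, something along these lines helps to locate the right card first (a sketch assuming the default `oc_` table prefix; back up the database before deleting anything):

```sql
-- List cards per address book to narrow down the broken contact
SELECT c.id, c.uri, a.displayname AS addressbook
FROM oc_cards c
JOIN oc_addressbooks a ON a.id = c.addressbookid
ORDER BY a.displayname, c.uri;

-- Once you're confident you have the right row:
-- DELETE FROM oc_cards WHERE id = <id from the query above>;
```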

[–] tofubl@discuss.tchncs.de 1 points 2 months ago* (last edited 2 months ago) (2 children)

What happens if you delete this contact from the web UI?

Edit: Unclear whether the web UI is functional. If it isn't, try deleting it from the database directly.

[–] tofubl@discuss.tchncs.de 2 points 2 months ago

Using it to back up from a QNAP. Works very well and hassle-free. I'm using the QNAP backup app, but it would be just as easy with any other tool. Just make sure to encrypt the backups.

[–] tofubl@discuss.tchncs.de 0 points 4 months ago* (last edited 4 months ago)

We found more common ground and more things that separate us, too.

I agree with your idea of regulating social media, and I'd add that platforms should be mandated to open their walled gardens by implementing open protocols and forced to play nice with other platforms (said the guy on Lemmy.)

On the other hand, I strongly disagree with the notion that an addiction only hurts the addict. I'd argue that's never the case. On the contrary, alcoholism or gambling can drag whole families or more into poverty. Beyond that microcosm, and albeit more of a European concern, I suppose (although I wouldn't want it any other way), substance-related addictions are a huge cost factor for our social health system, costing the public purse (us, me) huge sums and taking up ever scarcer hospital beds and treatment slots.

Here comes my main point: history (especially yours, with the prohibition era) proves that outlawing substances doesn't work, and I'm not in favour of that either. But our minds are vulnerable to suggestion and manipulation, and advertising exploits that fact by, for example, creating associations between drinking or smoking and sexual desirability. This is well known, and it works too, or it wouldn't be the enormous industry it is. So why should we allow the manipulation of our desires towards something that is ultimately bad for EVERY part of society except the leeches directly profiteering from it? (I'm not even talking about the fact that children's minds are even more susceptible to this, yet are for the most part just as exposed to the same stimuli our adult ones are. One of the restrictions for wine/beer ads here in my country, by the way: not on daytime TV. Somewhat sensible at least.)

I wonder why you draw the line at medicine, by the way. What's the difference there for you?

Edit: Thanks for the respectful discussion, by the way. I appreciate it.

 

I have a home setup with private services and Wireguard to phone in from outside, and would sometimes like to be able to access some of these services from devices that don't have their own Wireguard client like an eBook reader.

Ideally, I would have Wireguard on my Android phone, create a WiFi hotspot and allow other devices to use that Wireguard connection. Out of the box this doesn't work. Does anybody know how to achieve it?

 

In my home network, I'm currently hosting a public facing service and a number of private services (on their own subdomain resolved on my local DNS), all behind a reverse proxy acting as a "bouncer" that serves the public service on a subdomain on a port forward.

I am in the process of moving the network behind a hardware firewall and separating it out, and would like to move the reverse proxy into its own VLAN (DMZ). My initial plan was to host the reverse proxy + authentication service in a VM in the DMZ, with firewall allow rules for only port 80 to the services on my LAN and everything else blocked.

On closer look, this now seems like a single point of failure that could expose private services if something goes wrong with the reverse proxy. Alternatively, I could have a reverse proxy in the DMZ only for the public service and another reverse proxy on the LAN for internal services.

What is everyone doing in this situation? What are best practices? Thanks a bunch, as always!

22
submitted 2 years ago* (last edited 2 years ago) by tofubl@discuss.tchncs.de to c/selfhosted@lemmy.world
 

Hi there, hoping to find some help with a naive networking question.

I recently bought my first firewall appliance, installed Opnsense and am going to use it with my ISP modem in bridge mode, but while I'm learning I added it to my existing LAN with a 192.168.0.0/24 address assigned to the WAN port by my current DHCP. On the firewall's LAN port I set up a 10.0.0.0/24 network and am starting to build up my services. So far so good, but there's one thing I can't get to work: I can't port forward the firewall's WAN IP to a service on the firewall's LAN network and I can't figure out why.

To illustrate, I would like laptop with IP 192.168.0.161 to be able to reach service on 10.0.0.22:8888 by requesting firewall WAN IP 192.168.0.136:8888.

Private IPs and bogons are permitted on the WAN interface and I have followed every guide I can find for the port forwarding, but the closest I have come to this working is a "connection reset" browser error.

Hope my question is clear and isn't very dumb. Thanks for the help or any explanation why I might be struggling to get this to work. Am I missing something obvious?


**UPDATE** The thread is all over the place, but I have made some progress:

  • The RDR rule gets triggered when requesting `192.168.0.136:8888` from `192.168.0.123`
  • Apache logs show `2024-02-09T17:39:17.056208857Z 192.168.0.123 - - [09/Feb/2024:17:39:17 +0000] "GET / HTTP/1.1" 200 161`
  • A tcpdump (in the spoiler below) on the Apache container looks inconspicuous to my untrained eye, with the exception of checksum errors in some packets from the docker container (`172.20.0.2`). The last five lines, after the second GET request (why is there a second GET request?), appear in tcpdump after a delay of about five seconds.
tcpdump

```
    192.168.0.123.54120 > 172.20.0.2.80: Flags [S], cksum 0xfdc5 (correct), seq 4106772895, win 64240, options [mss 1460,sackOK,TS val 1485594466 ecr 0,nop,wscale 7], length 0
17:45:14.918207 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    172.20.0.2.80 > 192.168.0.123.54120: Flags [S.], cksum 0x6d68 (incorrect -> 0x2fd7), seq 3999845366, ack 4106772896, win 65160, options [mss 1460,sackOK,TS val 1469298770 ecr 1485594466,nop,wscale 7], length 0
17:45:14.924098 IP (tos 0x0, ttl 62, id 63128, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [.], cksum 0x5b30 (correct), ack 3999845367, win 502, options [nop,nop,TS val 1485594472 ecr 1469298770], length 0
17:45:14.924102 IP (tos 0x0, ttl 62, id 63129, offset 0, flags [DF], proto TCP (6), length 134)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [P.], cksum 0x70f5 (correct), seq 4106772896:4106772978, ack 3999845367, win 502, options [nop,nop,TS val 1485594472 ecr 1469298770], length 82: HTTP, length: 82
        GET / HTTP/1.1
        Host: 192.168.0.136:8888
        User-Agent: curl/7.74.0
        Accept: */*

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
        <html>
         <head>
          <title>Index of /</title>
         </head>
         <body>
        <h1>Index of /</h1>
        <ul></ul>
        </body></html>

17:45:14.924119 IP (tos 0x0, ttl 64, id 34500, offset 0, flags [DF], proto TCP (6), length 52)
    172.20.0.2.80 > 192.168.0.123.54120: Flags [.], cksum 0x6d60 (incorrect -> 0x5ad1), ack 4106772978, win 509, options [nop,nop,TS val 1469298776 ecr 1485594472], length 0
17:45:14.924407 IP (tos 0x0, ttl 64, id 34501, offset 0, flags [DF], proto TCP (6), length 364)
    172.20.0.2.80 > 192.168.0.123.54120: Flags [P.], cksum 0x6e98 (incorrect -> 0x0a74), seq 3999845367:3999845679, ack 4106772978, win 509, options [nop,nop,TS val 1469298776 ecr 1485594472], length 312: HTTP, length: 312
        HTTP/1.1 200 OK
        Date: Fri, 09 Feb 2024 17:45:14 GMT
        Server: Apache/2.4.58 (Unix)
        Content-Length: 161
        Content-Type: text/html;charset=ISO-8859-1
17:45:14.929077 IP (tos 0x0, ttl 61, id 0, offset 0, flags [DF], proto TCP (6), length 40)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [R], cksum 0x1833 (correct), seq 4106772978, win 0, length 0
17:45:15.138862 IP (tos 0x0, ttl 62, id 63130, offset 0, flags [DF], proto TCP (6), length 134)
    192.168.0.123.54120 > 172.20.0.2.80: Flags [P.], cksum 0x701e (correct), seq 4106772896:4106772978, ack 3999845367, win 502, options [nop,nop,TS val 1485594687 ecr 1469298770], length 82: HTTP, length: 82
        GET / HTTP/1.1
        Host: 192.168.0.136:8888
        User-Agent: curl/7.74.0
        Accept: */*

17:45:15.138872 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40)
    172.20.0.2.80 > 192.168.0.123.54120: Flags [R], cksum 0xb48d (correct), seq 3999845367, win 0, length 0
17:45:19.995097 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 172.20.0.1 tell 172.20.0.2, length 28
17:45:19.995161 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 172.20.0.2 tell 172.20.0.1, length 28
17:45:19.995164 ARP, Ethernet (len 6), IPv4 (len 4), Reply 172.20.0.2 is-at 02:42:ac:14:00:02, length 28
17:45:19.995164 ARP, Ethernet (len 6), IPv4 (len 4), Reply 172.20.0.1 is-at 02:42:b8:07:c2:99, length 28
```


***

**UPDATE 2**
I see the exact same behaviour with a second VM with Apache installed directly on it instead of in a Docker container.

***
**UPDATE 3**
Thank you everybody for coming up with ideas. And thank you most of all to [@maxwellfire@lemmy.world](https://lemmy.world/u/maxwellfire): The culprit was the `Filter rule association` in my Port Forward settings which I had as `Add associated filter rule` but needs to be `Pass`. As soon as that is set, everything works.

The full solution is a NAT Port forwarding rule with filter rule "pass", an outbound NAT rule for hairpinning, and everything related to reflection turned off in Settings > Advanced. It's that easy! 😵‍💫
133
submitted 2 years ago* (last edited 2 years ago) by tofubl@discuss.tchncs.de to c/selfhosted@lemmy.world
 

Nextcloud seems to have a bad reputation around here regarding performance. It never really bothered me, but when a comment on a post here yesterday talked about huge speed gains to be had with Postgres, I got curious and spent a few hours researching and tweaking my setup.

I thought I'd write up what I learned and maybe others can jump in with their insights to make this a good general overview.

For context, my installation initially started out with this docker compose stack from the official Nextcloud docker images (as opposed to the AIO image or a source installation.) I run this behind an NGINX reverse proxy.

**Sources of information**

**Improvements**

**Migrate DB to Postgres**

What I did first is migrate from MariaDB to Postgres, roughly following the blog post I linked above. I didn't do any benchmarking, but page loads felt a little faster after that (though a far cry from the "way way faster" claims I'd read.)

**Here's my process**

  • Add a `postgres` container to the compose file (see the sketch after this list). I named mine "postgres", added a "postgres" volume, and added it to `depends_on` for the `app` and `cron` containers.
  • Run the migration command from the Nextcloud app container like any other occ command: `./occ db:convert-type --password $POSTGRES_PASSWORD --all-apps pgsql $POSTGRES_USER postgres $POSTGRES_DB`. The migration stopped with an error for a deactivated app, so I completely removed that app, dropped the Postgres tables and started the migration again, and it went through. After the migration, check Admin settings > System to make sure Nextcloud is now using Postgres.
  • Remove the old "db" container and volume and all references to it from the compose file, then run `docker compose up -d --remove-orphans`.
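
For illustration, the `postgres` service and the hooks into the existing services can look roughly like this (a sketch; the image tag, names and env variables are assumptions, and only the parts relevant to the migration are shown):

```yaml
services:
  postgres:
    image: postgres:alpine   # pin a major version in practice
    restart: always
    volumes:
      - postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}

  app:
    # ...existing Nextcloud app service...
    depends_on:
      - postgres

  cron:
    # ...existing cron service...
    depends_on:
      - postgres

volumes:
  postgres:
```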

**Redis over Sockets**

I followed the guide above for connecting to Redis over sockets, with details as stated below. This improved performance quite significantly: very fast loads for files, calendar, etc. I haven't yet changed the Postgres connection over to sockets since the article spoke about minor improvements, but I might try this next.

**Hints**

  • the Redis configuration (host, port, password, ...) needs to be set in `config/config.php` as well as `config/redis.config.php` (see the sketch after this list)
  • the `cron` container needs to receive the same `/etc/localtime` and `/etc/timezone` volumes the `app` container did, as well as the `volumes_from: tmp`
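
For illustration, the Redis part of the config looks roughly like this (a sketch; the socket path and password are placeholders and need to match your Redis container):

```php
<?php
// Sketch for config/redis.config.php; the same keys also go into config/config.php.
$CONFIG = [
  'redis' => [
    'host'     => '/var/run/redis/redis.sock', // unix socket path instead of a hostname
    'port'     => 0,                           // 0 tells Nextcloud to use the socket
    'password' => 'your-redis-password',
  ],
];
```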

**EDIT: Postgres over Sockets**

I'm now connecting to Postgres over sockets as well, which gave another pretty significant speed bump. When looking at developer tools in Firefox, the dashboard now finishes loading in half the time it did before the change; just over 6s. I followed the same blog article I did for Redis.

**Steps**

  • in the compose file, for the `db` container: add volumes `/etc/localtime` and `/etc/timezone`; add `user: "70:33"`; add `command: postgres -c unix_socket_directories='/var/run/postgresql/,/tmp/docker/'`; add the `tmp` container to `volumes_from` and `depends_on` (see the sketch after this list)
  • in the Nextcloud `config.php`, replace `'dbhost' => 'postgres',` with `'dbhost' => '/tmp/docker/',`
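
Put together, the database service fragment looks roughly like this (a sketch of the steps above; the `tmp` helper container and the shared `/tmp/docker/` socket directory come from the linked guide's setup):

```yaml
services:
  postgres:
    image: postgres:alpine
    restart: always
    user: "70:33"   # 70 = postgres user in the alpine image, 33 = www-data group
    command: postgres -c unix_socket_directories='/var/run/postgresql/,/tmp/docker/'
    volumes:
      - postgres:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    volumes_from:
      - tmp
    depends_on:
      - tmp
```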

**Outlook**

What have you done to improve your instance's performance? Do you know good articles to share? I'm happy to edit this post to include any insights and make this a good source of information regarding Nextcloud performance.

 

Hi fellow self-hosting lemmings,

In an SME setting, I'm looking for a service to regularly fetch mails from an IMAP server and print incoming mails and attachments on a local network printer based on rules (e.g., only print mails where the subject contains a specific word.)

Does a solution like that exist, ideally with a browser frontend to set it up?

Thank you!
