trilobite

joined 1 year ago
[–] trilobite@lemmy.ml 1 points 3 weeks ago

I think vTiger Community Edition is still open source?

[–] trilobite@lemmy.ml 2 points 3 weeks ago

Really helpful, thanks.

[–] trilobite@lemmy.ml 2 points 4 weeks ago (1 children)

Let me get this straight: you're saying that on the laptop I have two instances, one for me and one for my wife, and they both sync to the NAS instance. I have Syncthing installed via apt on my Debian laptop. How do you get two instances going?
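For anyone else wondering: on Debian the apt package ships a systemd *user* unit, so each Linux account can run its own instance with its own config and device ID. A sketch, assuming the stock packaged unit (usernames are placeholders):

```shell
# Keep each user's services running even when they're logged out:
sudo loginctl enable-linger myuser mywife

# Then, logged in as each user separately:
systemctl --user enable --now syncthing.service

# Each instance gets its own config (and so its own device ID) under
# that user's home directory. If the GUIs clash on the default
# 127.0.0.1:8384, give each user a different GUI port in their config.
```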

[–] trilobite@lemmy.ml 5 points 4 weeks ago (1 children)

I just find Nextcloud bloated for my use case.

22
submitted 4 weeks ago* (last edited 4 weeks ago) by trilobite@lemmy.ml to c/selfhosted@lemmy.world
 

I moved from Nextcloud to Syncthing some months back. I had Nextcloud as an app on TrueNAS SCALE. Several times after app updates, Nextcloud would stop running and I would have to set everything up again.

Syncthing is OK, but two things annoy me:

A. I get huge numbers of conflict files generated that use up space.

B. File sharing with family is complicated. I tried to set up a shared account that everyone uses, but as Syncthing works with device IDs, it refuses two accounts from the same machine. I share my Linux laptop with my wife; we each have our own Linux account. I've got Syncthing running, but I can't even get my wife's account to sync because I get errors that the device ID already exists.

I don't want to go back to Nextcloud just for file sharing. I don't generally like relying on one service for multiple purposes (calendar, file sharing, etc.).

Is there a way to get Syncthing to do what I want?
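On (A): the conflict copies follow a fixed naming pattern (`name.sync-conflict-YYYYMMDD-HHMMSS-DEVICE.ext`), so they're easy to audit and prune on a schedule. A minimal sketch (the folder path is an example, not from my setup):

```shell
# prune_conflicts DIR [DAYS]
# Lists Syncthing conflict copies under DIR; if DAYS is given, deletes
# copies older than that many days instead of just listing them.
prune_conflicts() {
    dir=$1
    days=${2:-}
    if [ -n "$days" ]; then
        find "$dir" -name '*.sync-conflict-*' -mtime "+$days" -print -delete
    else
        find "$dir" -name '*.sync-conflict-*' -print
    fi
}

# dry run first:        prune_conflicts ~/Sync
# then e.g. from cron:  prune_conflicts ~/Sync 30
```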

[–] trilobite@lemmy.ml 1 points 4 weeks ago* (last edited 4 weeks ago)

I think Hoarder must be similar to Wallabag, which I use solely for preserving content I like. I use Floccus plus WebDAV for saving bookmarks, because Floccus is available for Android too, so my bookmarks are the same on all my devices. It sounds like Linkwarden does both, but I can't find an Android app, which is a killer for me. Wallabag is also available for Android, where I do most of my reading.

 

I've been running VMs on an old Dell T110 II but realise I've loaded it a bit too much, so I want to leave it doing the job of a NAS with TrueNAS SCALE and move all my VMs to Proxmox. The idea is to have two OptiPlex machines that provide redundancy. TrueNAS SCALE has got me used to ZFS, but that may not be an option with the OptiPlex 3020, as ZFS is pointless with a single SSD. Has anyone got a similar arrangement, with VMs and containers running on simple desktop machines? How are you managing high availability and resilience?
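On the HA side, one wrinkle worth knowing up front: a two-node Proxmox cluster can't maintain quorum on its own when one node dies, so the usual fix is a third vote from a QDevice on any always-on Linux box. A sketch of the setup (hostnames and IPs are placeholders):

```shell
# on the first OptiPlex:
pvecm create homelab

# on the second OptiPlex, pointing at the first node's IP:
pvecm add 192.168.0.10

# a 2-node cluster loses quorum when either node is down; add a third
# vote from any always-on machine running corosync-qnetd (e.g. a Pi):
pvecm qdevice setup 192.168.0.2

# check the result:
pvecm status
```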

[–] trilobite@lemmy.ml 3 points 1 month ago* (last edited 1 month ago) (1 children)

makes me shake ... brrr

I'm going to try and see if I can get a VM running on the second TrueNAS server using the replicated dataset. I only use the second machine to duplicate datasets in case the first machine fails and I have to rebuild it.
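If anyone tries the same: a replicated dataset can be test-booted without touching the replica itself by cloning its latest snapshot. The dataset and snapshot names below are made up for illustration:

```shell
# list the snapshots that replication created on the second box
zfs list -t snapshot -r tank2/vm-dataset

# clone one into a writable dataset and point a test VM's disk at it;
# the clone costs no extra space until it diverges from the snapshot
zfs clone tank2/vm-dataset@auto-2024-05-19 tank2/vm-test

# throw the clone away after the test, leaving the replica untouched
zfs destroy tank2/vm-test
```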

 

Hi folks, I've got a VM running my Firefly III instance and Paperless instance as containers. A lot of work and time goes into managing these tools and I want to make sure I don't lose them. This is my setup:

TrueNAS SCALE machine 1 -> VM1 -> Docker containers. The VM sits on its own dataset in TrueNAS.

I replicate the dataset to TrueNAS SCALE machine 2 once a week, and that machine only powers on on Sundays to save power.

I rsync the dataset to a third machine with a hard disk that I store offsite.

I recognize that I could lose up to one week of work, but that is nothing compared to the human hours spent building those databases from scratch.

Apart from snapshotting and rsyncing every day, what else could I do to make this more resilient without increasing CAPEX and OPEX?
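One cheap addition on top of dataset replication is an application-level export, since a ZFS snapshot faithfully replicates a corrupted database too. A dump taken inside the apps (pg_dump, or the apps' own export features) plus a simple rotation covers that gap. A minimal rotation sketch, with hypothetical paths:

```shell
# backup_rotate SRC DEST [KEEP]
# Tars SRC into DEST with a timestamped name and keeps only the KEEP
# newest archives (default 14).
backup_rotate() {
    src=$1
    dest=$2
    keep=${3:-14}
    mkdir -p "$dest"
    tar -czf "$dest/backup-$(date +%Y%m%d-%H%M%S).tar.gz" -C "$src" .
    # drop everything beyond the newest $keep archives
    ls -1t "$dest"/backup-*.tar.gz 2>/dev/null | tail -n +$((keep + 1)) | xargs -r rm -f
}

# e.g. nightly from cron, after dumping each app's database to SRC:
# backup_rotate /mnt/tank/vm1-appdata /mnt/backup/vm1-appdata 14
```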

[–] trilobite@lemmy.ml 3 points 1 month ago (1 children)

I've been asking myself the same question for a while. The container-inside-a-VM is my setup too. It feels like the container in the VM in the OS is a bit of an onion approach, with pros and cons. If you are on low-powered hardware, I suspect too many onion layers just eat up the little resource you have. On the other hand, as Scott@lem.free.as suggests, it's easier to run, update and generally maintain the system. It would be good to have other opinions on this. Note that not everyone with a home lab has powerful hardware. I'm still using two T110s (32 GB ECC RAM) that are now quite dated but sufficient for my uses. They have TrueNAS SCALE installed and one VM running 6 containers. It's not fast, but it's reliable.

[–] trilobite@lemmy.ml 2 points 2 months ago

Same here, but now struggling to keep on top of it. I wish there was a mobile solution that would integrate nicely with self-hosted services.

[–] trilobite@lemmy.ml 1 points 2 months ago (1 children)

Is there a mobile app that syncs with a self-hosted instance?

[–] trilobite@lemmy.ml 2 points 2 months ago (1 children)

So are you saying you have it running in a VM with all data stored on the NAS?

I have it as a TrueNAS SCALE app. The upgrade broke my installation and the rollback didn't work. I want to move it to a VM with Docker.

[–] trilobite@lemmy.ml 1 points 2 months ago (1 children)

All these models appear to be quite old. Older than the R310, R510 and R610?

[–] trilobite@lemmy.ml 1 points 6 months ago (3 children)

This thread has reminded me that I have Ruckus APs that mesh, but support has been dropped because they are "old". Presumably there is no open-source firmware I can flash these with that would still allow meshing?

 

Hi, I have my TIM (Italy) ONT installed (it's a ZXHN F6005, which I think is also installed by OpenFibre in the UK). This is connected to a TIM router and then to a mini PC running pfSense. I believe the ZTE ONT can be connected directly to the WAN port of the pfSense machine by setting PPPoE on the WAN interface. That way I can drop the intermediate TIM router, which is simply sucking up energy. I tried setting up a PPPoE connection on the pfSense machine by giving it the user ID and password, but the connection never comes up. Strangely, even when I leave the WAN interface set to PPPoE on pfSense and reconnect it through the intermediate TIM router, the connection comes up (i.e. the PPPoE setting doesn't seem to be a requirement).

Any thoughts?
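One thing worth checking, and this is an assumption on my part since TIM provisioning varies: TIM FTTH commonly carries PPPoE on VLAN 835, and the supplied router tags that VLAN itself, which would explain why PPPoE only comes up behind it. On pfSense that would mean creating VLAN 835 on the WAN NIC and binding the PPPoE interface to the VLAN. The credentials and VLAN can be sanity-checked from any Linux box plugged straight into the ONT:

```shell
# Sketch: test TIM PPPoE from a Linux machine wired to the ONT.
# Assumes VLAN 835 (common for TIM FTTH; verify for your line) and
# pppd with the rp-pppoe plugin installed. Credentials are placeholders.
sudo ip link add link eth0 name eth0.835 type vlan id 835
sudo ip link set eth0.835 up
sudo pppd plugin rp-pppoe.so eth0.835 \
    user "your-tim-username" password "your-tim-password" \
    noipdefault defaultroute nodetach debug
```

If the session comes up here, the same username/password plus the VLAN tag should work on the pfSense WAN.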

 

My old setup was:

VDSL modem -> pfSense on mini J1900 Celeron (2 GHz) -> Cisco SG300-10MPP switch -> Ruckus R310 wifi -> Laptop

Current setup:

Fiber modem -> pfSense on mini J1900 Celeron (2 GHz) -> Cisco SG300-10MPP switch -> Ruckus R310 wifi -> Laptop

Today I got my 1 Gbit fiber installed (a big deal for those of us living in rural areas), only to discover that my current network setup doesn't let me benefit from it.

I was on VDSL copper before, probably in the region of 50-60 Mbit/s with the setup above. Even removing the wifi bottleneck and linking the laptop directly to the switch with Cat5 UTP cable, I'm not getting major improvements.

When I got the fiber installed this morning I was disappointed to see only a marginal gain, running at 80 Mbit/s (c. +30 Mbit/s). So I decided to connect the laptop via LAN cable directly to the modem. I got a startling 900 Mbit/s. So, somewhere along my network I have bottlenecks.

The first thing I tested was my little pfSense machine. I installed the speedtest-cli command and was surprised to find it was giving me around 300 Mbit/s. A lot better than my laptop on its usual wifi connection, but still only a third of what I get directly off the modem.

So my first question is: how can it be that my little mini J1900 Celeron (2 GHz) with 4 GB RAM cannot handle this bandwidth? Do I need an upgrade for my pfSense machine? I noticed that peak CPU demand while speedtest-cli was running was in the 60% region, far from saturated, and RAM was only about 30% occupied. If it is my little pfSense machine, how far do I have to go to find the right little machine that can handle 1 Gbit/s?

The next question: if I'm getting 300 Mbit/s on the WAN connection of the pfSense machine, how is it that I only see a small fraction of this on my laptop, i.e. a drop from 300 Mbit/s to 80 Mbit/s? I guess I would have to test the switch first and then move on to the wifi access points ...
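A way to pin down which hop is the bottleneck is to take the internet out of the equation entirely and measure each link with iperf3 (it can be installed on the pfSense box from its shell; the firewall IP below is a placeholder):

```shell
# on the pfSense box, run a throughput server:
iperf3 -s

# on the laptop, wired into the switch:
iperf3 -c 192.168.0.1        # laptop -> firewall through the switch
iperf3 -c 192.168.0.1 -R     # same link, reverse direction

# then repeat against another wired machine to test the switch alone,
# and over wifi to isolate what the R310 contributes.
```

If the wired laptop-to-firewall number matches the 300 Mbit/s WAN figure, the switch is fine and the limit is the pfSense box itself; if it's much higher, the loss is elsewhere.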

 

Hi folks,

I installed Radicale earlier today as a user, as described on the homepage, using $ python3 -m pip install --upgrade radicale.

I initially created local storage and ran it as a normal user: $ python3 -m radicale --storage-filesystem-folder=~/.var/lib/radicale/collections. I was able to see the web page when I typed the server address (VM on TrueNAS), http://192.168.0.2:5234, so the install went well. But I wanted to set it up system-wide so that I can have multiple users logging in (family members).

So I did the following:

  • $ sudo useradd --system --user-group --home-dir / --shell /sbin/nologin radicale

  • $ sudo mkdir -p /var/lib/radicale/collections && sudo chown -R radicale:radicale /var/lib/radicale/collections

  • $ sudo mkdir -p /etc/radicale && sudo chown -R radicale:radicale /etc/radicale

Then I created the config file which looks like:

[server]
# Bind all addresses
hosts = 192.168.0.2:5234, [::]:5234
max_connections = 10
# 100 MB
max_content_length = 100000000
timeout = 30

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = md5

[storage]
filesystem_folder = /var/lib/radicale/collections

[logging]
level = debug

Of course the users file also exists in /etc/radicale. Then I created the service file as per the guidance, without changing anything:

[Unit]
Description=A simple CalDAV (calendar) and CardDAV (contact) server
After=network.target
Requires=network.target

[Service]
ExecStart=/usr/bin/env python3 -m radicale
Restart=on-failure
User=radicale
# Deny other users access to the calendar data
UMask=0027
# Optional security settings
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
NoNewPrivileges=true
ReadWritePaths=/var/lib/radicale/collections

[Install]
WantedBy=multi-user.target

Then I hit the usual sequence:

$ sudo systemctl enable radicale
$ sudo systemctl start radicale
$ sudo systemctl status radicale

and of course it all seems to be running:

user@vm101:/$ sudo systemctl status radicale
● radicale.service - A simple CalDAV (calendar) and CardDAV (contact) server
     Loaded: loaded (/etc/systemd/system/radicale.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2024-05-25 19:44:54 BST; 18min ago
   Main PID: 313311 (python3)
      Tasks: 1 (limit: 4638)
     Memory: 13.1M
        CPU: 166ms
     CGroup: /system.slice/radicale.service
             └─313311 python3 -m radicale

May 25 19:44:54 vm101 systemd[1]: Started A simple CalDAV (calendar) and CardDAV (contact) server.

When I run $ journalctl --unit radicale.service it only provides the following output, despite the logging level being set to debug:

user@vm101:/etc/radical$ sudo journalctl --unit radicale.service
-- Journal begins at Sat 2022-12-31 15:45:51 GMT, ends at Sat 2024-05-25 20:04:37 BST. --
May 25 19:25:46 vm101 systemd[1]: Started A simple CalDAV (calendar) and CardDAV (contact) server.
May 25 19:44:46 vm101 systemd[1]: Stopping A simple CalDAV (calendar) and CardDAV (contact) server...
May 25 19:44:46 vm101 systemd[1]: radicale.service: Succeeded.
May 25 19:44:46 vm101 systemd[1]: Stopped A simple CalDAV (calendar) and CardDAV (contact) server.
May 25 19:44:54 vm101 systemd[1]: Started A simple CalDAV (calendar) and CardDAV (contact) server.

Any clue as to why I get a "Can't establish a connection ..." error when I browse to http://192.168.0.2:5234? I'm clearly missing something but can't quite see what. Any help would be appreciated.

BTW, I'm connecting to the TrueNAS server (where the VM runs) from my laptop, the same one that could connect when I used the normal-user approach described at the start.
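In case it helps anyone who lands here: since systemd shows the service up but the port refuses connections, my guess (an assumption, not verified) is that Radicale isn't reading /etc/radicale/config when launched as python3 -m radicale, and is falling back to its default of listening on localhost:5232. Things worth checking:

```shell
# Is anything actually listening on 5234, and on which address?
sudo ss -tlnp | grep -E '5232|5234'

# Does it answer locally on the VM itself?
curl -v http://127.0.0.1:5234/ || curl -v http://127.0.0.1:5232/

# Force the config file explicitly in the unit's ExecStart, e.g.:
#   ExecStart=/usr/bin/env python3 -m radicale --config /etc/radicale/config
# then reload and restart:
sudo systemctl daemon-reload && sudo systemctl restart radicale
```

If ss shows 127.0.0.1:5232, the config isn't being picked up and the explicit --config should fix it; if it shows 192.168.0.2:5234, look at the VM's firewall instead.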

 

Just installed Syncthing on my SCALE server. It looks like it doesn't have users, but rather folder IDs that are used to sync devices. One of the cool features of Nextcloud is the ability to share files with other users. Can this be done with Syncthing?

 

Just thinking of ditching Nextcloud; it's just too much for my family's use. All I need is CardDAV, CalDAV and file sync. I have a Debian VM running on SCALE and was thinking of using the Cloudron Docker install. Is this how others are installing on VMs?
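For the CardDAV/CalDAV half, a lighter route than Cloudron might be Radicale in a plain container on the Debian VM, with Syncthing covering file sync separately. A sketch using a community image (tomsquest/docker-radicale; an assumption on my part that it fits your needs, check its docs for the config layout):

```shell
# Radicale speaks both CardDAV and CalDAV on one port (5232 by default).
docker run -d --name radicale \
    -p 5232:5232 \
    -v radicale-data:/data \
    -v /etc/radicale:/config:ro \
    tomsquest/docker-radicale
```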
