• 0 Posts
  • 40 Comments
Joined 1 year ago
Cake day: September 25th, 2023

  • As others mentioned, an advantage is that it blocks ads in phone apps too. My other use case is adding extra DNS entries to name devices on my local network. Finally, after using Pi-hole for a while I switched to blocky. It has similar features, though it lacks the UI and the DHCP server; in exchange it uses far fewer resources. Since I didn’t use either of those, it sounded like a good trade to me; see the sketch below.
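
    For reference, a minimal sketch of a blocky config covering both use cases (the blocklist URL, hostnames and IPs are examples, and top-level field names have changed a bit between blocky versions, so check the docs for yours):

        # upstream resolvers for everything that isn't blocked or overridden
        upstreams:
          groups:
            default:
              - 1.1.1.1
        # ad blocking via a public blocklist
        blocking:
          denylists:
            ads:
              - https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
          clientGroupsBlock:
            default:
              - ads
        # extra DNS entries to name local devices
        customDNS:
          mapping:
            nas.lan: 192.168.1.10
            printer.lan: 192.168.1.11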



  • I started using headscale (the open-source reimplementation of the Tailscale server) on a private VPS. It is incredibly better than plain WireGuard. I regret waiting so long before switching.

    Something that really made my life easier: WireGuard is poor at roaming. Switching to and from my wifi created issues because the server wasn’t reachable anymore from its public IP, and WireGuard didn’t bother to query DNS again to pick up the new address. Also, configuration is dead simple because it takes care of iptables for you (especially good when you enable forwarding to a node).

    Since the server only sends small control-plane messages and all the traffic is p2p between the devices, the smallest VPS with the slowest connectivity is more than enough to handle it.
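
    To give an idea of the workflow, this is roughly how a device joins a headscale server (the user name and URL are examples, and the subcommands have been renamed across releases, older versions say “namespaces” and “machines”, so check your version’s help):

        # on the VPS: create a user to group your devices under
        headscale users create myuser

        # on each device: point the stock tailscale client at your own server
        tailscale up --login-server https://headscale.example.com

        # back on the VPS: approve the node using the key the client printed
        headscale nodes register --user myuser --key <key-from-client>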



  • Nginx for my intranet because configuration is fully manual and I have complete control over it.

    Caddy for the public services on my VPS because it handles cert renewal automatically and most of its configuration is magic that just works.

    It is unbelievable how much shorter the Caddy configuration is (compare the two sketches below), but on my intranet:

    1. I don’t want my reverse proxy dialing out to the internet to try to fetch new SSL certs. I know it can be disabled, but this is the default.
    2. I like to learn how stuff works. Nginx forces you to know more details, but it is full of good documentation, so it is not too painful compared to Caddy.
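
    To illustrate the difference, a complete reverse proxy with automatic HTTPS in Caddy is just (hostname and upstream port are examples):

        example.com {
            reverse_proxy localhost:8080
        }

    while the equivalent nginx server block spells out the certificate plumbing (paths are the usual certbot ones; certbot itself runs separately):

        server {
            listen 443 ssl;
            server_name example.com;
            ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

            location / {
                proxy_pass http://localhost:8080;
            }
        }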




  • There are plenty of Zigbee options available on Amazon UK (and I would expect some wifi ones too, but I already have a Zigbee network, so I prefer it when possible). Do a quick search there; most of them have the wiring diagram in the photos. Some of them can be installed inside the wall box, so you don’t have to replace the switches (which may look ugly compared to the others you have). Also note that if both switches you want to replace control the same light, you only need to replace one of them.


  • You can configure Caddy to listen on 80 and act as a reverse proxy for both services, serving one site or the other depending on the name (you will need a second DNS entry pointing to the same IP); see the sketch below. About not exposing 443: I really doubt that Caddy can automatically retrieve SSL certificates for you if it is not running on the default port. Check the documentation; if I’m right, either you open an empty website on 443 just for the sake of getting SSL certs to run HTTPS (and manually configure the other port to do the same), or you get the certificates manually using DNS verification (check the Let’s Encrypt documentation) and configure Caddy to use them.
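
    A minimal sketch of that name-based setup (hostnames and ports are examples; the http:// prefix tells Caddy to serve plain HTTP on 80 and skip automatic HTTPS):

        http://service1.example.com {
            reverse_proxy localhost:8081
        }

        http://service2.example.com {
            reverse_proxy localhost:8082
        }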



  • NAS are essentially small computers made for connecting a lot of storage, with a fancy OS that can be configured from a browser.

    So the real question between a NAS and a custom build is how much time you want to spend being a sysadmin. NAS mostly work out of the box: you can configure them to auto-update and get notified only when something important happens, while with a custom build you are completely on your own. Are you already familiar with some Linux distribution? How much do you want to learn?

    Once you answer the previous question, the next is about power. To store files on the network you don’t need a big CPU; on the contrary, you may want something small that doesn’t cost too much in electricity. But you mentioned you want to stream video. If you need transcoding (because you have a Chromecast that accepts only video in a specific format, for example) you need something more powerful. If you stream only to computers there is no need for transcoding, because they can digest any format, so anything will work.

    After this you need to decide how much space you need, and of what type. NVMe drives are faster, but spinning disks were still more reliable (and cheaper per TB) last time I checked. Also, do you want some kind of RAID? RAID1 is the bare minimum to protect you from a disk failure, but you need twice as many disks to store the same amount of data. RAID5 is more efficient, but you need at least 3 disks. That said, remember that RAID is not backup: you still need a backup for the important stuff.

    My honest suggestion is to start experimenting with your Raspberry Pi and see what you need. It will likely already cover most of your needs: just attach an external HD and configure Samba shares, as in the example below. I don’t do any automated backup, but I know that Syncthing and Syncthing-Fork are very widely used tools, and on Linux you can very easily run rsync from a crontab.
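
    For example, a share definition in /etc/samba/smb.conf plus a nightly rsync job in the crontab could look like this (paths, share name and user are examples):

        # /etc/samba/smb.conf -- export the external disk on the network
        [media]
           path = /mnt/external
           read only = no
           valid users = youruser

        # crontab entry: every night at 03:00, mirror the share to a second disk
        0 3 * * * rsync -a --delete /mnt/external/ /mnt/backup/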

    If you want an operating system that offers an out-of-the-box experience closer to a commercial NAS, you can check FreeNAS. I personally started with a QNAP and was happy with it for years, but after starting to self-host some stuff I wanted more flexibility, so I switched to a TerraMaster where I installed plain Debian. I’m happy with it, but it definitely requires more knowledge and patience to configure and administer.



  • FAT32 doesn’t support Unix file permissions, so when you mount the disk Linux has to assign a default ownership, which usually goes to root. This is the issue you are facing.

    You confused the disk permissions with the filesystem permissions. The udev rule you wrote gives you permission to write to the disk (in other words, you can format it or rewrite its whole content) but doesn’t give you permission on the files stored inside, because those sit at a higher abstraction level.

    If you use this computer interactively (in other words, if you usually sit in front of it and plug the disk in on demand), my suggestion is to remove that line from /etc/fstab and let the Ubuntu desktop environment mount the external hard drive for the currently logged-in user.

    If you use this computer as a server with the USB disk always connected (likely, since you mention Jellyfin), you need to modify the fstab line to specify which user should get permission on the files written on the disk.

    You can see the full list of options at https://www.kernel.org/doc/Documentation/filesystems/vfat.txt

    You either want uid=Mongostein (assuming that’s your username on your computer too) to assign yourself ownership of all the files, or umask=000 to give everyone full permissions on the files and directories while ownership remains with root. Prefer the second option if Jellyfin runs as a different user, and the first one if there are other users on your computer who shouldn’t access your external disk.

    To summarize, the line in /etc/fstab should be one of these two.

    LABEL=drivename /mnt/drivename/ auto rw,user,exec,nofail,x-gvfs-show,dev,auto,umask=000 0 0
    
    LABEL=drivename /mnt/drivename/ auto rw,user,exec,nofail,x-gvfs-show,dev,auto,uid=Mongostein 0 0
    

  • There is no need to add a udev rule to make the device writable by your user. If you have a full Ubuntu setup, the external drive should appear in Nautilus as soon as you attach it, and it can be mounted and unmounted from the UI.

    If it doesn’t work, you can add a line to /etc/fstab like:

    /dev/sdb1 /mnt/mydisk auto noauto,user,uid=yourname 0 0

    Double-check the man page for the exact syntax (I’m going from memory), but what this line says is that any user can mount the device, that it shouldn’t be mounted automatically on boot, and that the files on it are owned by the user “yourname”. The issue with this approach is that the device name changes depending on what else is connected; udev also adds symlinks containing the device ID (under /dev/disk/by-id/) which are more stable, as in the example below.
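
    For example, the same line keyed on a filesystem label instead of the device name (label and paths are examples; ls -l /dev/disk/by-id/ and friends show the stable names available):

        LABEL=mydisk /mnt/mydisk auto noauto,user,uid=yourname 0 0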



  • That’s exactly the point: there is no need for native support. As long as you can connect more than one disk to the computer, you can build a software RAID with one of the tools mentioned before. I’ve never used TrueNAS, but since it’s a NAS-oriented system I expect it to integrate with at least one (if not all) of the tools mentioned. I suggest doing a test installation and seeing what options there are; you can use a virtual machine for quick experiments. During the Debian installation I remember there is a step to configure software RAID, and it’s fairly explicit. Otherwise you can always install the system on LVM or btrfs and enable replication later.

    I’ve never used mdadm, so I can’t give advice about it. I’ve used LVM a bit; it’s a little hostile in my opinion, but very flexible, and there are plenty of tutorials and guides around the internet. I’ve never used ZFS, but I’ve done some experiments with btrfs (which took a lot of inspiration from ZFS) and I think it’s the simplest option. If ZFS is supported natively by the installer, it will presumably have an option to use two disks in RAID. Otherwise you install everything on a single-disk ZFS pool, add the second disk, and then run something like zpool add disco1 /dev/disk2 to add the new disk (striping it, RAID0-style); some commands to rebalance the disks may also be needed. In any case, I suggest finding a manual and reading it thoroughly, so you also know what to do if something goes wrong.
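
    For reference, if the goal is a mirror (RAID1) rather than a stripe, the usual ZFS command is zpool attach, which turns the existing single-disk vdev into a mirror (pool and device names are examples):

        # attach a second device to the existing one to form a two-way mirror
        zpool attach disco1 /dev/disk1 /dev/disk2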


  • Hardware RAID is best avoided: if the controller breaks, you have to buy an identical one to recover the data. Software RAID on Linux can be done at several levels: mdadm sits at the lowest one, working directly on partitions or disks; LVM works at a slightly higher abstraction level, letting you manage logical volumes spread across multiple disks; and then there are filesystems like btrfs or ZFS that offer options similar to some RAID levels. See the sketch below.
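
    To make the levels concrete, here is a minimal sketch of a two-disk mirror with mdadm and with btrfs (device names are examples; both commands wipe whatever is on those partitions):

        # mdadm: build a RAID1 array from two partitions, then put a filesystem on it
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mkfs.ext4 /dev/md0

        # btrfs: the same mirror expressed at the filesystem level (data and metadata)
        mkfs.btrfs -d raid1 -m raid1 /dev/sda1 /dev/sdb1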


  • I got a TerraMaster NAS and I’m super happy with it: https://www.terra-master.com/global/f4-5067.html

    The main reason to choose it is that it is just a PC in the form factor of a NAS: you can boot it from a pendrive and install your favourite operating system. I had a QNAP before, and while it was great to start with, self-hosting wasn’t the best experience on their OS.

    It is a small form factor, so it should have low power consumption (I’ve never measured it to confirm), and it supports both NVMe and SATA drives. Currently I have an NVMe drive for the OS and two SATA drives for storage. The CPU is powerful enough to run Home Assistant, a VPN, Pi-hole, CommaFeed, and a bunch of other Docker images. I plan to increase the RAM soonish because the stock amount feels a little constrained.


  • I did some experiments in the past. The nicest option I could find was enabling the WebDAV API on the hosting side (it was an option in cPanel, if I recall correctly, but there are likely other ways to do it). This lets you use the webserver as a remote read/write filesystem. Then you can use rclone to transfer files; the nice part is that rclone supports client-side encryption, so you don’t have to worry too much about other people accessing your files.
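
    A rough sketch of the rclone side (remote names and the URL are examples; the interactive rclone config wizard is the comfortable way to set this up, and the crypt remote will ask for an encryption password):

        # a webdav remote pointing at the hosting space
        rclone config create mywebdav webdav url=https://example.com/dav vendor=other

        # an encrypted remote layered on top of it
        rclone config create mycrypt crypt remote=mywebdav:backup

        # files are encrypted client-side before they leave the machine
        rclone sync ~/documents mycrypt:documents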


  • After looking around a little, I couldn’t find any Zigbee thermostat which met all my needs (mostly, I couldn’t find one that switches mains voltage and has a wireless sensor that can stay in a different room).

    So I went for a fully custom setup: a normal Zigbee switch connected to Home Assistant and controlled by its software thermostat implementation (sketched below). The temperature sensor is a template sensor which reports the living room temperature during the day and the bedroom temperature at night. I have automations to change the target temperature for day, night, and when the house is empty.
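
    A minimal sketch of those two pieces in Home Assistant’s YAML (entity names and the day/night hours are examples, not my actual config):

        # software thermostat driving the zigbee switch
        climate:
          - platform: generic_thermostat
            name: Heating
            heater: switch.zigbee_heating_relay
            target_sensor: sensor.active_room_temperature

        # template sensor: living room by day, bedroom by night
        template:
          - sensor:
              - name: "Active room temperature"
                unit_of_measurement: "°C"
                state: >
                  {% if now().hour >= 8 and now().hour < 22 %}
                    {{ states('sensor.living_room_temperature') }}
                  {% else %}
                    {{ states('sensor.bedroom_temperature') }}
                  {% endif %}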

    Pro: fully customizable in software, dead cheap.
    Con: the heating needs your server to work correctly.

    Some failure modes I found and their workaround:

    • The temperature sensor goes offline: I have an automation to turn off the heating and send a notification.
    • The server goes offline: I left the old dumb thermostat wired in parallel; it guarantees the home will not get too cold.

    The only failure mode I’m still concerned about is the server going offline while the heating is on: in that case, nothing turns it off again. I was looking for Zigbee switches with a built-in timer to switch off automatically, but I couldn’t find any. So if I’m away from home for more than one day, I disable it and revert to the dumb thermostat.

    My suggestion here is: whatever solution you choose, be sure to have a plan B in case whatever smartness you rely on stops working (cloud service or local Home Assistant going offline).