There was a post about getting overwhelmed by 15 containers, and people not wanting to turn it into a container measuring contest.

But now I am curious: what are your counts? I would guess those of you running k8s would win out through pod scaling.

docker ps -q | wc -l

For those wanting a quick count. (The -q flag lists only container IDs, so the header line doesn't add one to the count.)
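Since OP mentions k8s, a rough pod-count equivalent (assuming kubectl access to the cluster) would be:

kubectl get pods -A --no-headers | wc -l

That counts pods across all namespaces; multiply accordingly if you want containers, since a pod can run several.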

  • mogethin0@discuss.online · 1 day ago

    I have 43 running, and this was a great reminder to do some cleanup. I can probably reduce my count by 5-10.

  • ℍ𝕂-𝟞𝟝@sopuli.xyz · 2 days ago

    I know using work as an example is cheating, but it’s around 1400-1500 up to 5000-6000 depending on load throughout the day.

    At home it’s 12.

    • slazer2au@lemmy.world (OP) · 2 days ago

      I was watching a video yesterday where an org was churning through 30K containers a day because they hadn’t profiled their application correctly and scaled their containers based on a misunderstanding of how Linux handles CPU scheduling.
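      (The usual version of this mistake is setting tight CPU limits and then scaling out when the scheduler throttles the workload. On a cgroup v2 host you can check whether a container is being throttled rather than genuinely starved; the container name below is a placeholder:)

      docker exec myapp cat /sys/fs/cgroup/cpu.stat
      # nr_throttled and throttled_usec climbing steadily means the
      # CPU limit, not actual load, is what's hurting you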

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · 2 days ago

        Yeah that shit is more common than people think.

        A big part of the business of cloud providers is that most orgs have no idea how to do shit. Their enterprise consultants are also wildly variable in competence.

        There was also a large amount of useless bullshit that I’ve needed to cut down on since being hired at my current spot, but the number of containers is actually warranted. We do have that traffic, which is both happy and sad: while business is booming, I have to deal with this.

  • kaedon@slrpnk.net · 2 days ago

    12 LXCs and 2 VMs on Proxmox. Big fan of managing all the backups with the web UI (it’s very easy to back up to my NAS), and the helper scripts are pretty nice too. Nothing on Docker right now, although I used to have a couple in a Portainer LXC.
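    (For the CLI-inclined: the web UI’s backup jobs use the same machinery as Proxmox’s vzdump tool, so a one-off backup of a guest to NAS-backed storage looks something like the sketch below; the VMID and storage name are placeholders:)

    vzdump 101 --storage nas-backup --mode snapshot --compress zstd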

  • Itdidnttrickledown@lemmy.world · 2 days ago

    None. I run my services the way they are meant to be run. There is no point in containers for a small setup. It’s kinda lazy, and you miss out on learning how to install them.

    • SpatchyIsOnline@lemmy.world · 5 hours ago

      Small setups can very easily turn into large setups without you noticing.

      The only bare-metal setup I’d trust to be scalable is Nix flakes (which I’m actually very interested in migrating to at some point)

      • Itdidnttrickledown@lemmy.world · 5 hours ago

        I’d never even heard of Nix flakes before today. It looks like another solution in search of a problem. I trust Debian, and I trust bare metal more than any container setup. I run multiple services on one machine, and I currently have two machines to run all my services. No problems and no downtime other than a weekly update and reload. All crontabbed, all automatic.
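        (A minimal sketch of that kind of weekly cron job on Debian, assuming root’s crontab; the schedule and reboot step are illustrative:)

        # every Monday at 04:00: update, upgrade, reboot
        0 4 * * 1 apt-get update && apt-get -y full-upgrade && systemctl reboot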

        At work I have multiple services all running in KVM, including some Windows domain controllers. Also no problems, and the weekly full backups are worry-free, only requiring me to check them for consistency.

        In short, as much as people push containers, they are only useful if you are dealing with more than a few services. No home setup should be that large unless someone is hosting for others.

  • Nico198X@europe.pub · 2 days ago

    13 with podman on openSUSE MicroOS.

    I used to have a few more but wasn’t using them enough, so I cut them.

      • ToTheGraveMyLove@sh.itjust.works · 2 days ago

        I’m using Docker. I tried to set up Jellyfin in one but couldn’t for the life of me figure out how to get it to work, even following the official documentation. I ended up just running the Jellyfin package from my distro’s repo, which worked fine for me. I also tried running a Tor Snowflake, which worked, but there was some issue with the NAT being restricted and I couldn’t figure out how to fix that. I kinda gave up at that point and saved the whole container thing to figure out another day. I only switched to Linux last year, so I’m still pretty new to all of this.

        • kylian0087@lemmy.dbzer0.com · 2 days ago

          If you do decide to look into containers again and get stuck, please make a post; we’re glad to help out. A tip for when you ask for help: tell us what system you’re using and how (Docker with compose files, Portainer, or something else), and if you’re using compose, also include the YAML file.

          • ToTheGraveMyLove@sh.itjust.works · 2 days ago

            I will definitely try again at some point in the next year, so I will keep that in mind! I appreciate the kind words. A lot of what you said is over my head at the moment though, so I’ve got my work cut out for me. 😅

            • F04118F@feddit.nl · edited 18 hours ago

              Docker Compose is really the easiest way to self-host.

              Copy a file, usually provided by the developers of the app you want to run, change some values if instructed by the # comments, run docker compose up, and it “just works”.

              And I say that as someone who has done everything from distro-provided packages to compiling from source, Nix, podman systemd, and currently running a full-blown multi-node distributed storage Kubernetes cluster at home.

              Just use docker compose.
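              (As a concrete sketch of that workflow, assuming a compose.yaml in the current directory; these are all standard Docker Compose v2 subcommands:)

              docker compose up -d      # start everything in the background
              docker compose logs -f    # follow logs while you verify it works
              docker compose pull && docker compose up -d   # later: update to newer images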

        • Chewy@discuss.tchncs.de · 2 days ago

          I’m pretty sure I was at the same point years ago. The good thing is, next time you look into containers it’ll likely be really easy and you’ll wonder where you got stuck a year or two ago.

          At least that’s what has happened to me more times than I can remember.

  • K-Money@lemmy.kmoneyserver.com · edited 3 days ago

    140 running containers and 33 stopped (ones I spin up sometimes for specific tasks or testing new things), so 173 total on Unraid. I have them grouped into:

    • 118 auto-update (low chance of a breaking update, or a non-critical service that only I would notice if it breaks)
    • 55 manual-update (either it’s family-facing, e.g. Jellyfin; or it has a high chance of breaking updates; or it updates so infrequently that I want to know when that happens; or it’s something I want particular control over, like what time it updates, e.g. Jellyfin when nobody’s in the middle of watching something)

    I subscribe to all their GitHub release pages via FreshRSS and have them grouped into the auto/manual categories. Auto takes care of itself, and I skim those release notes just to stay aware of any surprises. Manual usually has 1-5 releases each day, so I spend 5-20 minutes reading those release notes a bit more closely and updating them as a group, or holding off until I have more bandwidth for troubleshooting if it looks like an involved update.

    Since I put anything that might cause me grief if it breaks in the manual group, I can also just not pay attention to the system for a few days and everything keeps humming along. I just end up with a slightly longer manual update list when I come back to it.
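    (If you want to replicate the FreshRSS side of this: GitHub exposes an Atom feed per repository’s releases that any RSS reader can subscribe to; owner and repo below are placeholders:)

    https://github.com/OWNER/REPO/releases.atom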

  • Culf@feddit.dk · 2 days ago

    I’m not using Docker yet. Currently I just have one Proxmox LXC, but I’m planning on self-hosting a lot more in the near future…

  • drkt@scribe.disroot.org · 3 days ago

    All of you bragging about 100+ containers, may I inquire as to what the fuck that’s about? What are you doing with all of those?

    • Routhinator@startrek.website · 2 hours ago

      Kube makes it easy to have a lot, since anything you need on every node just gets deployed on every node. As odd as it sounds, the number of containers provides redundancy that makes the hobby easy. If a Zimaboard dies or messes up, I just nuke it, and I don’t care what’s on it.
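      (The “runs on every node” pattern is a DaemonSet in Kubernetes terms; you can see how much of your pod count comes from those with:)

      kubectl get daemonsets -A   # each DaemonSet runs one pod per matching node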

    • K-Money@lemmy.kmoneyserver.com · edited 2 days ago

      A little of this, a little of that…I may also have a problem… >_>;

      The List

      Quickstart

      • dockersocket
      • ddns-updater
      • duckdns
      • swag
      • omada-controller
      • netdata
      • vaultwarden
      • GluetunVPN
      • crowdsec

      Databases

      • postgresql14
      • postgresql16
      • postgresql17
      • Influxdb
      • redis
      • Valkey
      • mariadb
      • nextcloud
      • Ntfy
      • PostgreSQL_Immich
      • postgresql17-postgis
      • victoria-metrics
      • prometheus
      • MySQL
      • meilisearch

      Database Admin

      • pgadmin4
      • adminer
      • Chronograf
      • RedisInsight
      • mongo-express
      • WhoDB
      • dbgate
      • ChartDB
      • CloudBeaver

      Database Exporters

      • prometheus-qbittorrent-exporter
      • prometheus-immich-exporter
      • prometheus-postgres-exporter
      • Scraparr

      Networking Admin

      • heimdall
      • Dozzle
      • Glances
      • it-tools
      • OpenSpeedTest-HTML5
      • Docker-WebUI
      • web-check
      • networking-toolbox

      Legally Acquired Media Display

      • plex
      • jellyfin
      • tautulli
      • Jellystat
      • ErsatzTV
      • posterr
      • jellyplex-watched
      • jfa-go
      • medialytics
      • PlexAniSync
      • Ampcast
      • freshrss
      • Jellyfin-Newsletter
      • Movie-Roulette

      Education

      • binhex-qbittorrentvpn
      • flaresolverr
      • binhex-prowlarr
      • sonarr
      • radarr
      • jellyseerr
      • bazarr
      • qbit_manage
      • autobrr
      • cleanuparr
      • unpackerr
      • binhex-bitmagnet
      • omegabrr

      Books

      • BookLore
      • calibre
      • Storyteller

      Storage

      • LubeLogger
      • immich
      • Manyfold
      • Firefly-III
      • Firefly-III-Data-Importer
      • OpenProject
      • Grocy

      Archival Storage

      • Forgejo
      • docmost
      • wikijs
      • ArchiveTeam-Warrior
      • archivebox
      • ipfs-kubo
      • kiwix-serve
      • Linkwarden

      Backups

      • Duplicacy
      • pgbackweb
      • db-backup
      • bitwarden-export
      • UnraidConfigGuardian
      • Thunderbird
      • Open-Archiver
      • mail-archiver
      • luckyBackup

      Monitoring

      • healthchecks
      • UptimeKuma
      • smokeping
      • beszel-agent
      • beszel

      Metrics

      • Unraid-API
      • HDDTemp
      • telegraf
      • Varken
      • nut-influxdb-exporter
      • DiskSpeed
      • scrutiny
      • Grafana
      • SpeedFlux

      Cameras

      • amcrest2mqtt
      • frigate
      • double-take
      • shinobipro

      HomeAuto

      • wyoming-piper
      • wyoming-whisper
      • apprise-api
      • photon
      • Dawarich
      • Dawarich-Sidekiq

      Specific Tasks

      • QDirStat
      • alternatrr
      • gaps
      • binhex-krusader
      • wrapperr

      Other

      • Dockwatch
      • Foundry
      • RickRoll
      • Hypermind

      Plus a few more that I redacted.

      • drkt@scribe.disroot.org · 2 days ago

        I look at this list and cry a little bit inside. I can’t imagine having to maintain all of this as a hobby.

        • Chewy@discuss.tchncs.de · 2 days ago

          From a quick glance, I can imagine many of those services don’t need much maintenance, if any. E.g. RickRoll likely never needs any maintenance beyond the initial setup.

    • StrawberryPigtails@lemmy.sdf.org · 3 days ago

      In my case, most things that I didn’t explicitly make public are running on Tailscale using their own Tailscale containers.

      Doing it this way, each one gets its own address and I don’t have to worry about port numbers. I can just type http://cars/ (yes, I know, not secure; not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.

      On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud AIO alone spins up something like 5 or 6 containers in addition to the master container. That tends to inflate the container numbers.
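      (For reference, that per-service pattern is typically one instance of the official tailscale/tailscale image per app; a minimal sketch, with placeholder names and auth key, using the image’s documented environment variables:)

      docker run -d --name ts-cars \
        -v ts-cars-state:/var/lib/tailscale \
        -e TS_AUTHKEY=tskey-auth-XXXX \
        -e TS_STATE_DIR=/var/lib/tailscale \
        -e TS_HOSTNAME=cars \
        tailscale/tailscale
      # the app container then joins this network namespace
      # (--network container:ts-cars) so it's reachable as http://cars/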

        • StrawberryPigtails@lemmy.sdf.org · 2 days ago

          Possibly. I don’t remember that being an option when I was setting things up last time.

          From what I’m reading, it sounds like it’s just acting as a slightly simplified DNS server/reverse proxy for individual services on the tailnet. Sounds interesting. I’m not sure it’s something I’d want to use on the backend (what happens if Tailscale goes down? Does that DNS go down too?), but for family members I’ve set up on the tailnet, it sounds like an interesting option.

          Much as I like Tailscale, it seems like using this may introduce a few too many failure points that rely on a single provider, especially one that isn’t charging me anything for what they provide.

    • irmadlad@lemmy.world · 3 days ago

      Not bragging. It is what it is. I run a plethora of things and that’s just on the production server. I probably have an additional 10 on the test server.

    • slazer2au@lemmy.world (OP) · 3 days ago

      Things and stuff. There is the web front end, the API to the back end, the database, the Redis cache, the MQTT message queues.

      And that is just for one of my web crawlers.

      /S

    • Encrypt-Keeper@lemmy.world · 3 days ago

      100 containers isn’t really a lot. Projects often use 2-3 containers each, so that’s only something like 33-50 services.