• 1 Post
  • 49 Comments
Joined 1 month ago
Cake day: October 3rd, 2025




  • I look at it differently: everything used to be cobbled-together messes built without real consideration for running live. Then when you went to scale, you had to redo the whole thing because its base architecture was garbage. This is going to sound dumb, but the philosophy behind DevOps creates an environment that encourages building extensible systems, so you hopefully won't have to take a fucking sledgehammer to them when you get users. My role in particular would otherwise require time from a dev on each team plus a SysAdmin who understands software and the OS at a low level. That is really inefficient and full of communication gaps.



  • When I was a Sysadmin at an MSP, we had a client with two main sites and multiple satellite sites. At one of the satellite locations there were two servers: the first ran a bunch of VMs and the second was the backup. If you disconnected the backup, AD stopped working everywhere and half of the NAS storage became unreachable. As far as anyone knew, the second server was only supposed to spin up replacement VMs if the first went down, nothing else. We were a pretty shitty MSP and never spent any time on proactive work, so when that server dies, that company is going to have the most epic outage and it will cost them a fortune.
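
For context on the kind of hidden single point of failure described above: a rough sketch of how you might spot it, by listing the domain controllers AD advertises via DNS SRV records and warning if only one host shows up. This assumes the third-party dnspython package, and the domain name is a placeholder, not something from the comment.

```python
# Sketch: enumerate the DCs a domain advertises in DNS and flag a single point of failure.
# Requires: pip install dnspython
import dns.resolver

DOMAIN = "corp.example.com"  # placeholder domain for illustration

# AD clients locate domain controllers through this SRV record.
answers = dns.resolver.resolve(f"_ldap._tcp.dc._msdcs.{DOMAIN}", "SRV")
hosts = sorted({str(r.target).rstrip(".") for r in answers})

print("Domain controllers advertised in DNS:")
for host in hosts:
    print(f"  {host}")

if len(hosts) < 2:
    print("WARNING: only one DC advertised; AD hinges on a single box.")
```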















  • There is no right answer. The only hard rule I have is that if it only requires key-value pairs, keep it real simple. I use a variety of databases and things I can use as a database. One project uses Google Sheets. Another uses a bunch of CSVs on a NAS as a sort of document-oriented database. And then there are the usuals: SQLite, MySQL, PostgreSQL, Mongo, and some god-awful MS Azure crap that uses KQL.
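
To illustrate the "if it only needs key-value pairs, keep it real simple" rule, here is a minimal sketch using Python's standard-library sqlite3. The table, file, and key names are made up for the example, not taken from the comment.

```python
import sqlite3

class KV:
    """Bare-bones key-value store backed by a single SQLite file."""

    def __init__(self, path="settings.db"):  # hypothetical filename
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def set(self, key, value):
        # Overwrite any existing value for the key.
        self.db.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))
        self.db.commit()

    def get(self, key, default=None):
        row = self.db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else default

# Usage
store = KV()
store.set("retention_days", "30")
print(store.get("retention_days"))  # -> "30"
```

The point is not that SQLite is the right answer everywhere; it is that a few dozen lines with no server to run is often all a key-value workload needs.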