It’s well worth it to get a $50 Coral TPU for object detection. Fast inference speed and nearly zero CPU usage.
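If anyone wants to see how little code it takes, here’s a minimal pycoral sketch, basically the shape of Google’s own detection example. It assumes the Edge TPU runtime is installed and you’ve downloaded one of Coral’s precompiled SSD MobileNet models; the file names here are just placeholders from their examples:

```python
# Minimal object-detection sketch with pycoral. Assumes the Edge TPU runtime
# and a precompiled *_edgetpu.tflite model from coral.ai (file names are
# placeholders -- swap in your own model and image).
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect
from PIL import Image

interpreter = make_interpreter('ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite')
interpreter.allocate_tensors()

image = Image.open('frame.jpg')  # e.g. a still pulled from a camera feed
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))

interpreter.invoke()  # inference runs on the TPU, which is why CPU stays idle
for obj in detect.get_objects(interpreter, score_threshold=0.4, image_scale=scale):
    print(obj.id, obj.score, obj.bbox)
```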
Keys spread out? I don’t understand…
I appreciate the reply, but I guess I wasn’t clear on what I was asking.
It’s obvious who this is for in the literal sense, what I mean is: what is the use case for this?
On the homelab front, I don’t see enough need to unify my GUI access, and I have roughly 30 containers to manage. At that point, most homelab admins gravitate toward automation.
On the professional front, I can tell you that unifying the keys to the mgmt interfaces of critical infrastructure in a single app is not a welcome tool to see on my junior admins’ desktops. And if it’s simply an interface to the mgmt portals without storing keys, then I’d have doubts about a junior admin who hasn’t developed a personal strategy for managing this themselves.
Don’t get me wrong, I’m happy to encourage you to develop this, but the second you write “trying to make a living from this”, you should know that these questions are coming.
If I were across the table from you trying to understand what you’re selling me, I would want to know:
You can see where this is going. If I buy this tool for use by several people, I don’t want to have to wrap it in vault entries and update scripts just to meet compliance with my client’s environment.
What is your target audience for this? I’m having trouble understanding who this product is for.
Sounds reasonable, and I’m sure you’re on your way to solving this.
Having thought hard about my own storage needs, I’ve found that as long as I can get decent performance and a bit of redundancy, a solid, tested backup plan can fill in the rest in terms of data safety and integrity.
“Which goes to show that being a bum knows no gender.”
Your focus shouldn’t be on what technologies to use, because you can’t know what will help until you know what you’re trying to do.
Define your use case and the problems you can see now, and the technologies to address them will become more apparent.
Please draw from the context of this thread that I mean future deaths.
You’re lost in the semantics. Outcomes like fewer deaths resulting from foreign policy decisions, including belligerent invasions, matter more than perceived political “moral” calculus.
I did. It’s a culture vulture article; you just need to use an incognito tab.
As unpleasant as the content is, just read the article. And remember that lots of folks have trusted Neil Gaiman for a long time (I’m 50) to tell stories they connect with, especially in the 90s when there were fewer writers to do so.
sombre reflection
You apparently still haven’t read the article. Given the reactions to your comment, you may want to go see why the comments are “sombre”, as you put it.
Problem with Poettering is that he was right, but he was a dick about it. Like Rick Sanchez.
No clue what he did (have not yet read the article). Haven’t really consumed any of his media.
I’m surprised everyone else is surprised
This comment didn’t need to be made.
You really, really should use this as a lesson in reading the room: read the article before making a thoughtless comment on something you obviously didn’t fully grasp.
Pen danger to the eye is obvious to most people. Cancer caused by a lifetime of drinking is not.
I’m trying to indicate that docker has its own kinds of problems that don’t really occur for software that isn’t containerized.
I used the immich issue because it was actually NOT indicated as a breaking change by the devs, and the few of us who had migrated the same compose yml from older versions and had a problem were met with “oh, that is a very old config, you should be using the modern one”.
Docker is great, but it requires some specific understanding that isn’t necessarily obvious.
For one, if the compose file’s syntax, structure, or options change (like they did recently for immich), you have to dig through GitHub issues to find that out and re-create the compose file with little guidance.
Not docker’s fault specifically, but it’s becoming an issue as more and more software is shipped as a docker image. Docker democratizes software, but we pay the price in losing perspective on what good dev practice is.
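One habit that softens this: pin an explicit image tag in your compose file instead of riding `:latest`, so schema and option changes only land when you deliberately bump the version and read the release notes. A quick sketch using immich’s server image (the tag here is just an example, check the project’s releases for a current one):

```yaml
services:
  immich-server:
    # Pin a specific release instead of :latest so breaking compose/option
    # changes arrive only when you choose to upgrade. (Example tag only.)
    image: ghcr.io/immich-app/immich-server:v1.119.0
```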
The ZFS overlay / docker snapshot issue has been solved since 2021. Proxmox is also well into 8.3; 8.0 has been stable since early 2023.
Modern supercomputers are actually a mesh of relatively low-spec machines, not a single “computer” per se. The cost isn’t the hardware; it’s the low-latency interconnects and writing software that can carry out jobs in a massively parallel way.
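If you’ve never seen what that software side looks like, here’s a toy mpi4py sketch of the pattern. Real HPC codes are this same shape, just enormously bigger; it assumes an MPI runtime (e.g. OpenMPI) and mpi4py are installed:

```python
# Toy MPI sketch: every rank (a process, potentially on a different node)
# computes its own slice of the job, then results are combined cluster-wide.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job
size = comm.Get_size()   # total number of processes in the job

# Each rank sums its own stride of the range -- embarrassingly parallel.
N = 10_000_000
local = sum(range(rank, N, size))

# The combine step is where the low-latency interconnect earns its cost.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum over {size} ranks: {total}")
```

Launched with something like `mpirun -n 4 python sum_demo.py`; on a real cluster the ranks land on separate machines and that reduce traverses the interconnect.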
There’s a give-and-take here: the vendor needs some lead time to ship a fix, but disclosure has to come soon enough to be responsible to affected users; sit on it indefinitely and it starts to look like pandering to the vendor.
We’ve already seen how much vendors drag their feet when they’re given time to fix a vuln before disclosure, and almost all the major vendors have tried the move of delaying the fix indefinitely unless the vuln becomes public.
Synology hasn’t been very reactive about fixing CVEs unless they’re very public. One nasty vulnerability in the old DSM 6 was found at a hackathon by a researcher (I’ll edit and post the number later), but the fix wasn’t included in the main update stream; you had to go fetch the patch manually and apply it.
Vendors must have their feet held to the fire on vulns, or they don’t bother doing anything.
How efficient is using a GPU for this? My understanding was that it’s nowhere near as efficient, but that may be outdated info.