Does “ignore all previous instructions” actually work on anything anymore? I’ve tried getting some AI bots to do that and it didn’t change anything. I know it’s still very much possible, but it’s not nearly as simple as that anymore
Xylight
I’m the developer of the Photon client. Try it out
- 61 Posts
- 283 Comments
Xylight to Android • ICE is stepping up its smartphone tracking, but Android 16 has a secret weapon • English • 4 • 2 days ago
that’ll get pushed to you regardless of whether you upgrade
Xylight to Asklemmy@lemmy.ml • how did you make the switch from reddit to lemmy.. i'm trying to myself but struggling to ngl • English • 101 • 2 days ago
The majority of the communities I visit on reddit have no real equivalent on Lemmy. The only things on Lemmy are politics, open source, Linux, Android, anti-AI, immediate downvoting of most news, etc.
Lemmy feels more like an individual community rather than a real platform, like lobste.rs with more emphasis on politics.
Xylight to Google • NO I'm NOT using GMail for "business" ... I'd appreciate if you don't pull a "META Business Account" stunt #Google #Gmail. • English • 1 • 6 days ago
It looks like images don’t federate well between Lemmy and Mastodon.
1) Yeah, that’d be nice. The reason I don’t do it is that there are 2 separate API endpoint types: one that only gets the number of notifications (which is lighter on the server), and another that gets the content.
5) The context in Photon is quite annoying. I’ll probably make it better in some way soon.
7) I have a branch in the works that turns menus on mobile into that.
9) You can: change “location” to “subscriptions”. It’s probably not intuitive though.
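A minimal sketch of that two-endpoint pattern (Python for illustration; Photon itself is TypeScript, and the function names here are hypothetical stand-ins, not Photon’s actual code): poll the cheap count endpoint often, and only call the heavier content endpoint when the count changes.

```python
def poll_step(prev_count, fetch_count, fetch_content):
    """One polling tick. fetch_count/fetch_content stand in for the two
    API calls: a lightweight number-only endpoint and a full-content one."""
    count = fetch_count()  # cheap: just the unread count
    if count != prev_count and count > 0:
        # only now pay for the heavier request
        return count, fetch_content()
    return count, None

# usage with stubbed-out API calls
count, items = poll_step(0, lambda: 2, lambda: ["reply", "mention"])
```

This keeps most polling ticks to a single light request, which is the server-load trade-off described above.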
Xylight to Selfhosted@lemmy.world • Using rsync for backups, because it's not shiny and new • English • 41 • 7 days ago
rsync for backups? I guess it depends on what kind of backup.
For redundant backups of my data and configs that I still have a live copy of, I use restic; it compresses extremely well.
I have used rsync to permanently move something to another drive, though.
I guess it’s like a CAPTCHA. It doesn’t completely solve the problem the hoster wishes to solve, but it deters a lot of people from trying.
I tried this again with a GGUF quantized to Q4_K_M. It works quite well and can generate in ~7 minutes! Thanks!
Xylight to Microblog Memes@lemmy.world • Recontextualizing posts (more images in post body) • English • 5 • 11 days ago
On reddit there’s r/recontext. Maybe someone can make a Lemmy community that will get 3 posts and then die permanently.
My guy stop following me around
fixed in a now-merged branch 2 days ago
I don’t know why the charge limit isn’t just a slider
That error message corresponds to the Lemmy error rate_limit_error. It’s probably not a Photon issue.
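If you keep hitting rate_limit_error, one generic client-side mitigation is to retry with exponential backoff. A tiny sketch (the numbers are arbitrary examples, not Lemmy’s actual rate limits):

```python
def backoff_delays(base=1.0, factor=2.0, retries=4):
    """Delays (in seconds) to wait between successive retries after a
    rate-limit error: base, base*factor, base*factor^2, ..."""
    return [base * factor ** i for i in range(retries)]

delays = backoff_delays()  # [1.0, 2.0, 4.0, 8.0]
```

A real client would sleep for each delay in turn and usually add jitter so many clients don’t retry in lockstep.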
XylightOP to Selfhosted@lemmy.world • Suggestions to have a home server VPN and Mullvad at the same time? • English • 31 • 13 days ago
I don’t trust an external third party to manage the coordination server.
Headscale has an issue open for WireGuard-only exit nodes though, so I guess I’ll wait for that.
XylightOP to Selfhosted@lemmy.world • Suggestions to have a home server VPN and Mullvad at the same time? • English • 11 • 13 days ago
I tried self-hosting Tailscale with Headscale, but you cannot have a WireGuard-only exit node with Headscale, and so I can’t have Mullvad as my exit node.
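For context, a plain WireGuard exit (the thing Headscale can’t yet act as a front for) is just a client config like the following sketch. All keys, addresses, and the endpoint are placeholders, not real Mullvad values:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.64.0.2/32
DNS = 10.64.0.1

[Peer]
PublicKey = <provider-server-public-key>
AllowedIPs = 0.0.0.0/0, ::/0   ; route all traffic through the exit
Endpoint = <provider-host>:51820
```

The `AllowedIPs = 0.0.0.0/0, ::/0` line is what makes the peer a full exit node rather than a site-to-site tunnel.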
NodeBB is a modern forum that I believe is ActivityPub-enabled, unless I’m thinking of something else.
XylightOP to Selfhosted@lemmy.world • Suggestions to have a home server VPN and Mullvad at the same time? • English • 51 • 14 days ago
If it turns on with mobile data automatically, that turns off my Mullvad VPN.
yeah, lemme just pull out Safari on Android and Linux for its insane fingerprinting protection and great content blocking support
that is a bug in your browser’s rendering
There is a reason there is sometimes a notable decrease in quality of the same AI model a while after it’s released.
Hosters of the models (like OpenAI or Microsoft) may have switched to a quantized version of their model. Quantization is a common practice to increase power efficiency and make the model easier to run, by essentially rounding the weights of the model to a lower precision. This decreases VRAM and storage usage significantly at the cost of a bit of quality, where heavier quantization results in worse quality.
For example, the base model will likely be in FP16, 16-bit floating-point precision. They may switch to a Q8 (8-bit) version, which nearly halves the size of the model, with roughly a 3-7% decrease in quality.
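A toy sketch of what that rounding looks like, using simple symmetric 8-bit quantization (this is illustrative only; real schemes like GGUF’s Q8/Q4_K_M quantize per-block and store scales differently):

```python
def quantize_q8(weights):
    """Map floats onto int8 range [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]  # each weight becomes one byte
    return q, scale

def dequantize(q, scale):
    """Approximate recovery of the original weights."""
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.45, 0.003, -0.27]
q, scale = quantize_q8(weights)
restored = dequantize(q, scale)
# each restored weight is within half a quantization step of the original
assert all(abs(a - b) <= scale / 2 for a, b in zip(weights, restored))
```

Storing one int8 per weight instead of an FP16 value halves the memory, and the small round-trip error shown by the assertion is the "bit of quality" lost.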