• 1 Post
  • 351 Comments
Joined 1 year ago
Cake day: December 14th, 2023

  • My first guess would be that those nodes have a poor GPS lock and are actually much closer. Unless you’ve received multiple reports from them where the location is about the same (better if it changes a little bit, so you know they’re not just retransmitting the same inaccurate position because they can’t get a good GPS lock), in which case some spooky RF stuff is probably happening.


  • That seems kind of like pointing to reverse engineering communities and saying that binaries are the preferred format because of how much they can do. Sure, you can modify finished models a lot, but what you can do with just pre-trained weights versus being able to replicate the final training or tweak the training parameters is an entirely different beast.

    There’s a reason why the OSI stipulates that the code and parameters used to train are considered part of the “source” that should be released in order to count as an open source model.

    You’re free to disagree with me and the OSI though; it’s not like there’s one true authority on what open source means. If a game that is highly modifiable and moddable despite the source code not being available counts as open source to you because there are entire communities successfully modding it, then all the more power to you.


  • It’s worth noting that OpenR1 have themselves said that DeepSeek didn’t release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn’t be able to replicate it without re-discovering what they did.

    OSI specifically makes a carve-out that allows models to be considered “open source” under their open source AI definition without providing the training data. So when it comes to AI, open source is really about providing the code that kicks off training, any checkpoints used, and enough detail about training data curation that a comparable dataset can be compiled to replicate the results.


  • It really comes down to this part of the “Open Source” definition:

    The source code [released] must be the preferred form in which a programmer would modify the program

    A compiled binary is not the form in which a programmer would prefer to modify the program - any programmer would much rather have the text source files, which can be edited in a text editor. Just because it’s possible to reverse engineer the binary and make changes by patching bytes doesn’t make it count.

    Similarly, the released weights of an AI model are not easy to modify, and are not the “preferred form” that the model’s own engineers use to make changes. They typically make changes to the code that does the training and to the training dataset. So for the purpose of calling an AI “open source”, the training code and the data used to produce the weights are considered the “preferred form”, and are what need to be released for it to really be open source. Internal engineers also typically use training checkpoints, so that they can roll the model back and redo some of the later training steps without redoing all training from the beginning - if checkpoints are used, they’re considered part of the preferred form too.

    OpenR1, which is attempting to recreate R1, notes: “No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales.”

    I would call “open weights” models just “self-hostable” models instead of open source.



  • I think it’s normal to see some variation. I have an official Google replacement battery from iFixit: the capacity marked on the battery is 4050 mAh, the battery settings screen says the design capacity is 4180 mAh, and my current capacity is 4042 mAh. AccuBattery, meanwhile, says the design capacity is 4000 mAh.

    If you’re worried, I would use something like AccuBattery and let it take measurements for a week (while trying to discharge down to a few percent a couple of times, and also charging uninterrupted to 100%), then see whether the estimated battery capacity measures up to what you expect. If you get close to or above 4000 mAh (which would make it about 90%), I think it should be fine. If it’s much less, think about having it replaced.
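
    For a rough sense of the math - battery health is just measured capacity divided by design capacity - here’s a quick sketch using my own battery’s numbers (substitute your own readings):

    # Battery health ~ measured capacity / design capacity.
    # These numbers are from my battery; plug in your own readings.
    design_mah = 4180    # design capacity shown in battery settings
    measured_mah = 4042  # capacity AccuBattery estimated after a week

    print(f"Estimated health: {measured_mah / design_mah:.1%}")  # ~96.7%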



  • I’m pretty sure that if you rip CDs directly to FLAC, it’s a perfect copy, assuming you’re using good software. PCM isn’t lossy or lossless compression because it isn’t a compressed format at all - it’s the uncompressed bitstream, so think of it as the original data. If the audio was burned to the CD from MP3 data and you then ripped that to FLAC, then yes, you’d be going from lossy to lossless, which would hide the fact that quality was already lost when it went to MP3 in the first place.

    Just as an example: you can rip a CD directly to FLAC (you should also find and use the correct sample offset for your CD drive), rip the cue sheet for track alignment, then burn the FLAC back to a new CD using the cue sheet (and the correct write offset configuration), and you’ll get a CD with the exact bit-for-bit pattern of “pits” burned into the data layer.

    You can then rip both CDs to raw uncompressed wav files (wav is basically just a container for PCM data), md5sum both files, and see that they are identical.

    This is how I test my FLAC rips to make sure I’m preserving everything. It’s also how CD rip verification databases (like AccurateRip) work: people across the globe rip to wav or FLAC, and because it’s the same master of the CD, they get identical checksums. Even after converting the PCM/wav into a FLAC, you can still checksum it and verify it’s bit-for-bit identical.
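
    If you want to script that comparison, here’s a minimal sketch in Python (the file names are placeholders for your two rips; hashing just the decoded PCM frames rather than the whole file avoids false mismatches from differing header/metadata chunks):

    import hashlib
    import wave

    def pcm_md5(path):
        # Hash only the raw PCM frames, not the wav header/metadata,
        # so harmless container differences don't cause false mismatches.
        with wave.open(path, "rb") as w:
            return hashlib.md5(w.readframes(w.getnframes())).hexdigest()

    # "rip1.wav" / "rip2.wav" are placeholder names for the two rips
    a = pcm_md5("rip1.wav")
    b = pcm_md5("rip2.wav")
    print("identical" if a == b else "DIFFERENT", a, b)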


  • I basically don’t notice that I don’t have a headphone jack. My usb-c adapter is just permanently affixed to my wired IEMs and it basically makes no difference to me if the plug is round or usb-c shaped.

    I definitely recommend biting the bullet and getting a good adapter. Since I have a Pixel I use the Google one, and I made sure my partner got an official Apple one for their iPhone, since I remember seeing rumors about a volume difference when you mix and match. Aside from Apple shenanigans I haven’t really had an issue with them. I also only charge at night, so I never have the problem of needing to charge and listen at the same time.


  • I think they’re making it more complicated than it needs to be. On any other social media site, you find people by their username, so just ask for your friend’s username (username@instance.com), put it in the search bar, and their profile will come up. Using the URL can be convenient on desktop because you can just copy and paste it from the address bar when you’re looking at someone’s profile.

    And if you want to discover new people when you don’t already know their username, that works the same as on any other social media: you can come across them in the comments of people you follow, or go to the discover tab, or search hashtags, and you’ll find new people that you can tap on and follow.

    I feel like that basically covers how you find people. A lot of people get hung up on how you’re supposed to know what instance other people are on, but it usually doesn’t matter: either someone gives you their username, which includes @instance.com, or, if you don’t know the instance, you can search for their name and all known accounts with that name will show up.

    For example, if I just search my username “BakedCatboy” (not my real username), the search results show both my Mastodon and Pixelfed accounts.



  • I’m curious to see what suggestions you get. When it comes to “easy to set up”, usually Plex is the answer, and the only major alternative I know of is Jellyfin, which I assume is what you’re referring to when you mention reverse proxies.

    I’m even willing to build docker containers, set up wireguard tunnels to a VPS, and reverse proxy through a chain of 2 reverse proxies (haproxy on the VPS and traefik on the local machine, which is how I reverse proxy Plex and Jellyfin alike), yet I still use Plex because most of my friends/family prefer the Plex UI (though with how buggy the Plex app has been for some of my iOS users, I think the Jellyfin apps could close the gap soon™).


  • Yep, I’m pretty sure you can still use spools without tags and set the filament settings manually, but since they control the firmware and can block downgrades, they could at any time require RFID tags in order to print. And since the tags have proven to be mostly cryptographically secure, that leaves open an avenue for them to lock out third-party filament. You can currently clone the tags, but in theory they could treat them like printer cartridges: recognize when a full spool’s length has been printed from any specific RFID tag ID, then block printing with that tag ID. That would make cloning the tags useless and force you to buy only Bambu filament, just like HP and the other printer companies do with ink.



  • Sounds likely. I haven’t used port forwarding with my VPN since Mullvad stopped supporting it, so when I recently shared my own torrent I paid for 1 month of a seedbox just to make sure it seeded well. The seedbox uploaded ~50GB, while my local setup, on a VPN without port forwarding, only uploaded 1.8GB (and it hardly showed any peers, as if nobody was trying to download). So it seems peers had a much easier time connecting to the seedbox.

    I have since set up port forwarding in gluetun for my local torrent client. I just wish there was more support for it: gluetun only has built-in support for automatically requesting a forwarded port from 2 providers, and even then you still have to write your own script to set the port in the torrent client whenever it’s assigned or changed. It’s possible that some providers do it more like Mullvad, where you get assigned a port via the website that’s tied to your VPN credentials, so you just plug the assigned port into the torrent client settings once and forget about it (that’s how it worked with Mullvad), but I haven’t checked other providers.
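
    For anyone wanting to roll their own, here’s a rough sketch of that kind of script in Python, assuming qBittorrent as the client (with the WebUI’s “bypass authentication for clients on localhost” option enabled) and gluetun writing the forwarded port to its status file. The paths and URL are illustrative, so check your own setup:

    import json
    import time
    import urllib.parse
    import urllib.request

    PORT_FILE = "/tmp/gluetun/forwarded_port"  # gluetun's forwarded-port status file (default path)
    QBIT_URL = "http://localhost:8080"         # qBittorrent WebUI address (illustrative)

    def set_listen_port(port):
        # qBittorrent WebUI API call to change the incoming connection port
        body = urllib.parse.urlencode({"json": json.dumps({"listen_port": port})}).encode()
        urllib.request.urlopen(f"{QBIT_URL}/api/v2/app/setPreferences", data=body)

    last_port = None
    while True:
        try:
            with open(PORT_FILE) as f:
                port = int(f.read().strip())
            if port != last_port:  # only push an update when the assignment changes
                set_listen_port(port)
                last_port = port
        except (OSError, ValueError):
            pass  # file missing or empty while the port is still being assigned
        time.sleep(60)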




  • Partially, yes. The tricky thing is that when using network_mode: "service:tailscale" (presumably on the caddy container, since that’s what needs to receive traffic from the tailscale network), you won’t be able to attach the caddy container to any docker networks, since it’s using the tailscale container’s network stack. This means that for caddy to reach your containers, you will need to add the tailscale container itself to the relevant networks; caddy, and anything else attached via network_mode, will be connected through it as well.

    (Not sure if I misread the first time or if you edited, but the way you say it is right: add the tailscale container to the proxy network so that caddy is also added and can reach the containers.)

    Here’s the super condensed version of what matters for connecting traefik/caddy to a VPN like wireguard/tailscale:

    • I left out all WG config since presumably you know how to configure tailscale
    • Left out acme / letsencrypt stuff since that would be different on caddy anyway
    • You may need to configure caddy to trust the tailscale tunnel IP of the machine on the other end that will be reverse proxying over the tunnel.
    • Traefik requires you to specify which docker network to use to reach services; as you can see, I just put anything that should be accessible into “ingress”. I’m not sure if my setup supports using a different proxy network per app, but maybe caddy allows that.

    My traefik compose:

    services:
      wireguard:
        container_name: wireguard
        networks:
          - ingress
    
      traefik:
        network_mode: "service:wireguard"
        depends_on:
          - wireguard
        command:
          - "--entryPoints.web.proxyProtocol.trustedIPs=10.13.13.1" # Trust remote tunnel IP, the WG container is 10.13.13.2
          - "--entrypoints.websecure.address=:443"
          - "--entryPoints.websecure.proxyProtocol.trustedIPs=10.13.13.1"
          - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
          - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
          - "--entrypoints.web.http.redirections.entrypoint.priority=100"
          - "--providers.docker.exposedByDefault=false"
          - "--providers.docker.network=ingress"
    
    networks:
      ingress:
        external: true
    
    

    And then in a service’s docker-compose:

    services:
      ui:
        image: myapp
        read_only: true
        restart: always
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.myapp.rule=Host(`xxxx.xxxx.xxxx`)"
          - "traefik.http.services.myapp.loadbalancer.server.port=80"
          - "traefik.http.routers.myapp.entrypoints=websecure"
          - "traefik.http.routers.myapp.tls.certresolver=mytlschallenge"
        networks:
          - ingress
    
    networks:
      ingress:
        external: true
    
    



  • I’ve done something similar but I’m not sure how helpful my example would be because I use wireguard instead of tailscale and traefik instead of caddy.

    The principle is the same though. IIRC I have my traefik container set to network_mode: "service:wireguard" so that the traefik container uses the wireguard container’s network stack. That way the traefik container also sees the wireguard interface and can receive traffic going to the wireguard IP. Then, at the other end of the wireguard tunnel, I can use haproxy to pass traffic to the wireguard IP through the tunnel, and it automatically hits traefik.