People resoundingly suggested using containers, so I’ve been reading up. I know some things about containers and Docker and whatnot, but there are a few decision points in the Jellyfin container install instructions where I don’t know the “why”.

Data: They mount the media from disk, which is good because mine is on a NAS. But for the cache and config they use Docker volumes. Why would I want a Docker volume for the config? Wouldn’t I want to be able to see it from outside the container more easily? What am I gaining by having Docker manage the volume?

Cache: I saw a very old post where someone mentioned telling Docker to use RAM for the cache. That “seems” in theory like a good idea for speed, and I do have 16 GB on the mini PC I’m running this all on. But I don’t see any recent mentions of it. Any pros/cons?

The user: I know from work experience that you generally don’t want things running as root in the container. But… do you want a dedicated user for each service (Jellyfin, the *arrs)? Or one for all services, but not your personal user? Or just your personal user?

DLNA: I had to look that up, but I don’t see how it’s relevant. The whole point seems to be that Jellyfin would be the interface, and DLNA seems like it would let certified devices discover media files?

  • tvcvt@lemmy.ml · 7 days ago

    I don’t think there’s a right answer for most of these, but here are my thoughts.

    Data: I almost always prefer bind mounts. I find them easier to manage for data that I’ll need to deal with (e.g. with backups). Docker volumes make a lot of sense to me when you start dealing with multiple nodes and central management, where you want to move containers between nodes (like a swarm).
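
    For illustration, here’s roughly what the two options look like as plain docker run commands. Paths and names here are made up, not from the official instructions, though /config and /cache are the container paths the Jellyfin image expects:

    ```
    # Bind mounts: config and cache live at host paths you choose, so you
    # can inspect and back them up with ordinary tools:
    docker run -d --name jellyfin \
      -v /srv/jellyfin/config:/config \
      -v /srv/jellyfin/cache:/cache \
      -v /mnt/nas/media:/media \
      jellyfin/jellyfin

    # Named volumes: Docker manages the storage itself (typically under
    # /var/lib/docker/volumes/ on the host):
    docker run -d --name jellyfin \
      -v jellyfin-config:/config \
      -v jellyfin-cache:/cache \
      -v /mnt/nas/media:/media \
      jellyfin/jellyfin
    ```

    Either way the container sees the same paths; what changes is where the data lives on the host and who manages its lifecycle.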

    Cache: streaming video isn’t super latency-sensitive, so I can’t think of a need for this type of caching. With multiple users hitting the web interface all the time it might help, but I think I’d do that caching in my reverse proxy instead.

    User: I don’t use the *arr stack, but I’d imagine that suite of applications and Jellyfin all need to handle the same files, so I’d be inclined to use the same user (or at least group) on all of them.

    DLNA: this is a feature I don’t make much use of, but it allows for Jellyfin to serve media to devices that don’t run a Jellyfin client. It’s an open standard for media sharing among local devices. I don’t think I would jump through any hoops for it unless you have a use, but the default setup won’t get in your way.

    Hope that helps a little.

    • SailorsLife@lemmy.world (OP) · 6 days ago

      It does help, thanks. Part of this set of questions was just me exploring my thoughts and looking to learn… so I have a follow-up question or two.

      You mention Docker volumes make a lot of sense with multiple nodes. How does that work out? We use PVs and such with k8s at work, and the ones we use can only be mounted on one node at a time. From what others have said, allowing writes from multiple nodes has a lot of complications. Do Docker volumes handle writing from multiple nodes?

      And… “streaming video isn’t super latency-sensitive”. I’m super new to streaming video; I would have expected it to be sensitive to latency. I mean, you expect the video to keep playing and not stop, whereas most of the things I work with (APIs and whatnot) can take an extra second or two to respond with little relevant difference. So clearly there is some depth here I don’t understand.

  • schizo@forum.uncomfortable.business · 6 days ago

    The only thing I’d mention on the cache is to be a little careful, because depending on your actual use case you can use a LOT of transcode cache space.

    If it’s just you, doing one stream, it probably doesn’t matter.

    If it’s you, and your 20 closest friends, well, uh, it can be quite a lot and maybe you won’t want it in RAM.
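
    If you do want to experiment with the RAM idea anyway, a tmpfs mount with a hard size cap is the usual mechanism. A minimal sketch, assuming the transcode cache lives under the image’s /cache path:

    ```
    # Back /cache with RAM, capped at 4 GB so a burst of transcodes can't
    # eat all 16 GB; note the contents vanish on container restart:
    docker run -d --name jellyfin \
      --tmpfs /cache:rw,size=4g \
      -v /mnt/nas/media:/media \
      jellyfin/jellyfin
    ```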

    As for the media, a bind mount is the way to go, and I’d also recommend doing it as a read-only mount: Jellyfin doesn’t need the ability to modify that data, and in the event of a security oopsie (or a misconfigured user, or a 6 year old that gets 5 minutes alone with your mouse or…), it keeps someone from trashing your entire media library, assuming that’s something you wouldn’t want to have to spend the time gathering again.
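
    Concretely, read-only is just a `:ro` suffix on the bind mount (host path illustrative):

    ```
    # The container can stream from /media but can't modify or delete
    # anything under it, even if the Jellyfin process is compromised:
    docker run -d --name jellyfin \
      -v /mnt/nas/media:/media:ro \
      jellyfin/jellyfin
    ```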

    For the user, I just have a ‘service’ account, and run the vast majority of my containers under that UID. Sure, maybe that’s not the MOST secure, but it’s worlds better than root, and container escapes are not exactly common so it’s probably sufficient.
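
    As a sketch of that pattern, with a made-up ‘svc-media’ account name:

    ```
    # One unprivileged system account for the whole media stack:
    sudo useradd --system --no-create-home --shell /usr/sbin/nologin svc-media

    # Run the container as that UID:GID instead of root:
    docker run -d --name jellyfin \
      --user "$(id -u svc-media):$(id -g svc-media)" \
      -v /srv/jellyfin/config:/config \
      -v /mnt/nas/media:/media:ro \
      jellyfin/jellyfin
    ```

    The same --user value can be reused across the *arr containers so they all share file ownership.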

    …and if you get DLNA working let me know, because I never have. I just use Jellyfin clients everywhere because that at least does what you expect in terms of showing the media in a usable format and playing it.

    • Appoxo@lemmy.dbzer0.com · 5 days ago

      “As for the media, a bind mount is the way to go, and I’d also recommend doing it as a read-only mount: Jellyfin doesn’t need the ability to modify that data…”

      My way to solve this:
      My main user is a regular user with no deletion permissions in Jellyfin. Anything that requires editing necessitates logging out and back in with the admin account.
      My Docker container is mapped to a non-root user. Not perfectly safe, but sufficient (hopefully).
      But my Jellyfin container has R/W because I store nfo/metadata files alongside the media files.

      • I understand@mstdn.social · 5 days ago (edited)

        @Appoxo

        I use two media folders, one for “new” media and one for existing media. Only the “new” media folder is R/W. Once its metadata files are written out, the media is moved to the existing media folder (which is mounted read-only).
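
        In mount terms that scheme looks something like this (paths hypothetical):

        ```
        # "new" media is writable so nfo/metadata can be written next to
        # the files; the settled library is mounted read-only:
        docker run -d --name jellyfin \
          -v /mnt/nas/media-new:/media/new \
          -v /mnt/nas/media:/media/library:ro \
          jellyfin/jellyfin
        ```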

        • Appoxo@lemmy.dbzer0.com · 5 days ago

          What if you manually edit the metadata?
          Seems like a hassle to me that requires too much manual input.

          • SailorsLife@lemmy.world (OP) · 4 days ago

            By the way… great discussion. I’m reading along and learning about things I didn’t think of before. So thanks.

    • SailorsLife@lemmy.world (OP) · 6 days ago

      “or a 6 year old that gets 5 minutes alone with your mouse” haha. I have a 10 year old with a tendency to be inquisitive with electronic devices. He is pure of heart, but we joke that some day the NSA is going to come knocking. He wouldn’t hack a bank to get money; he would just be “exploring” what is possible instead of reading directions. lol. Question, though: when you do want to delete something, I’m guessing you log on to your media server and do it from your user account?

      • schizo@forum.uncomfortable.business · 6 days ago

        I use the *arr stack for deletion, usually.

        Lots of people have accounts on the jellyfin/jellyseerr stack, but I’m the only one with access to the *arrs, so I just manage it (mostly) from there.

  • Appoxo@lemmy.dbzer0.com · 5 days ago

    Data: I mounted my config to my host system and passed it through for exactly the reason you mentioned. I’ve had situations that necessitated deleting stuff in /config or reading the logs inside with tail (see the sketch after this list).
    Cache: When I used a Pi, I used a USB key as sacrificial swap storage, to spare the micro-SD and make up for the limited RAM. Now I have an SSD and don’t care that much, as I have daily backups.
    User: linuxserver.io images offer to map the container user to a host user; the local user inside the container is called “abc”. Just make sure the files have the proper permissions.
    DLNA: Only matters if you have devices that can’t use the app, like smart TVs; those usually can still interact via DLNA. No such devices = no issue. I disabled it.
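
    For the Data point above, the payoff of a bind-mounted config is that ordinary host tools work on it directly. A sketch with illustrative paths:

    ```
    # Assuming /srv/jellyfin/config is bind-mounted to /config:
    ls /srv/jellyfin/config                 # poke around without entering the container
    tail -f /srv/jellyfin/config/log/*.log  # follow Jellyfin's own log files
    docker logs -f jellyfin                 # container stdout/stderr works either way
    ```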

  • LainTrain@lemmy.dbzer0.com · 7 days ago (edited)

    Can’t speak to the RAM thing. My cache is a 320 GB Toshiba hard drive I dug out of an old laptop in 2014. I haven’t really had issues, but I don’t do a lot of high-fidelity transcodes, since my local devices tend to support the codecs natively and remote streams are limited by my upload speed anyway (residential fiber, asymmetric speeds).

    “They mount the media from disk, which is good because mine is on a NAS. But for the cache and config they use Docker volumes. Why would I want a Docker volume for the config?”

    Better performance, useful for cache.

    “The user: I know from work experience that you generally don’t want things running as root in the container.”

    Doesn’t matter if you don’t expose it to the internet.

    I run the Docker daemon as root, only have one user on the server with sudo, and I removed all firewall packages. I don’t care about any of it because NAT means nothing can access it from outside without a VPN; everything else that needs to be public goes via Cloudflare tunnels, and I have a separate device with only an exposed VPN server using key+password auth for the services that are LAN-only.

    A good NAT fixes all problems. Just don’t use that demonic IPv6 crap, don’t use UPnP, don’t expose random ports (SSH etc.) and you’re good, speaking as an MSc and employed cybersec engineer of several years and aspiring pentester (Hacker rank on HtB btw, I use Arch btw, etc. etc.).

    If it needs to be public that’s a very different story.

    If you want actual security/defense in depth, then yes: you want a separate user per service with no path to root, with ACLs on the least-privilege principle, where each service can only run the executables needed for the barest essentials, so no interactive shells and none of most of your bins (use setfacl for this), and any scripts should have hard-coded paths. Be especially careful with what you actually expose via any mounts, run something like LinPEAS to look for misconfigs, and if you do any SMB/Windows/AD stuff, run enum4linux-ng and the like. And of course use unattended upgrades and refresh containers regularly.
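
    A minimal sketch of the ACL part, assuming a hypothetical svc-jellyfin account and media path; adjust to your layout:

    ```
    # Dedicated account with no interactive shell:
    sudo useradd --system --no-create-home --shell /usr/sbin/nologin svc-jellyfin

    # Read-only access to the media tree via POSIX ACLs, without touching
    # the owning user/group ('X' grants execute on directories only):
    sudo setfacl -R -m u:svc-jellyfin:rX /srv/media
    sudo setfacl -R -d -m u:svc-jellyfin:rX /srv/media  # default ACL for new files

    # Verify what was actually granted:
    getfacl /srv/media
    ```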

    “The whole point seems to be that Jellyfin would be the interface, and DLNA seems like it would let certified devices discover media files?”

    Basically, if you want other devices to control Jellyfin without a client native to them, enable it. Ever used casting? It works like that in practice. In principle it uses broadcast protocols like Bonjour/Rendezvous.
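
    If you do enable it and devices still can’t find the server, the commonly cited culprit with containers is that SSDP discovery (multicast on UDP 1900) doesn’t cross Docker’s default bridge network; the usual workaround is host networking. A sketch, with illustrative paths:

    ```
    # Host networking lets SSDP multicast reach the container, so DLNA
    # devices can actually discover the server:
    docker run -d --name jellyfin \
      --net=host \
      -v /srv/jellyfin/config:/config \
      -v /mnt/nas/media:/media:ro \
      jellyfin/jellyfin
    ```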

    • SailorsLife@lemmy.world (OP) · 6 days ago

      Thanks.

      Interesting. I didn’t think about performance; I can see how a Docker volume could be better optimized, and for a cache that makes sense. I was considering a bind mount for the config for easier visibility when debugging things, but keeping the volume for the cache now makes sense… thanks for that.

      I technically work for a company that is in the security space, but I myself just can’t really get into it. It seems like there are always so many things that could be done to improve security, but in companies there are never the resources to do most of them, and that would really eat at me. We hire companies to do pen testing. They seem like home inspectors: they have to find a few things to help the customer (us) justify the expense, but once they do, they don’t need to look much deeper. And half the things they find will be lows/mediums that will never get fixed. In the end, the only reason companies seem to hire them is so they can advertise that they did, or to meet their customers’ security requirements. All in all, it just feels so sad. :(

      Anyway, if I’m following you… you run a custom NAT for your home network? I know my router has one, but it sounds like you don’t trust the routers, is that right? And then you run a VPN server on the inside to handle any external access. That seems smart. Is that common practice, or something you do because of your background?

      • LainTrain@lemmy.dbzer0.com · 5 days ago

        I don’t run a custom NAT, I just don’t port-forward much.

        I have my ISP’s router as a gateway/fiber endpoint, hooked up to a TP-Link 1-gig switch and an Archer C7 running OpenWRT as a semi-dumb AP/switch, and I handle DHCP to give clients my recursive DNS.