
A few Jellyfin-in-a-container setup questions

People resoundingly suggested using containers, so I've been reading up. I know some things about containers and Docker and whatnot, but there are a few decision points in the Jellyfin container install instructions where I don't know the "why".

Data: The instructions mount the media from disk, which is good because mine is on a NAS. But for the cache and config they use Docker volumes. Why would I want a Docker volume for the config? Wouldn't I want to be able to see it from outside the container more easily? What am I gaining by having Docker manage the volume?

Cache: I saw a very old post where someone mentioned telling Docker to use RAM for the cache. That seems, in theory, like a good idea for speed, and I do have 16 GB on the mini PC I'm running this all on. But I don't see any recent mentions of it. Any pros/cons?

The user: I know from work experience that you generally don't want things running as root in the container. But... do you want a dedicated user for each service (Jellyfin, the *arr apps)? One user for all services, but not your personal user? Or just your personal user?

DLNA: I had to look that up, but I don't see how it's relevant. The whole point seems to be that Jellyfin is the interface, and DLNA seems like it would allow certified devices to discover media files directly?

14 comments
  • I don’t think there’s a right answer for most of these, but here are my thoughts.

    Data: I almost always prefer bind mounts. I find them easier to manage for data that I’ll need to deal with (e.g. with backups). Docker volumes make a lot of sense to me when you start dealing with multiple nodes and central management, where you want to move containers between nodes (like a swarm).
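
    To make that concrete, the only difference is how the -v flag is written (the image name and container paths below follow the official Jellyfin docs; the host path is just an example):

    ```shell
    # Named volume: Docker manages the storage (under /var/lib/docker/volumes/ on Linux)
    docker run -d --name jellyfin \
      -v jellyfin-config:/config \
      jellyfin/jellyfin

    # Bind mount: config lives at a host path you choose, easy to inspect and back up
    docker run -d --name jellyfin \
      -v /opt/jellyfin/config:/config \
      jellyfin/jellyfin
    ```

    With the bind mount you can just `tail` a log or restore the directory from a backup without going through `docker volume` commands.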

    Cache: streaming video isn’t super latency sensitive, so I can’t think of a need for this type of caching. With multiple users hitting the web interface all the time it might help, but I think I’d do that caching in my reverse proxy instead.
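
    For reference, proxy-side caching of the static web client might look something like this in nginx (a sketch; the hostname and paths are assumptions, and 8096 is Jellyfin's default HTTP port):

    ```nginx
    # Cache Jellyfin's static web client files at the proxy; don't cache the API or streams
    proxy_cache_path /var/cache/nginx/jellyfin keys_zone=jellyfin:10m max_size=1g inactive=7d;

    server {
        listen 80;
        server_name jellyfin.example.com;

        location /web/ {
            proxy_pass http://127.0.0.1:8096;
            proxy_cache jellyfin;
            proxy_cache_valid 200 7d;
        }

        location / {
            proxy_pass http://127.0.0.1:8096;
        }
    }
    ```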

    User: I don’t use the *arr stack, but I’d imagine that suite of applications and Jellyfin all need to handle the same files, so I’d be inclined to use the same user (or at least group) on all of them.

    DLNA: this is a feature I don’t make much use of, but it allows for Jellyfin to serve media to devices that don’t run a Jellyfin client. It’s an open standard for media sharing among local devices. I don’t think I would jump through any hoops for it unless you have a use, but the default setup won’t get in your way.

    Hope that helps a little.

  • The only thing I'd mention on the cache is to be a little careful, because depending on your actual use case you can use a LOT of transcode cache space.

    If it's just you, doing one stream, it probably doesn't matter.

    If it's you, and your 20 closest friends, well, uh, it can be quite a lot and maybe you won't want it in RAM.
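
    If you do want the transcode scratch space in RAM, a size-capped tmpfs keeps a burst of transcodes from eating all your memory. A sketch (assuming transcodes land under /cache/transcodes — check your Transcoding settings):

    ```shell
    # --tmpfs mounts a RAM-backed filesystem inside the container, capped here at 4 GB
    docker run -d --name jellyfin \
      --tmpfs /cache/transcodes:size=4g \
      jellyfin/jellyfin
    ```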

    As for the media, a bind mount is the way to go, and I'd also recommend doing it as a read-only mount: Jellyfin doesn't need the ability to modify that data, and in the event of a security oopsie (or a misconfigured user, or a 6 year old that gets 5 minutes alone with your mouse or....), it keeps someone from trashing your entire media library, assuming that's something you wouldn't want to have to spend the time gathering again.
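
    The read-only part is just a flag appended to the bind mount (host path is an example):

    ```shell
    # ':ro' makes the mount read-only inside the container
    docker run -d --name jellyfin \
      -v /mnt/nas/media:/media:ro \
      jellyfin/jellyfin
    ```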

    For the user, I just have a 'service' account, and run the vast majority of my containers under that UID. Sure, maybe that's not the MOST secure, but it's worlds better than root, and container escapes are not exactly common so it's probably sufficient.
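
    That setup is roughly the following (the 'svc' username is just an example):

    ```shell
    # Create a locked-down service account on the host, then run containers under its UID/GID
    sudo useradd --system --no-create-home --shell /usr/sbin/nologin svc
    docker run -d --name jellyfin \
      --user "$(id -u svc):$(id -g svc)" \
      jellyfin/jellyfin
    ```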

    ...and if you get DLNA working let me know, because I never have. I just use Jellyfin clients everywhere because that at least does what you expect in terms of showing the media in a usable format and playing it.

Can't speak to the RAM thing. My cache is a 320 GB Toshiba hard drive I dug out of an old laptop in 2014, and I haven't really had issues. But I don't do a lot of high-fidelity transcodes: my local devices tend to support the codecs natively, and remotely I'm limited by upload speed anyway (residential fiber, asymmetric speeds).

    They mount the media from disk, which is good cause it's on a NAS. But for the cache and config they use docker volumes. Why would I want a docker volume for the config?

Better performance, which is useful for the cache. (The difference mostly shows up on Docker Desktop for macOS/Windows, where bind mounts cross a filesystem-sharing layer; on native Linux it's usually negligible.)

    The user. I know from work experience that generally you don't want things running as root in the container.

It matters a lot less if you don't expose it to the internet.

I run the Docker daemon as root, have only one user on the server with sudo, and I removed all firewall packages. I don't care about any of it because NAT means nothing can access it from outside without a VPN. Anything that needs to be public goes through Cloudflare Tunnels, and I have a separate device exposing only a VPN server (key + password auth) for services available only on the LAN.

A good NAT fixes most problems: just don't use that demonic IPv6 crap, don't use UPnP, don't expose random ports (SSH, etc.), and you're good. Speaking as an MSc and employed cybersec engineer of several years and aspiring pentester (Hacker rank on HtB btw, I use Arch btw, etc. etc.).

    If it needs to be public that's a very different story.

If you want actual security / defense in depth, then yes: you want a separate user per service with no path to root, with ACLs following the least-privilege principle, so each user can only run the executables it absolutely needs — no interactive shells, no access to most of your bins (use file ACLs for this). Any scripts should use hard-coded paths. Be especially careful with what you actually expose via mounts. Run something like LinPEAS to look for misconfigurations, and if you do any SMB/Windows/AD stuff, run enum4linux-ng and the like. And of course use unattended upgrades and refresh your containers regularly.
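
    The ACL part of that might look like the following (a sketch; the user and path are examples):

    ```shell
    # Read-only, least-privilege access to the library for a per-service user
    sudo useradd --system --shell /usr/sbin/nologin jellyfin-svc
    sudo setfacl -R -m u:jellyfin-svc:rX /mnt/nas/media      # existing files and dirs
    sudo setfacl -R -d -m u:jellyfin-svc:rX /mnt/nas/media   # default ACL for new files
    getfacl /mnt/nas/media                                   # verify the result
    ```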

    The whole point seems to be that jellyfin would be the interface. And DLNA seems like it would allow certified devices to discover media files?

Basically: if you want other devices to play your media without a native Jellyfin client, enable it. Ever used casting? It works like that in practice. Under the hood it uses multicast discovery on the local network (SSDP, part of UPnP — similar in spirit to Bonjour/mDNS).

  • Data: I mounted my config to my host system and passed it through, exactly for the reason you mentioned. I've had situations that required deleting stuff in /config or reading the logs inside with tail.

    Cache: when I ran this on a Pi, I used a USB key as sacrificial storage so swap wouldn't hammer the micro-SD or fill up the limited RAM. Now I have an SSD and don't worry about it much, since I have daily backups.

    User: LinuxServer.io images let you map the container user to a host user; the local user inside the container is called "abc". Just make sure the files have proper permissions.

    DLNA: only matters if you have devices that can't use the app, like some smart TVs, which can usually still interact via DLNA. No such devices = no issue. I disabled it.
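
    For reference, the LinuxServer.io user mapping is done with environment variables (the host user "media" is just an example):

    ```shell
    # The in-container 'abc' user gets remapped to these host IDs
    docker run -d --name jellyfin \
      -e PUID=$(id -u media) \
      -e PGID=$(id -g media) \
      lscr.io/linuxserver/jellyfin
    ```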
