I'm not sure why Docker would be a particularly good (or particularly bad) fit for the scenario you're referring to.
If you're suggesting that Docker could make it easy to transfer a system onto a new SD card if one fails, then yes that's true ... to a degree. You'd still need to have taken a backup of the system BEFORE the card failed, and if you're making regular backups then to be honest it will make little difference if you've containerised the system or not, you'll still need to restore it onto a new SD card / clean OS. That might be a simpler process with a Docker app but it very much depends on which app and how it's been set up.
I think the idea is rather that a read-only container - as the name implies - only reads from the SD card and never writes to it. Since SD cards aren't exactly great at handling frequent writes, that could increase the lifetime of the card.
I'm still struggling to understand what advantage Docker brings to the set-up.
Maybe the application doesn't need to write anything to disk at all (which seems unlikely) but if so, then you're not saving any disk-write cycles by using docker.
Or maybe you want it only to write to filesystems mounted from longer-life storage e.g. magnetic disk and mark the SD card filesystems as --read-only. In which case you could mount those filesystems directly in the host OS (indeed you have to do this to make them visible to docker) and configure the app to use those directly, no need for docker.
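If you did want to go the Docker route for this anyway, the set-up described above can be sketched in a compose file. This is just an illustration - the service name, image, and the `/mnt/hdd` mount path are hypothetical placeholders, not anything from the thread:

```yaml
# Hypothetical sketch: run the container with a read-only root fs,
# directing all writes to a bind mount on longer-life storage.
services:
  myapp:
    image: myapp:latest        # placeholder image name
    read_only: true            # container root filesystem is mounted read-only
    volumes:
      # /mnt/hdd must already be mounted in the host OS (e.g. a USB HDD/SSD);
      # Docker can only bind-mount what the host has already mounted.
      - /mnt/hdd/myapp-data:/var/lib/myapp
```

Which rather proves the point: the host has to mount that external filesystem first, and the app could just as well be configured to use it directly.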
Docker has many great features, but at the end of the day it's just software - it can't magic away some of the foundational limitations of system architecture.
honestly, it's not worth it. hard drives are cheap, just plug one in via USB 3 and direct all the write operations there. that way your little SBC doesn't suffer the performance overhead of using docker.
The point about an external drive is fine (I did that on my RPi as well), but the point about performance overhead due to containers is incorrect. The processes in the container run directly on the host - you can even see them in ps. They are simply confined using cgroups and namespaces to be isolated to different degrees.
docker images carry a ton of extra stuff from the OS they were built on. Normally a light distro like Alpine Linux is used as the base image, but you're still running more processes than if you installed things natively.
Of course the images don't contain a kernel, but they still bundle a lot of extra services that would be unnecessary if you ran the app natively.
I use docker myself on my RPi4, but the OS is on a 128 GB SSD connected through USB 3. These SSDs are pretty cheap nowadays and (likely?) more resilient than SD cards...
Unless you make your host OS read-only, it will itself keep writing to the SD card while running your docker containers. Furthermore, slapping read-only on a docker container won't magically make the software inside it able to run correctly with an RO root fs - the OS inside the container must be able to run with an RO root fs to begin with, which is exactly the same problem you need to solve for the host OS. So you see, it's the same problem and docker doesn't solve it. It's certainly possible to make a Linux OS that runs on an RO root fs, and that's what you need to focus on.
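For the container side of that problem, the usual trick is to combine a read-only root fs with tmpfs mounts for the few paths the software insists on writing to, so those writes land in RAM instead of on the card. A hedged sketch in compose form (service name, image, and paths are assumptions for illustration - which paths actually need to be writable depends entirely on the app):

```yaml
# Hypothetical sketch: read-only root fs, with in-RAM tmpfs mounts
# for the paths the containerised software typically needs to write.
services:
  myapp:
    image: myapp:latest   # placeholder image name
    read_only: true       # root filesystem mounted read-only
    tmpfs:
      - /tmp              # scratch space in RAM, never touches the SD card
      - /run              # pid files, sockets, etc.
```

Note this does nothing for the host OS's own writes (logs, swap, etc.) - that still has to be dealt with separately, as above.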
I think Docker is a tool, and it depends on how you implement said tool. You can use Docker in ways that make your infra more complicated, less efficient, and more bloated with little benefit, if not a loss of benefits. You can also use it in a way that promotes high uptime, fail-overs, responsible upgrades, etc. Just "Docker" as-is does not solve problems or introduce problems. It's how you use it.
Lots of people see Docker as the "just buy a Mac" of infra. It doesn't make all your issues magically go away. Me, personally, I have a good understanding of what my OS is doing, and what software generally needs to run well. So for personal stuff where downtime for upgrades means that I, myself, can't use a service while it's upgrading, I don't see much benefit for Docker. I'm happy to solve problems if I run into them, also.
However, in high-uptime environments, I would probably set up a k8s environment with heavy use of Docker. I'd implement integration tests with new images and ensure that regressions aren't being introduced as things go out with a CI/CD pipeline. I'd leverage k8s to do A-B upgrades for zero downtime deploys, and depending on my needs, I might use an elastic stack.
So personally, my use of Docker would be for responsible shipping and deploys. Docker or not, I still have an underlying Linux OS to solve problems for; they're just housed inside a container. It could be argued that you could use a first-party upstream Docker image for less friction, but in my experience, I eventually want to tweak things, and I would rather roll my own images.
For SoC boards, resources are already at a premium, so I prefer to run on metal for most of my personal services. I understand that we have very large SoC boards that we can use now, but I still like to take a simpler, minimalist approach with little bloat. Plus, it's easier to keep track of things with systemd services and logs anyway, since it uniformly works the way it should.
Just my $0.02. I know plenty of folks would think differently, and I encourage that. Just do what gives you the most success in the end 👍
I don't use those two flags, but have several pis running docker with no issues. They've been running (almost) 24/7/365 going on maybe 2 years now with the same sd cards.