You can probably find evidence of this in the system logs. On most modern systems, this will show you the higher-priority log entries from the last boot:
journalctl -p 4 -b
If it was killed by the OOM killer, you should see something like this:
MESSAGE=Killed process 3029 (Web Content) total-vm:10206696kB, anon-rss:6584572kB, file-rss:0kB, shmem-rss:8732kB
If so, you should definitely think about adding memory to the system, or reducing the number of containers / their memory footprint.
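To get a feel for how big the killed process actually was, the anon-rss figure in that log line is its resident memory in kB. A quick one-liner (using the example line from above) converts it to GiB:

```shell
# Parse the anon-rss value out of an OOM-killer log line and print it in GiB.
line='Killed process 3029 (Web Content) total-vm:10206696kB, anon-rss:6584572kB, file-rss:0kB, shmem-rss:8732kB'
echo "$line" | awk -F'anon-rss:' '{split($2, a, "kB"); printf "anon-rss: %.1f GiB\n", a[1] / 1048576}'
# prints: anon-rss: 6.3 GiB
```

So this particular process was holding roughly 6.3 GiB resident when it was killed, which tells you how much headroom you'd need to add.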
The question is not so much whether you have enough physical RAM, but whether your Docker management tool has set resource limits for the containers. The OOM killer will stop a process regardless of how much free memory the host has if the container exceeds its resource constraints.
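For reference, in the compose v2 file format a hard memory cap on a service looks roughly like this (service and image names here are placeholders; in the v3/swarm format the equivalent lives under deploy.resources.limits instead):

```yaml
version: "2.4"
services:
  myapp:                # placeholder service name
    image: myimage:tag  # placeholder image
    mem_limit: 512m     # hard cap; exceeding it triggers an OOM kill inside the container
    memswap_limit: 512m # memory + swap; setting it equal to mem_limit disables swap for the container
```

If a limit like this is set too low for the workload, the container will be OOM-killed even when the host has plenty of free RAM.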
There's really not enough information to go on here. I'm not familiar with unraid, but can you add restart: always to the compose file for these containers? Is there a compose file at all?
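If there is one, the restart policy is just a key on the service, e.g. (service name is a placeholder):

```yaml
services:
  myapp:
    restart: always   # restart the container whenever it exits, including after an OOM kill
```

Depending on what you want, unless-stopped or on-failure may be a better fit, since restart: always will also bring the container back after you stop it manually once the daemon restarts.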
If you're sure it's not a memory issue (and it would be a bit odd for the OOM killer to take out a bunch of containers like this), it could be an I/O issue. A hard drive dying, maybe, but I think unraid should let you know about that?
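As a quick sanity check on the I/O theory, you could scan this boot's kernel log for common disk-error signatures — a sketch, with the patterns being a best-effort guess rather than an exhaustive list:

```shell
# Count kernel log lines from this boot that look like disk errors.
# "2>/dev/null" and "|| true" keep it quiet on systems without a systemd journal.
errors=$(journalctl -k -b 2>/dev/null | grep -icE 'i/o error|blk_update_request|ata bus error' || true)
echo "disk error lines this boot: ${errors:-0}"
```

A nonzero count would be a reason to look at the drive's SMART data (e.g. with smartctl from smartmontools) before chasing anything else.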