Following the outage last night, we rebooted the Lemmy services, but we're still trying to pin down the root cause. The logs point to an out-of-memory issue, but that's not what we see in our monitoring console.
In the meantime, we will monitor the service more closely until we're confident the issue is resolved, and we'll improve our tooling to detect this kind of problem faster.
EDIT: This also happened during the night of July 2nd; still trying to find the root cause...
Good morning and happy Canada Day. Thanks for working tirelessly to get things running.
Seems like the API, apps, and other frontends were working, but the main web frontend wasn't? I wonder if it's similar to what happened to the unfortunate feddit.de.
Something got into a weird state, and restarting either the backend or the frontend alone didn't help. Taking the entire stack down and then bringing it back up resolved it.
It's weird, since it crashed at 1am, and at 3am we automatically do a gradual restart of all backends and frontends, so that restart should have fixed it too. All the containers reported healthy, but nginx wasn't reporting any available frontends.
I suspect some sort of weird Lemmy bug, but for now we'll just improve monitoring and try to debug it further if it happens again.
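Since the containers reported healthy while nginx had no available frontends, a monitor has to probe the public URL end to end rather than trust container health checks. A minimal sketch of that kind of probe (the stub server and URLs here are stand-ins for the real frontend, not our actual setup):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def frontend_ok(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the URL actually serves a 2xx response.

    Unlike a container health check, this catches the case where every
    container is "healthy" but nginx has no upstream to route to.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # connection refused, timeout, 4xx/5xx, etc.
        return False


# Demo against a throwaway local server standing in for the frontend.
class _StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass


server = HTTPServer(("127.0.0.1", 0), _StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

up = frontend_ok(f"http://127.0.0.1:{port}/")    # stub is serving
server.shutdown()
down = frontend_ok("http://127.0.0.1:9/")        # nothing listening here
```

In practice you'd run a check like this on a schedule from outside the host and alert when it fails, which would have flagged the 1am crash instead of waiting for user reports.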
I found a post with context on oldfeddit's main: someone writing in German says the site [note: specifically the frontend] is down and the admin is MIA. Feddit.org is run by a Viennese non-profit.
Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn't work well for people on different instances. Try fixing it like this: !main@feddit.org
As I suggested elsewhere, could you maybe set up a "status" community on another instance (e.g. sh.itjust.works, which is Canadian as well), so that people can go there for updates during potential outages?