
Workaround for the performance issue with posting in large communities

We're still working on a solution for the slow posting in large communities.

We have seen that a post does get submitted right away, but the page keeps 'spinning'.

So right after you click 'Post' or 'Reply', you can refresh the page and the post should be there.

(But to be safe, you could copy the contents of your post first, so you can paste it again if anything goes wrong.)

62 comments
  • Just hopping into the chain to say that I appreciate you and all of your hard work! This place—Lemmy in general, but specifically this instance—has been so welcoming and uplifting. Thank you!

  • Have you tried enabling the slow query logs @ruud@lemmy.world? I went through that exercise yesterday to try to find the root cause but my instance doesn’t have enough load to reproduce the conditions, and my day job prevents me from devoting much time to writing a load test to simulate the load.

    I did see several queries taking longer than 500ms (up to 2000ms) but they did not appear related to saving posts or comments.
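
    For reference, slow query logging in PostgreSQL (which Lemmy uses) can be enabled with something like the following; the 500 ms threshold is just an example:

    ```sql
    -- Log every statement that runs longer than 500 ms.
    ALTER SYSTEM SET log_min_duration_statement = 500;
    -- Reload the configuration without restarting the server.
    SELECT pg_reload_conf();
    ```

    The same setting can also go directly into postgresql.conf.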

  • I assume there is something that is O(N), which would explain why the wait time scales with community size (number of posts and comments).
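
    Purely as an illustration of that kind of write pattern (hypothetical table and column names, not Lemmy's actual schema): recounting everything on each insert is O(N) in the number of comments, while keeping a running counter is O(1).

    ```sql
    -- O(N): recount every comment in the community on each new comment.
    UPDATE community_stats
       SET comment_count = (SELECT count(*) FROM comments WHERE community_id = 42)
     WHERE community_id = 42;

    -- O(1): just bump the stored counter.
    UPDATE community_stats
       SET comment_count = comment_count + 1
     WHERE community_id = 42;
    ```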

    • Oh, Big-O notation? I never thought I’d see someone else mention it out in the wild!

      :high-five:

      • You are going to meet a lot of OG redditors in the next few weeks. Old Reddit had Big O in every post, even posts with cute animals.

  • Thanks for your and the other Lemmy devs' work on this. These growing pains are a good thing, as frustrating as they can be for users and maintainers alike. Lemmy will get bigger, and this optimization treadmill is really just starting.

  • I’ve done this twice in the last 20 minutes and the content is not there. This workaround was working earlier today though.

  • One of the large applications I worked on had the same issue. To solve it, we ended up creating multiple smaller instances and hosting a set of related APIs on each server.

    For example, read operations like listing posts and comments could go to one server, while write operations could be clustered on another.

    Later, whichever server is getting overloaded can be split up again. In our case, 20% of the APIs used around three quarters of the server resources, so we split that 20% across 4 large servers and kept the remaining 80% of the APIs on 3 small servers.

    This worked for us because the DBs were maintained on separate servers.

    I wonder if a quasi-micro-services approach would solve the issue here (rough sketch below).

    Edit 1: If done properly, this approach can be cost effective; in some cases it might cost 10 to 20 percent more in server costs, but it leads to a visible improvement in performance.
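
    A rough sketch of what that read/write split could look like, assuming an nginx reverse proxy in front of the Lemmy backend (hostnames, port and pool sizes are made up):

    ```nginx
    # Hypothetical split: reads go to a pool of backend replicas,
    # anything that mutates state goes to a dedicated write pool.
    upstream lemmy_read  { server backend-read-1:8536; server backend-read-2:8536; }
    upstream lemmy_write { server backend-write-1:8536; }

    server {
        listen 80;
        server_name lemmy.example;

        location /api/ {
            set $pool lemmy_read;
            # Route anything other than GET/HEAD to the write pool.
            if ($request_method !~ ^(GET|HEAD)$) {
                set $pool lemmy_write;
            }
            proxy_pass http://$pool;
        }
    }
    ```

    Whether this helps depends on where the bottleneck actually is; as noted above, it worked in that case because the DBs lived on separate servers.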

  • My comments sometimes don't seem to go through, and whenever I upvote posts on Jerboa I get an error that just says 'timeout'. It doesn't seem to happen on other instances.

  • Yeah, I started just using my clipboard a lot and hitting the X or back in Jerboa, and comments go through at about a 90% success rate. For posts I let it do its thing, just in case. But I've been copying stuff to save as a draft.
