Lemmy Lead Developer

I also develop Ibis, a federated wiki.

  • 73 Posts
  • 163 Comments
Joined 5 years ago
Cake day: January 17th, 2020

  • “Help Design Lemmy” sounds good, thanks for the suggestion. I looked around for some info about A/B testing, but it seems relatively complicated to set up. Do you have any tools to suggest for that? And I can see what you mean about the text sounding unsure. What do you think about this one?

    We provide Lemmy as a free and open source platform without any tracking or advertising, and work every day to improve it. Yet we also need money to pay our bills and provide for our families. Only 2% of Lemmy users donate, so we need your donation to keep this model working. Thank you for helping to create a new form of social media.

  • Nutomic@lemmy.ml to Announcements@lemmy.ml · Lemmy AMA March 2025 · 1 month ago
    The stack is great, I wouldn't want to change anything. Postgres is very mature and performant, with a high focus on correctness. It can sometimes be difficult to optimize queries, but there are wizards like @dullbananas@lemmy.ca who know how to do that. Anyway, there is no better alternative that I know of. Rust is also great; just like Postgres it is very performant and has a focus on correctness. Unlike in most programming languages, it is almost impossible to get runtime crashes, which is very valuable for a webservice.
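
    To illustrate what I mean about runtime crashes: fallible operations in Rust return a Result, and the compiler forces the caller to handle the error path explicitly. A trivial, made-up example:

    ```rust
    use std::num::ParseIntError;

    // Fallible parsing returns a Result instead of throwing; the caller has to
    // decide what happens on the error path before the program even compiles.
    fn parse_port(input: &str) -> Result<u16, ParseIntError> {
        input.trim().parse::<u16>()
    }

    fn main() {
        match parse_port("8536") {
            Ok(port) => println!("listening on port {port}"),
            Err(e) => eprintln!("invalid port: {e}"),
        }
    }
    ```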

    The high performance means that less hardware is required to host a given number of users, compared to something like NodeJS or PHP. For example, when kbin.social was popular, I remember it had to run on multiple beefy servers. Meanwhile lemmy.ml is still running on a single dedicated server, with many more active users. Or take Mastodon, which has to handle incoming federation activities in background tasks, making the code more complicated, while Lemmy can process them directly in the HTTP handler.
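
    To show what I mean by processing directly in the HTTP handler, here is a very rough sketch (using axum and a made-up Activity type, not our actual code): the activity is verified and applied before the response is returned, so there is no separate worker queue to manage.

    ```rust
    use axum::{extract::State, http::StatusCode, routing::post, Json, Router};
    use serde::Deserialize;

    // Made-up, simplified shape of an incoming activity;
    // real ActivityPub payloads are much richer.
    #[derive(Deserialize)]
    struct Activity {
        id: String,
        kind: String,
    }

    #[derive(Clone)]
    struct AppState;

    // Everything happens inline: verify the request, apply the change, then respond.
    // If this returns an error status, the sending instance simply retries later.
    async fn receive_activity(
        State(_state): State<AppState>,
        Json(activity): Json<Activity>,
    ) -> StatusCode {
        // ... verify the HTTP signature, look up the actor, write to the database ...
        println!("handled {} activity {}", activity.kind, activity.id);
        StatusCode::OK
    }

    fn inbox_router() -> Router {
        Router::new()
            .route("/inbox", post(receive_activity))
            .with_state(AppState)
    }
    ```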

    Nevertheless, scaling for more users always has its surprises. I remember very early in development, Lemmy wasn't able to handle more than a dozen requests per second. Turns out we only used a single database connection instead of a connection pool, so each db query had to wait for the previous one to finish, which of course is very slow. It seems obvious in retrospect, but you never notice this problem until there are a dozen or so users active at the same time.
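
    The fix was simply to create a pool at startup and have every request check out its own connection, so queries from concurrent requests can actually run in parallel. A minimal sketch using diesel's r2d2 integration (the names here are illustrative, not the actual Lemmy code):

    ```rust
    use diesel::pg::PgConnection;
    use diesel::r2d2::{ConnectionManager, Pool};

    type DbPool = Pool<ConnectionManager<PgConnection>>;

    fn build_pool(database_url: &str) -> DbPool {
        let manager = ConnectionManager::<PgConnection>::new(database_url);
        Pool::builder()
            // Several connections let queries from concurrent requests run in
            // parallel instead of queueing behind each other on one connection.
            .max_size(16)
            .build(manager)
            .expect("failed to build db pool")
    }

    fn handle_request(pool: &DbPool) {
        // Each request checks out its own connection and returns it when dropped.
        let mut conn = pool.get().expect("failed to get db connection");
        // ... run diesel queries with `conn` here ...
        let _ = &mut conn;
    }
    ```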

    With the Reddit migration two years ago, a lot of performance problems came up, as active users on Lemmy suddenly grew by around 70 times. You can see some of that in the 0.18.x release announcements. One part of the solution was to add missing database indexes. Another was to remove websocket support, which was keeping a connection open for each user. That works fine with 100 users, but completely breaks down with 1000 or more.
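
    As an example of the kind of fix involved, adding an index for a sort-heavy listing query looks roughly like this (the table and column names are made up for illustration, not our real schema):

    ```rust
    use diesel::pg::PgConnection;
    use diesel::prelude::*;
    use diesel::sql_query;

    // Hypothetical migration-style helper: an index matching the columns a hot
    // listing query filters and sorts on can turn a sequential scan into an index scan.
    fn add_comment_listing_index(conn: &mut PgConnection) -> QueryResult<usize> {
        sql_query(
            "CREATE INDEX IF NOT EXISTS idx_comment_post_published \
             ON comment (post_id, published DESC)",
        )
        .execute(conn)
    }
    ```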

    All in all, there is nothing I would really do differently. It would have been good to know about these scaling problems earlier, but that's impossible. In fact, for my project Ibis (a federated wiki) I'm using the exact same architecture as Lemmy.