I’m happy to see this being noticed more and more. Google wants to destroy the open web, so there’s a lot at stake.

Google basically says “Trust us”. What a joke.

  • lifluf@lemm.ee

    Explain like I’m a developer: why is WEI bad? Ad blocking can already be detected.

    • arthurpizza@lemmy.world

      Only browsers blessed by a single company can view the entire web. Not exactly a feature of the free and open web.

    • SeriousBug@infosec.pub

      What people are rightfully scared of is that:

      • Big websites will only accept attestations from big companies like Google, Apple, and Microsoft
      • Google, Apple, and Microsoft will refuse to attest your browser if you have an adblocker installed, if you are using a browser or operating system they don’t approve of, or if you’ve made modifications to your browser or your operating system, etc.

      While adblocking can be detected, you can also block anti-adblock scripts; it’s an arms race. Depending on how deep an attestation goes, it might be extremely difficult to fight. Attestations might also be used to block far more than adblockers: using Firefox, rooting/jailbreaking your phone, or installing an alternative OS might make your device ineligible for attestation and thus locked out of much of the internet.
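
      To make the fear concrete, here’s a rough sketch (every name in it is invented, not the actual WEI API) of what a site-side gate could look like once attestation exists:

      ```typescript
      // Hypothetical sketch only: none of these names come from the WEI draft.
      // It illustrates the worry above: the site trusts a short allowlist of big
      // attesters, and any client without a "trusted" token from one of them is refused.

      interface AttestationToken {
        attester: string;                  // who vouched for the client
        verdict: "trusted" | "untrusted";  // the attester's judgement of your setup
        signature: string;                 // would be checked against the attester's key
      }

      const ACCEPTED_ATTESTERS = new Set([
        "attest.bigco-a.example",
        "attest.bigco-b.example",
        "attest.bigco-c.example",
      ]);

      function shouldServeContent(token: AttestationToken | null): boolean {
        if (token === null) return false;                          // no token: treated like a bot
        if (!ACCEPTED_ATTESTERS.has(token.attester)) return false; // small attesters not on the list
        if (token.verdict !== "trusted") return false;             // e.g. adblocker or modified OS flagged
        return token.signature.length > 0;                         // stand-in for real signature verification
      }

      // A user whose browser can only get a token from an unlisted attester is locked out:
      console.log(shouldServeContent({
        attester: "attest.small-vendor.example",
        verdict: "trusted",
        signature: "...",
      })); // -> false
      ```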

    • shrugal@lemm.ee

      This is about much, much more than just ad blocking. The mechanism is so generic that it can be used to lock users out for whatever reason. If the “attester” doesn’t provide the requested proof, then you’re just shit outta luck. We should not hand such power to anyone, let alone big for-profit companies.

    • complacent_jerboa@lemmy.world

      Their proposal is that, when you visit a website that uses WEI, the site doesn’t show you its content right away. Instead, it first asks a third party whether you’re “legit”, as opposed to, say, a bot.

      The problem is, it would be really hard to tell whether you’re “legit”, because people get very, very clever with their bots (not to mention things like content farms, which aren’t even bots; they’re real humans doing the same job a bot would). So, in order to do their job at all, these kinds of third parties would have to find out a whole bunch of stuff about you.

      Now, websites already try to do that, but for now the arms race is actually on our side: the end user has more or less full control over what code a website can run in their browser (which is how extensions like uBlock Origin and Privacy Badger work).

      But if the end user can just block data collection, the third party is back to square one. How can they possibly verify (“attest”) that you aren’t sus if you’re blocking every attempt to collect data about you, your device, operating system, browser, etc.?

      The answer is: they can’t. To do a proper attestation, they need a whole bunch of information about you; if they can’t collect it, they logically have no way of knowing whether you’re a bot. And when the third party reports that back to the website you’re trying to visit, the site will assume you’re a bot and block you. Obviously.
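
      Here’s roughly what that flow could look like from the page’s side. The API name and header are placeholders for the idea, not the proposal’s actual interface:

      ```typescript
      // Rough sketch of the flow described above, with placeholder names.

      // Hypothetical browser API the page would call:
      declare global {
        interface Navigator {
          getEnvironmentIntegrity?: (contentBinding: string) => Promise<string>;
        }
      }

      async function loadProtectedContent(): Promise<void> {
        let token: string | null = null;
        try {
          // The browser asks a third-party attester to vouch for this environment.
          // If your setup can't be attested (data collection blocked, "wrong" OS or
          // browser, rooted device...), this fails or returns nothing.
          token = (await navigator.getEnvironmentIntegrity?.("visiting site.example")) ?? null;
        } catch {
          token = null;
        }

        // The site only serves real content when an attester has vouched for you,
        // so "couldn't be attested" and "is a bot" end up treated the same way.
        const response = await fetch("https://site.example/content", {
          headers: token ? { "X-Environment-Integrity": token } : {},
        });

        if (response.status === 403) {
          console.log("No acceptable attestation: treated as a bot and blocked.");
        }
      }

      void loadProtectedContent();

      export {};
      ```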

      That’s pretty much my understanding of the situation. Actually implementing this proposal would require unprecedentedly invasive data collection, and people who try to block it might just end up classified as “bots” and frozen out of major parts of the internet. And since people can currently use whatever hardware and software they want, it would be in these big companies’ interest to restrict consumer choice to only the hardware and software they deem acceptable. It’s a basic conflict of interest, especially because the one pushing this on everyone is Google itself.

      Now, Google obviously denies all that. They assure us it won’t be used for invasive data collection, that people will be able to opt out without losing access to websites, that there won’t be any discrimination against anyone’s personal choice of browser/OS/device/etc.

      But it’s bullshit. They’re lying. It’s that shrimple.

      • lobster_teapot@lemmy.blahaj.zone

        The proposal explicitly argues against “more fingerprinting”, which is maybe the one area where they are honest. So I do think it’s not about more data collection, at least not directly. The token is generated locally on the user’s machine, and it’s supposedly the only thing that needs to be shared. So the website’s operator does potentially get some info (in effect: that you pass the test used to verify your client), but I don’t think that’s the major point.
        What you’re describing is the status quo today. Websites try to run invasive scripts to get as much info about you as they can, and if you try to derail that, they decide you aren’t human and throw a captcha at you.
        Right now, though, you can absolutely configure your browser to lie at every step about who you are.
        I think the proposal has much less to do with direct data collection (there are better ways to do that) than with control over the content-delivery chain.
        If Google gets its way, this would effectively shift control over how you access the web from you to them. That enables all the stuff people have been talking about in the comments: the end of edge-case browsers and operating systems, the prevention of ad blocking (and with it, indeed, an expansion of data collection), the consolidation of Chrome’s dominant position, etc.
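
        As a rough illustration (made-up names, not the spec), all the site can really do with that token is check the attester’s signature on it:

        ```typescript
        // Sketch of the "token, not fingerprint" point, with made-up names: what the
        // site receives is a signed verdict, and all it can do is verify the attester's
        // signature. It learns "this attester says pass", not your device details;
        // but whoever holds the attester's signing key decides who passes at all.

        import { verify, type KeyObject } from "node:crypto";

        interface SignedVerdict {
          payload: string;   // e.g. '{"verdict":"passed"}', generated on the user's machine
          signature: Buffer; // signature over the payload
        }

        function isVerdictGenuine(v: SignedVerdict, attesterPublicKey: KeyObject): boolean {
          // Ed25519-style check: true only if the token really came from the attester.
          return verify(null, Buffer.from(v.payload), attesterPublicKey, v.signature);
        }

        // Usage (conceptually): isVerdictGenuine(tokenFromClient, knownAttesterKey)
        ```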

        • hemko@lemmy.world

          Wouldn’t this verification also let Google track you better across the web, since they’d be verifying your signature every time you visit a website?

    • lobster_teapot@lemmy.blahaj.zone

      As others have pointed out, it goes way beyond ad blocking. It’s a complete reversal of the trust model, and is basically DRM for your OS:
      Right now, websites rightfully assume that clients can’t be trusted. Any security measure happens on the server side, with the rationale that the user controls the client and you, as a dev, control the server. If your security is worth two cents, you secure the server side. This change proposes to extend vendor power by defining a set of rules about what vendors deem acceptable as a client app, and enforcing it through a token system. It gives way too much power to the vendor, who gets to dictate what you can do on your own machine.
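
      A contrived sketch of that current model, with made-up names: the server re-checks everything itself and ignores whatever the client claims about its own trustworthiness:

      ```typescript
      // Made-up example of the current model: the server validates the request on
      // its own and never relies on the client's claims about itself.

      interface TransferRequest {
        accountId: string;
        amount: number;
        clientClaimsToBeLegit: boolean; // meaningless to the server, so it's never read
      }

      function handleTransfer(
        req: TransferRequest,
        balanceOf: (accountId: string) => number,
      ): "accepted" | "rejected" {
        // Server-side checks are the only ones that count, because the client is
        // assumed to be modifiable: adblockers, user scripts, curl, bots, anything.
        if (!Number.isFinite(req.amount) || req.amount <= 0) return "rejected";
        if (balanceOf(req.accountId) < req.amount) return "rejected";
        return "accepted";
      }

      // WEI inverts this: instead of validating the request, the server asks a third
      // party to certify the client software, which a well-secured server never needed.
      console.log(handleTransfer(
        { accountId: "acct-1", amount: 50, clientClaimsToBeLegit: true },
        () => 100,
      )); // -> "accepted", purely on server-side checks
      ```
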
      We actually have a live experience of how that could go down with SafetyNet on Android. Instead of tackling the biggest security issue there (OEMs that refuse to support their software for more than one or two years after release, which, quite frankly, should be universally considered unacceptable), Google decided that OEMs should be granted far more trust than the user. So modifying your own OS in any way, even if it’s rife with security flaws to begin with and you’re just trying to fix that, breaks SafetyNet. And if you break SafetyNet, “critical apps” like banking apps stop working altogether.
      The worst part is that there are ways to circumvent SafetyNet breakage, because if DRM has taught us anything, it’s that when you control the client and know your way around it, with enough work you can do pretty much anything you want with it. So bad actors are certainly not kept at bay; you just unjustly annoy people with legitimate use cases, or who are simply experimenting with their own hardware, because in the end you consider your users at best dumb security flaws and at worst huge cash machines, often both at the same time.
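
      As a toy example (not the real SafetyNet/Play Integrity API), a backend gate like this is all it takes to cut modified devices off from “critical apps”:

      ```typescript
      // Illustrative only, not the real SafetyNet/Play Integrity API: a backend gate
      // on a device-integrity verdict. It locks out people who merely modified their
      // own device, while a bad actor who can spoof the verdict walks right in.

      type IntegrityVerdict = "PASS" | "FAIL" | "UNEVALUATED";

      interface BankingApiRequest {
        userId: string;
        deviceVerdict: IntegrityVerdict; // as reported for the client device
      }

      function allowBankingAccess(req: BankingApiRequest): boolean {
        // A rooted phone running a patched, up-to-date custom ROM and an unpatched,
        // vendor-abandoned phone are both judged by this single flag.
        return req.deviceVerdict === "PASS";
      }

      console.log(allowBankingAccess({ userId: "alice", deviceVerdict: "FAIL" })); // false: locked out
      console.log(allowBankingAccess({ userId: "bob", deviceVerdict: "PASS" }));   // true, even if spoofed
      ```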

        • lobster_teapot@lemmy.blahaj.zone

          Yeah, and moreover it gives server admins the illusion that they CAN control what happens client side, which is bonkers.
          Honestly, the most infuriating thing in this whole controversy is that the proposed approach fixes almost none of the issues the authors say it should fix.
          What it does do, however, is break the principles of the open web in major ways.

      • vvvvv@lemmy.world

        We actually have a live experience of how that could go down

        Another example: the latest iteration of Google’s reCAPTCHA. Released with promises to end manually typed text captchas, the main thing it turned out to check is whether you are logged into your Google account. If so, you get through automatically or, at worst, have to tick a checkbox. If you are not logged in, enjoy selecting fire hydrants and crosswalks.

    • HurlingDurling@lemm.ee

      Basically, the website just won’t render if the browser doesn’t have a proper credential, or if ads are blocked. Hell, they could also block Linux clients from accessing these same websites.