• 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • When most sites refer to passkeys, they’re typically talking about the software-backed kind that are stored in password managers or browsers. There are still device-bound passkeys, though. Also, since they’re just FIDO/WebAuthn credentials under the hood, you can still store them on hardware-backed authenticators if you really want (rough sketch of that at the end of this comment).

    While you’re right that device-bound and non-exportable would be best from a security standpoint, the tech needs sufficient adoption by sites to be usable at all, and sufficient adoption requires users to have lower-friction, lower-cost options, like browser- and password-manager-based ones.

    Looking at it through the lens of replacing passwords, rather than building the highest-security system possible, helps explain why they’re no longer limited to being device-bound.
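    A minimal sketch of what “hardware-backed if you really want” can look like, assuming a browser context and placeholder RP/user values (not any particular site’s code): a relying party can ask WebAuthn for a roaming authenticator such as a security key instead of a synced platform passkey.

    ```ts
    // Hedged sketch: placeholder RP/user values, browser-only API.
    async function registerHardwareBackedPasskey(): Promise<Credential | null> {
      const challenge = crypto.getRandomValues(new Uint8Array(32)); // normally generated server-side
      const userId = crypto.getRandomValues(new Uint8Array(16));    // normally assigned server-side

      return navigator.credentials.create({
        publicKey: {
          challenge,
          rp: { name: "Example RP", id: "example.com" },
          user: { id: userId, name: "alice@example.com", displayName: "Alice" },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
          authenticatorSelection: {
            authenticatorAttachment: "cross-platform", // prefer a roaming key (e.g. a security key)
            residentKey: "required",                   // discoverable credential, i.e. a passkey
            userVerification: "required",
          },
        },
      });
    }
    ```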


  • Sadly I’ve run into the same type of problem with a newer TLD as well. My solution was to get a domain in the older TLD space (e.g. .com, .net, .org). I doubt this will be the last site you run into that doesn’t support a newer TLD, and since you’re unlikely to convince someone at every one of those outdated sites to fix the issue, you’ll eventually need a backup domain for something.
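    For what it’s worth, my assumption about the usual cause: the site validates domains or email addresses with an outdated pattern that caps the TLD at a few characters, so anything longer gets rejected even though it’s perfectly valid. A toy illustration with hypothetical patterns, not any specific site’s code:

    ```ts
    // Hypothetical validation patterns; both are naive, the first just fails on newer TLDs.
    const outdatedPattern = /^[a-z0-9.-]+\.[a-z]{2,4}$/i; // caps the TLD at 4 letters
    const saferPattern = /^[a-z0-9.-]+\.[a-z]{2,}$/i;     // length-agnostic (still simplistic)

    console.log(outdatedPattern.test("example.technology")); // false: TLD longer than 4 chars
    console.log(saferPattern.test("example.technology"));    // true
    ```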



  • Spotlight7573@lemmy.world to Privacy@lemmy.ml · *Permanently Deleted* · edited 6 months ago

    I think it was targeting the client ISP side more than the VPN provider side. So something like having your ISP monitor your connection (voluntarily, or forced to by a warrant or law) and report whether your connection activity matches that of someone accessing a certain site your local government might not like, for example. In that scenario they would be able to narrow it down to at least an individual customer account at an ISP, and ISPs usually know who you are or where to find you in order to provide service. I may be misunderstanding it though.

    Edit: On second reading, it looks like they might just be able to buy that info directly from monitoring companies and get much of what they need to do correlation at various points along a VPN-protected connection’s route (toy sketch of the general idea below). The Mullvad post has links to Vice articles describing the data being purchased by governments.
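    To make “correlation” concrete, here’s a toy sketch of my own (not from the Mullvad post): if you can see connection timestamps at the user’s ISP and request timestamps near the destination, you can flag pairs that line up within a small window. Real traffic-correlation work also uses sizes, directions, and statistical models rather than a fixed window like this.

    ```ts
    // Toy timing correlation: purely illustrative, parameters are made up.
    type FlowEvent = { id: string; timestampMs: number };

    function correlate(
      ispEvents: FlowEvent[],   // connections observed at the client's ISP
      destEvents: FlowEvent[],  // requests observed near the monitored site
      windowMs = 500,           // how closely the timestamps must line up
    ): Array<[string, string]> {
      const candidates: Array<[string, string]> = [];
      for (const isp of ispEvents) {
        for (const dest of destEvents) {
          if (Math.abs(isp.timestampMs - dest.timestampMs) <= windowMs) {
            candidates.push([isp.id, dest.id]); // possible "same user behind the VPN"
          }
        }
      }
      return candidates;
    }
    ```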


  • Spotlight7573@lemmy.world to Privacy@lemmy.ml · *Permanently Deleted* · edited 6 months ago

    One example:

    When someone visits site X, it loads resources A, B, C, etc. in a specific order and with specific sizes. With enough distinguishable resources loaded like that, an observer can tell that you’re loading that site, even if it’s loaded inside a VPN connection. Think about when you load Lemmy.world: it loads the main page, then specific images and style sheets that may have recognizable sizes and are generally loaded in a particular order as they’re encountered in the main page, in scripts, and in things included by scripts.

    With enough data, instead of writing static rules saying x of size n was loaded, y of size m was loaded, etc., the observations can be fed to an AI model trained on what connections to specific sites typically look like. They could even generate their own data for sites, in both normal traffic and its VPN-encrypted form, and correlate the two to better train the model on what it might look like when a site is accessed over a VPN. Overall, AI lets them simplify and automate the identification process when given enough samples. (A toy sketch of the idea is below.)

    Mullvad is working on enabling their VPN apps to: 1. pad the data to a uniform size so that the different resources are less identifiable, and 2. send random data in the background so that there is more noise to filter out when matching patterns. I’m not sure about 3, to be honest.
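    A toy sketch of the fingerprinting idea in its simplest non-AI form (my own illustration, not any real tool): represent a page load as the sequence of encrypted record sizes seen on the wire and compare it against previously collected fingerprints. Real attacks train a classifier on many samples instead of using a hand-written distance, and padding everything to a uniform size (Mullvad’s item 1) is exactly what makes these sequences less distinguishable.

    ```ts
    // Illustrative only: sites, sizes, and the distance metric are made up.
    type Fingerprint = { site: string; sizes: number[] };

    // Sum of absolute differences between two size sequences, padding the shorter with zeros.
    function distance(a: number[], b: number[]): number {
      const len = Math.max(a.length, b.length);
      let total = 0;
      for (let i = 0; i < len; i++) {
        total += Math.abs((a[i] ?? 0) - (b[i] ?? 0));
      }
      return total;
    }

    // Guess which known site an observed (still encrypted) size sequence most resembles.
    function guessSite(observedSizes: number[], known: Fingerprint[]): string {
      let bestSite = "unknown";
      let bestScore = Infinity;
      for (const fp of known) {
        const score = distance(observedSizes, fp.sizes);
        if (score < bestScore) {
          bestScore = score;
          bestSite = fp.site;
        }
      }
      return bestSite;
    }

    // Example: two made-up fingerprints and one observation.
    const known: Fingerprint[] = [
      { site: "lemmy.world", sizes: [1500, 4200, 880, 12000] },
      { site: "example.org", sizes: [900, 900, 900] },
    ];
    console.log(guessSite([1490, 4100, 900, 11800], known)); // "lemmy.world"
    ```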





  • They haven’t released the text publicly, but they’re voting on it in less than a week. That’s also one of the many objections that Mozilla et al. have to this whole thing: it’s basically being done in secret, in a way that won’t give the public any time to react or object.

    Historically, browser vendors have only distrusted certificate authorities when they had concrete reasons not to trust them, not for arbitrary reasons.

    One of the examples of them preventing a CA from being trusted is Kazakhstan’s, which was specifically set up to enable them to intercept users’ traffic: https://blog.mozilla.org/netpolicy/2020/12/18/kazakhstan-root-2020/

    Even if all of the EU states turn out to be completely trustworthy, forcing browser vendors to trust the EU CAs would give more political cover for other states to force browser vendors to trust their CAs too, ones that definitely should not be trusted.

    I think there wouldn’t be nearly the same level of objection if each country’s CA were limited to its own ccTLD, rather than being able to issue for any domain on the internet (see the sketch below for what I mean by scoping).
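    Purely hypothetical illustration of the scoping I mean (X.509 name constraints on the CA certificate would be the standard way to enforce something like this; nothing below reflects the actual eIDAS text or any browser’s implementation, and the example values are made up):

    ```ts
    // Hypothetical policy check: accept a government-mandated root only for
    // hostnames under its own ccTLD.
    type MandatedRoot = { country: string; allowedSuffix: string };

    function isInScope(root: MandatedRoot, hostname: string): boolean {
      return hostname.toLowerCase().endsWith(root.allowedSuffix);
    }

    const frRoot: MandatedRoot = { country: "FR", allowedSuffix: ".fr" };
    console.log(isInScope(frRoot, "impots.gouv.fr")); // true: within the ccTLD
    console.log(isInScope(frRoot, "example.com"));    // false: out of scope
    ```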


  • How is giving any EU state the ability to be a certificate authority in your browser, able to issue a certificate for any site, without needing to follow the rules the browser vendors have for what makes an authority trustworthy, and with no option to disable them or add additional checks to their validity, “protecting their citizens from (American) corporate abuse”?

    From the Mozilla post:

    Any EU member state has the ability to designate cryptographic keys for distribution in web browsers and browsers are forbidden from revoking trust in these keys without government permission.

    […]

    There is no independent check or balance on the decisions made by member states with respect to the keys they authorize and the use they put them to.

    […]

    The text goes on to ban browsers from applying security checks to these EU keys and certificates except those pre-approved by the EU’s IT standards body - ETSI.



  • How it works: Chrome only displays the lookalike phishing protection screens for sites whose domains are similar to ones you visit, and a server can detect that the warning fired because the site doesn’t load (the warning appears first instead). A rough sketch of the detection idea is below the quote.

    Summary from the conclusion:

    Lookalike Warnings are arguably a great safety feature that protects users from common threats on the web. It’s hard to balance effectiveness and good user experience, making Site Engagement a vital source of information. However, since disabling Site Engagement or Lookalike Warnings is impossible, we believe it’s important to discuss these features’ privacy implications. For some people, the risk of exposing their browsing history to a targeted attack might be far worse than being tricked by lookalike phishing websites. Especially given that site engagement is also copied into incognito sessions.
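    My reconstruction of the detection side, heavily simplified (hypothetical domain and endpoint, not the researchers’ code): the attacker controls the lookalike domain and simply records whether the browser’s request ever arrives. If a probe never shows up, the warning likely blocked the load, which leaks that the victim engages with the real site.

    ```ts
    // Hypothetical probe collector on the attacker's lookalike domain.
    import * as http from "node:http";

    const seen = new Set<string>(); // probe ids whose requests actually arrived

    http.createServer((req, res) => {
      // e.g. the victim is sent to http://lookalike.example/probe?id=abc123
      const id = new URL(req.url ?? "/", "http://lookalike.example").searchParams.get("id");
      if (id !== null) seen.add(id); // request arrived, so no lookalike warning was shown
      res.end("ok");
    }).listen(8080);

    // Elsewhere, after a timeout, any probe id missing from `seen` suggests the
    // warning interstitial blocked the load for that victim.
    ```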