greatgib a day ago

At this point you have to wonder if there is a hidden agenda, because this push to shorten certificate lifetimes is excessive relative to how common the problem it is supposed to solve actually is.

I'm wondering if it is a push by big tech lobbyists to move everyone to "cloud managed solutions". Because, to be clear, this makes life harder and more expensive for on-prem solutions and for devices that might not be online often or freely connected to the internet. Also, whatever service you run anywhere in the world, you will be bound into frequent contact with a limited number of actors, even if it's only Let's Encrypt. And most of these actors, Let's Encrypt included, are US-based and bound by whatever US regulation may apply at any time.

  • greatgib 3 hours ago

    From another post, from DigiCert, it looks like my suspicion is confirmed:

        Prior to this proposal by Apple, Google promoted a 90-day maximum lifetime, but they voted in favor of Apple’s proposal almost immediately after the voting period began.
    
    https://news.ycombinator.com/item?id=43693900

    Note that this doesn't just affect certificate generation. It also affects things like having a pool of sub-certificates issued from a given certificate, which will all have to be regenerated and redeployed just as frequently, in cascade; and setups where a server, client, or specific device validates specific certificates. Their only viable option would be to broadly accept any certificate from one of the big top-level "trusted" CAs...

  • tptacek 20 hours ago

    I don't think the agenda is at all secret: it's a push to move the WebPKI to full automation, and to eliminate sites where administrators fetch certificate files and install them on servers. If you were an engineer and had to choose one over the other, you'd choose the former too: it's much more agile and it's less exposed to the large-scale distributed systems problem of revocation.

    • taraindara 15 hours ago

      But nothing stops companies from doing this today without forcing everyone else to make the change as well.

      You can refresh your cert every single day. The cert that gives me HTTPS on my home router's web interface doesn't need that level of scrutiny.

      • tptacek 13 hours ago

        The browsers aren't going to let you not automate, because automated certificate issuance is safer and better for the ecosystem. Your convenience and the ease with which you can provision the interface to your router, which only you use, is not going to be a factor in that decision.

  • xg15 a day ago

    Yeah, had that thought as well at some point. It definitely feels like some constraint is slowly being applied to the web, though I find the exact security properties somewhat nebulous. The one concrete requirement that seems to be pushed by this is that any host that expects to be accessed through a browser has to be connected to the internet at some point - and has to have an internet-registered domain name. (And at least the root of that name will be publicly visible in a CT log and will be processed and stored by who-knows-what.)

    Though weirdly, it doesn't even matter to which particular service the host connects or if the domain name actually resolves to anything.

    But it does feel like some creeping "platformization" is going on: The requirements make it reliably a pain to run anything that's not connected to the internet, such as LAN-only websites or local web interfaces of IoT devices.

    I wonder if the end goal is some sort of "app-storization" of the web, where some entity has the power to ban a site globally, even if registrar, server and users are all in a different country and even if the site only exists internally on some LAN. But then again, I'm unsure if browsers couldn't effectively do that already today - so if they could, why all the effort?

    Taking off the conspiracy hat, maybe the tighter renewal requirements could be a push for people to finally develop some protocol for securely updating certificates on IoT devices. That would at least disprove the conspiracy theory.

rini17 a day ago

Curious when it stops. Why not require new certificate every day?

  • rainsford a day ago

    I think you're joking, but at a certain point you're essentially getting live attestation from the CA, with the certificate duration only serving as sort of a caching function to enable faster responses by the server and to avoid overloading the CA. In that model, you might as well have much shorter duration certificates, with maybe the only limiting factor being the capacity of the CA.
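    That limiting factor can be made concrete with back-of-envelope arithmetic (a sketch; all numbers below are invented for illustration, and the ~2/3-of-lifetime renewal point mirrors common ACME client defaults rather than anything stated in this thread):

```python
def renewals_per_second(active_certs: int, lifetime_days: float,
                        renew_at_fraction: float = 2 / 3) -> float:
    """Steady-state issuance load on a CA.

    Assumes clients renew at a fixed fraction of the lifetime
    (many ACME clients renew around 2/3 of the way through), so
    each certificate is reissued every
    lifetime_days * renew_at_fraction days.
    """
    interval_seconds = lifetime_days * renew_at_fraction * 86_400
    return active_certs / interval_seconds

# With a made-up population of 500M active certificates:
#   90-day certs -> roughly 96 issuances/second
#    1-day certs -> roughly 8,700 issuances/second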

    • rini17 a day ago

      I wish I were joking. When I predicted that HTTPS will be mandatory, many thought it was a joke.

    • chgs a day ago

      CA goes down and everything dies.

  • tptacek 19 hours ago

    When you get to ~30-day duration certificates, everyone's automated, at which point you almost don't care what the duration is anymore.

  • Ekaros 13 hours ago

    Why not one for each new connection? Then the CA could host a service to verify reuse.

  • GauntletWizard a day ago

    There are already security professionals that are pushing for 5 minute or less certificates, with mandatory OCSP stapling at even shorter intervals.

junaru a day ago

Why did DNSSEC/DANE not kill CAs? Are browsers being paid by CAs to bundle their root certificates?

  • tptacek 20 hours ago

    Nobody is being paid by CAs. CAs and browser root programs are basically natural enemies.

    Three big problems faced browsers that tried to roll out DANE:

    1. DNSSEC adoption on high-traffic/"important" domains is stubbornly very low (this used to be harder to quantify, but there's the Tranco list now; you can just take the top N of it and loop `dig ds` over it to get a %).

    2. Middleboxes hassle DNSSEC queries, not everywhere but enough that the browsers need a fallback for browsers on network paths that won't transact DNSSEC, and once you have that fallback you have to account for an adversary who simply downgrades you to the fallback; this is the "you had 10,000 CA certificates, now you have 10,001" problem --- it's also the core of the drama behind the DANE "stapling" fiasco.

    3. Browsers were able to get the WebPKI transparency ecosystem deployed using market levers they had over CAs, but they don't have the same leverage over DNS TLD administrators, so there's no "DNS transparency" on the agenda, nor could one plausibly be deployed. Similarly, you can revoke certificates (including CA certs) in ways you can't revoke DNS domains.
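    The measurement in point 1 can be sketched like this (hedged: it assumes `dig` is on PATH, a pre-fetched Tranco domain list, and network access for the live lookups; none of those are provided here):

```python
import subprocess

def ds_answer_is_signed(short_output: str) -> bool:
    """A non-empty `dig +short ds` answer means the parent zone
    publishes a DS record, i.e. the domain is DNSSEC-signed."""
    return bool(short_output.strip())

def has_ds_record(domain: str) -> bool:
    # Shells out to `dig`; requires network access.
    out = subprocess.run(
        ["dig", "+short", "ds", domain],
        capture_output=True, text=True, timeout=10,
    )
    return ds_answer_is_signed(out.stdout)

def signed_percentage(domains) -> float:
    """Percentage of domains whose parent publishes a DS record."""
    return 100 * sum(has_ds_record(d) for d in domains) / len(domains)

# e.g. signed_percentage(top_n) over the top N of the Tranco list
```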

    It's kind of funny to look at short-duration certificates and ask "why didn't DANE save us from this", because a security engineer might look at this and think "thank God DANE failed and saved us from 30 more years of dangerous, error-prone manual key management".

  • thegagne a day ago

    Efficiency and ease of adoption would be my guess.

    The APNIC Ping podcast did an excellent two-part series on the problems of DNSSEC. It puts more responsibility on DNS systems that are already complex and doing a lot of work.

    The CA system with root trusts is just super efficient and easy, so it won.

    It’s not perfect, so I’m sure someday it will be replaced, but not by the current options.