Symantec Backs Its CA (symantec.com)
153 points by andygambles on March 24, 2017 | 104 comments



> This action was unexpected, and we believe the blog post was irresponsible.

Problems since Oct 2015, and yet the action was unexpected? See 1).

> We hope it was not calculated to create uncertainty and doubt within the Internet community about our SSL/TLS certificates.

Symantec took no ownership of the issue. Snarky underhanded remarks are not a professional way to address shortcomings in managing their product.

> For example, Google’s claim that we have mis-issued 30,000 SSL/TLS certificates is not true. In the event Google is referring to, 127 certificates – not 30,000 – were identified as mis-issued, and they resulted in no consumer harm.

Per Chrome's team, an initial set of reportedly 127 certificates has expanded to include at least 30,000 certificates issued over a period spanning several years; see 2).

Summary: no ownership and no action plan conveyed in Symantec's 421-word message.

1) https://security.googleblog.com/2015/10/sustaining-digital-c...

2) https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...


From your 1) link...

"23 test certificates had been issued without the domain owner’s knowledge covering five organizations, including Google"

Guess that explains part of why this particular CA incident has Google's full attention.


My take is this message was written by and for lawyers. As in, this is a coded message from Symantec to Google regarding the basis of damages upon which they will sue Google if Google doesn't backtrack.

The snarky comments were probably not meant as snarky, they just happen to be the basis upon which one can seek damages from a 3rd party for damaging your business or costing you customers.

I would guess that Symantec's lawyers and O-level execs are deep in discussions about whether to sue regardless of Google's follow-up actions or retraction.

Not saying a lawsuit would help them, but they are laying the groundwork for it here to keep their options open. And send a message to Google's legal team.

Will be very interesting to see where this goes. Really hope for everyone's sake it doesn't go to court because it will just end up being a tax on users in the end (both Google's and Symantec's).


How would there be any grounds for a lawsuit? The browser is free to implement whatever set of features it wants. Not trusting a specific CA is just a feature (or a bug), whichever way you look at it. A CA is just providing a service on the web. A service can't sue a browser for not supporting the service. Symantec is free to create its own browser that trusts its CA.


I could see a cause of action based on a libel theory or tortious interference with business affairs. Not sure it'd prevail, but there's possibly a prima facie case there.


The question is, was there a contract signed? A verbal contract? Is there an implied contract? Tortious interference?


Antitrust


Can Symantec really sue Google for no longer trusting them after they issued fraudulent Google certs? And even if they hadn't, and Google just didn't like Symantec and decided to no longer trust them, would Symantec have any real case if they sued? I'd think not; Google owes Symantec nothing.


IANAL, but it seems that one could make a passable argument for tortious interference[1]. Google isn't just affecting their B2B relationship with Symantec, they're using their share in the browser market to affect Symantec's relationship with Symantec's customers.

[1] https://en.m.wikipedia.org/wiki/Tortious_interference


That cuts both ways, Symantec is using their share in the certificate market to affect Google's relationship with their customers.


I'm supportive of Google in this fight, but I really don't think Google would have an argument to counter-sue. Tortious interference isn't just having an effect on the relationship, you also need to have an actual tort involved. In this case Symantec could argue that Google was exaggerating the negative PR, and Symantec would probably have an easier time proving damages (from customers leaving due to their certificates being phased out). I'm not sure what tort Google could claim in response that Symantec performed. Maybe issuing the unauthorized test certificates for Google's domains? (some sort of fraud?) But IM(NAL)O that's a tough sell.


I'd argue that by advertising browser compatibility[1], meeting the browser trust requirements is implicitly part of the business relationship, thus Google enforcing them does not give rise to tortious interference. (FWIW, I studied English law, but that was some years ago and I am largely unfamiliar with tortious interference)

[1] "브라우저 호환성 99.9%" http://www.crosscert.com/symantec/02_0_00.jsp


If so I can't wait to see what comes out during discovery. My gut tells me Symantec will not come out smelling like roses.


It doesn't even really have to do with Google certs per se; it has to do with certs in general, and the situation probably would not be different if the bad certs had nothing to do with Google.

There are rules for inclusion in Google's cert store, and those rules were broken IIRC.


Even if they sued and won, Google could pay any damages out of petty cash. You'd have to be extremely sure of yourself to try to sue Google.


I believe the 30,000 is from how many certificates 3rd parties validated for Symantec, without keeping adequate records or controls in place.


I think this is it. It needs to be worded: "There are 30,000 certificates whose validity no one knows for sure, and which thus need to be revalidated." The 127 merely proved that misissuance was quite possible, and did happen numerous times.

EDIT: I think that's really the crux of the issue. These 127 certs which Symantec claims are "harmless" are merely the ones which were stumbled across and obviously very "how is this even possible" wrong.

That's why the 30,000 is the "size of the risk". The big "Symantec" problem is that there's no good way to distinguish these 30,000 from the many more certificates issued by Symantec under different brands. For Google it's all-Symantec-or-nothing. So they're coming up with measures that apply to all-Symantec.


Any further detail from Ryan or anyone else involved here would be very helpful. (There are plenty of other organizations that bootstrap based on Google/Mozilla/Microsoft/Apple's root CA programs.)


I think the best summary I can link to is here: https://groups.google.com/d/msg/mozilla.dev.security.policy/...

Though it doesn't mention the 30000 certs or 127 certs, it does say:

(long quote from Ryan Sleevi:)

In the current misissuance, my understanding is that Symantec asserts that the totality of the misissuance was related to RAs. Symantec's initial response to the set of questions posed by Google [5] indicated that " At this time we do not have evidence that warrants suspension of privileges granted to any other RA besides CrossCert" in the same message that provided the CP/CPS for other RAs besides CrossCert, and itself a follow-up to Symantec's initial response to the Mozilla community, [6], which acknowledged for the potential of audit issues in the statement "We are reviewing E&Y’s audit work, including E&Y’s detailed approach to ascertaining how CrossCert met the required control objectives.". This appears to be similar to the previous event, in that the proposed remediation was first a termination of relationship with specific individuals. However, in Symantec's most recently reply, [1], it seems that again, on the basis of browser questions from a simple cursory examination that such a statement was not consistent with the data - that is, that the full set of issues were not identified by Symantec in their initial investigation, and only upon prompting by Browsers with a specific deadline did Symantec later recognize the scope of the issues. In recognizing the scope, it was clear that the issues did not simply relate to the use of a particular RA or auditor, but also to the practices of RAs with respect to asserting things were correct when they were not.

It appears that, similar to the Testing Tool's failure to ensure that certificates were adhering to the fulsome standards of authentication, Symantec's newly established compliance team was failing to perform even a cursory review of the CP, CPS, and audit statements presented - despite Symantec having found it necessary in that introspective process themselves in response to [3], as noted above.

Symantec's also stated that, in response to the past misissuance, it deployed a compliance assessment tool, which functionally serves a role similar to a Validation Specialist. However, such compliance assessment was designed in a way that it could be bypassed or overridden without following appropriate policies.


The short summary of what's going on here:

The major CAs outsource to partner companies called Registration Authorities (RAs) to perform the task of verifying that people requesting certs are who they say they are --- this is especially important for markets where the company running the CA has thin on-the-ground support. Such is the case with Symantec/Verisign and CrossCert, their partner RA in Korea.

The technical relationship between the RA and the CA probably varies a lot from firm to firm, but generally the RA has some ability to cause issuance of certificates through automated requests to the CA's infrastructure.

What Ryan and others discovered in repeated rounds of questioning of Symantec was that Symantec had been relying entirely on 3rd-party WebTrust audits (these are technical and process audits for CAs conducted by Big 5 accounting firms) without doing any of its own technical due diligence. But the WebTrust audits Symantec's RAs had been undergoing were delivered by auditors nobody has any faith in, including (as it turns out) Symantec.

Further, Symantec was required to have technical and process controls for specific kinds of issuance requests from their RAs. And it did. But it turned out those controls were designed so that the RAs could override them on their own recognizance. Which is basically the same as running process controls on the honor system --- not OK in this environment.


Didn't E&Y feature as auditors in the WoSign/StartCom incident as well? Perhaps that decision to only refuse to accept audits from the Hong Kong branch of E&Y wasn't such a great idea...


Yep, there are now 3 different E&Y subsidiaries that are blacklisted by various parties from carrying out audits.


_Some_ major CAs outsource like this. You need this sort of on-the-ground presence, particularly human employees who can speak the local language and understand the local culture, to validate certain subject details; it's not important for the domain validation that most of us care about most of the time. Knowing whether the subscriber is really Foo Corp of Shanghai requires local knowledge, but checking that foo-corp-shanghai.example is controlled by the subscriber needs, at the very most, a translated web page of instructions, which you can outsource.

It is likely Mozilla policy (or the BRs) will forbid letting the local RA do the domain validation. So, a future CrossCert could lie about whether their subscriber is really Foo Corp, but not about whether they control foo-corp.example

Oh, and it's not the Big Five any more; one of the Five collapsed in scandal because it happily signed off on Enron's obviously bogus accounts. So now we have a Big Four, until another one blows up. For those taking bets: the RA was audited by a local EY, whereas Symantec is audited by a KPMG.


> Symantec has publicly and strongly committed to Certificate Transparency (CT) logging for Symantec certificates and is one of the few CAs that hosts its own CT servers.

Lol.

This is like being court-ordered to do community support and then bragging about all the volunteering you do. Symantec was forced by Google to do CT. See https://security.googleblog.com/2015/10/sustaining-digital-c... , specifically:

> Therefore we are firstly going to require that as of June 1st, 2016, all certificates issued by Symantec itself will be required to support Certificate Transparency.

(By the end of this year, all CAs will be forced to do CT, but Symantec was forced into this last year, because of the stupid shit they keep doing)


(To be clear, they weren't forced to run their own CT log; they just had to have CT set up. However, setting up their own log in light of these restrictions is a business decision, not an indication of commitment to CT -- paying someone to run a CT log for them would probably be more expensive. By the end of this year we may see more/cheaper options for CAs who need to log their certs.)


Do CAs have to pay the log providers? From this article it sounds like Google is adding certs to their own log for free.

https://www.certificate-transparency.org/faq

Also

>Anyone can submit a certificate to a log, although most certificates will be submitted by certificate authorities and server operators.

https://www.certificate-transparency.org/how-ct-works


I was under the impression that there wasn't any "free to use" log out there, at least not for someone with the volume of a CA. I could be wrong / this might have changed since last year.


Google's logs are free to use, everybody can shove whatever they like into the Google logs if it chains to a relevant CA.

Most logs don't talk to random civilians about this, but at the ct-policy meeting the one log which did speak about their policies said that they either cut a $$$ deal OR they accept a mutual logging arrangement, on the rationale that if you eat the cost of logging their stuff and they eat the cost of logging your stuff, that works out fine for everybody.


The TL;DR of this whole thing

* Root CA practice allows delegating validation to 3rd parties

* However, the Root CA must accept all responsibility for any mis-validation the 3rd parties do. No throwing them under the bus

* Symantec delegates validation to 4 different companies to serve local markets

* Said companies fail to adequately validate domain ownership

* Symantec attempts to throw them under the bus

Further compounding the issue is that there is no way to separate the certificates that had more rigorous validation from the ones validated by these 4 companies.


This is probably intended as damage control, but I think this response will do Symantec more harm than good. No problem description, no explanation of consequences for customers, no acknowledgment of the failure, no action plan, no schedule, no options, nothing. As a reader, if I'm already aware of the problem, this response provides zero substance to counter Google. If I'm unaware, then I'm welcome to google what it's all about and land on the blink-dev post, which is emotionless and factual. The whole issue is about trust, but so far Symantec does not seem to be acting responsibly, which does not help to re-establish trust.

I also wonder what the exact consequences will be (Symantec post fails to explain this). I mean, which big sites will be hit? When? For how many users?


A much more interesting document is "Symantec Second Response to Mis-Issuance Questions – February 12, 2017":

https://bug1334377.bmoattachments.org/attachment.cgi?id=8836...


> Symantec will vigorously defend the safe and productive use of the Internet, including minimizing any potential disruption caused by the proposal in Google’s blog post.

What they will vigorously defend is disruption to their reputation and their bottom line. What benefit does a business get from Symantec that they do not get from Let's Encrypt? EV?


Well, you can't do wildcard certs, which is fine for most businesses, but for some it's a non-starter. That doesn't make LE bad, of course, it just means it's not the appropriate tool for all use cases.


But certs don't cost anything. You can easily automate the renewal of as many subdomain certs as needed.


If you are a big company and need, say, 100k wildcard certs (say, one per client), I am pretty sure you can justify spending $50 per wildcard cert vs. a fleet of servers, engineers, and monitors to make Let's Encrypt work.

Let's Encrypt is great, but let's not pretend it's the right solution for all websites.


LE should charge a small fee for wildcard certs and extended validity periods (up to some reasonable limit). They'd likely cover their operating budget very quickly.

What's their stance on wildcard certs with SANs? Is it simply too risky / ripe for abuse?


I suspect they will never allow longer validity certs because they want renewal to be automated.


Absolutely no knowledge here, but my /guess/ has been that they don't do more because it would either require more effort or would have soured the cross-signing deal under which they're validated by a CA that's already in older operating systems/devices.


I would pay a LOT if LE could issue wildcard certificates to me with a 3-year validity, and send me reminder emails when I'm nearing expiry.


1. Let's Encrypt has fairly stringent rate limits: https://letsencrypt.org/docs/rate-limits/ You can only get up to 2000 subdomains per week (20 certs * 100 names per cert). If you want to provision a large site, this might be way too small. For instance, my college computer club ran a web hosting service that offered everyone a $username.webhost.example.edu hostname, and there are well more than 2000 active users. So it would be a multi-week process to issue all the certs.

GitHub, for instance, has three million users. Provisioning a cert for each github.io domain would take 29 years. Yes, you can contact LE and ask for a limit increase, but three million certs would still spam the public cert transparency logs, LE's internal records, etc. significantly. That is a cost. LE rounds down the cost of an individual cert, even 20 certs per week, to zero, but the cost of three million certs is very nonzero.

2. Wildcards work in mass-hosting situations where individual certs don't. Again, for my college computer club's server, putting a few thousand <VirtualHost> blocks in Apache's configuration makes it take forever to start. I tried this. It was a bad plan. The better plan is mass-hosting tools like mod_vhost_ldap or even RewriteRules based on the hostname, but they don't let you interact with SNI, because as far as Apache (and thus mod_ssl) sees, it's just a single <VirtualHost>.

3. In order to issue a cert, you need to publicly respond for the name on the cert. If you're issuing public-CA certs for private-network websites (which is generally a best practice over running an enterprise CA), you need to briefly configure a DNS entry for the private name to respond to either the DNS challenge or the HTTP challenge. You can't just configure a DNS entry internally. So your internal deployment process now involves updating public DNS. For anyone who's not running publicly-facing infrastructure (i.e., most companies, whose public Internet presence is just a website), there may not even be an automated means of updating public DNS.

A wildcard cert, meanwhile, will just email the owner of the registered domain via e.g. WHOIS contact info, which requires no external configuration, and is a one-time setup process. Everything else is internal at that point.

I love Let's Encrypt, but it's ill-suited to use cases where wildcards are popular.
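For what it's worth, the rate-limit arithmetic in point 1 checks out; a quick sketch (the limits and the three-million-user figure are the ones quoted above, not independently verified):

```python
# Sanity check of the Let's Encrypt rate-limit arithmetic above
# (limits as of 2017: 20 certs per week, 100 names per cert).
certs_per_week = 20
names_per_cert = 100
names_per_week = certs_per_week * names_per_cert
print(names_per_week)  # 2000 subdomains per week

github_users = 3000000  # one github.io name per user, per the comment
weeks = github_users / names_per_week
print(round(weeks / 52))  # roughly 29 years
```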


Why didn't you use the classic webserver.example.edu/~$username? Your current implementation would be a nightmare to admin in comparison.


Isn't the same origin policy per domain? I suppose it's a security benefit that students can't xss each other.


Yes, exactly. At least until suborigin header [0] becomes widely supported.

[0]: https://w3c.github.io/webappsec-suborigins/


We originally did that, but we wanted different origins for each user, same reason GitHub uses username.github.io. Also, we wanted to support a few non-web services (we support svn://username.webhost.example.edu).

Administratively, we use mod_vhost_ldap, and it hasn't been complicated. If anything it's less complicated, because a number of larger websites (departments, courses, etc.) get somename.example.edu CNAMEs from the IT department, and those are virtual hosts too. Back when we were using only webhost.example.edu/~username URLs, the separate virtual hosts were a special case. Now they're just an entry in LDAP with a different DocumentRoot, that's all.


Not always possible in all scenarios. For a setup like github.io and GitHub Pages, it'd be difficult for LE to handle so many domains. And in any scenario where you serve a subdomain without knowing ahead of time what it'll be (say, a Tor relay service), it's just not feasible.


LetsEncrypt's software is open-source. A commercial CA could use it, remove the limits and add EV if they wanted, then lower their costs to $50 per domain: much lower and much more trustworthy than Symantec...


What's the turnaround time for LE? Could you generate an LE certificate on the fly as needed when the request comes in (just block until it's present to be served)?

Not that I would really suggest this; it's a DoS waiting to happen.


There are limits to the number of subdomain certs you can request, so I imagine this wouldn't work.

https://letsencrypt.org/docs/rate-limits/

> Combined with the above limit, that means you can issue certificates containing up to 2,000 unique subdomains per week.


Dokku can do it in two commands (https://github.com/dokku/dokku-letsencrypt#usage):

    dokku config:set --no-restart myapp DOKKU_LETSENCRYPT_EMAIL=mail@example.com
    dokku letsencrypt app


The problem is the volume of certs needed, not the simplicity of LE for the smaller usecase.


If your vendor supports those certificates. I am amazed by the amount of software from government contracts or the education field that specifies certain certificate vendors. I have two required vendors (government vendor software specified by a grant) that each require one of three certificate vendors. I am not pleased, but there's not a lot I can do.


There is some time, scripting, etc, cost to integrating lots of certs into a single instance of Apache, nginx, etc.

Wildcards certs have value in some settings.


A wider set of devices that accept it.

If you have a service that serves a fleet of STBs, TVs, or other embedded devices, they have a fixed set of root CAs, and getting the OEM to update those (possibly through a ton of different resellers) is a nightmare.

While the embedded devices won't be impacted by this announcement (if the deprecation goes ahead), there might be some additional work for anyone that services web browsers AND embedded devices from the same endpoints.


This is a pretty big deal; Symantec controls many root CAs that are pretty old and widely distributed. If you're supporting embedded devices (or phones) with pretty old root stores, you don't have that many choices, and it can be hard to know what they really are, because documentation of root stores is fairly sparse.

Of course, there's not a good way for clients to indicate which root CAs they accept, and so they don't. If you get into the situation where newer devices don't support Symantec certs, and older devices don't support non-Symantec certs, it's time to figure out how to guess if you're talking to a newer or an older device and give appropriate certs. :(


Support.

And as mockable as that might sound, it comes in handy for businesses with special requirements.


Why do I feel like support is one of the reasons they issued so many certs they should not have?


Such as the issuance of SHA1 signed certificates long after they've been deprecated? /snark

(Sorry, I couldn't resist.)

EDIT: Apologies, didn't notice the other poster beat me to it.


LE is not scalable. You can't do wildcards, and you don't want to authenticate every subdomain to LE by spawning an HTTP server. You don't even want most of your stuff to face the internet to begin with.


I've never hit the rate limit on LE. Seems to me you need to be running a public site that gives each user a subdomain, or a similar setup, before you run into such issues.

You don't need an HTTP server to do authentication; it's also possible to use DNS or TLS SNI. There's certainly no need to spawn a server per domain, even for HTTP.

If your implementation is complex enough to require many certs, these problems are already solvable. Either fork over the money for a wildcard cert, or account for the extra work in your planning.

The setup we use is to have our own authoritative nameserver for a 'dyn' subdomain, and do DNS auth. We auth a subdomain with LE in less than 10 seconds.
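For anyone curious what that DNS auth actually publishes: the TXT record value for an ACME dns-01 challenge is just an unpadded base64url SHA-256 of the key authorization. A minimal sketch (the token and thumbprint below are made-up placeholders; a real client gets the token from the ACME server and derives the thumbprint from its own account key):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    # Key authorization per the ACME spec: token, a dot, and the
    # account key's JWK thumbprint.
    key_authorization = token + "." + account_key_thumbprint
    # TXT value at _acme-challenge.<domain>:
    # base64url(SHA-256(key authorization)), without '=' padding.
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder inputs, just to show the shape of the output
# (always a 43-character base64url string).
print(dns01_txt_value("example-token", "example-thumbprint"))
```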


With Symantec joining the ranks of StartSSL and WoSign, they can hardly claim to be "singled out".

PS: It's funny that Symantec's first google hit for "Encryption Everywhere" prompts for my browser's geolocation unsolicited. If your product is trust, maybe you should think a little bit more about how you present your product.


Perhaps StartSSL was a full-scale rehearsal of the process, making sure that everything was in order in preparation for Symantec's dismantling.

Chrome 57 was only rolled out this week. I know that because a customer reported to me that one of my websites was distrusted – I had forgotten a StartSSL certificate over there, and sure enough I hadn't noticed because I was still on Chrome 56. Sounds like a timely coincidence: if I were Google, I would try exactly that – distrust a small CA before attacking a bigger monster.


By blog post do they mean the blink-dev mailing list thread [1] that announced Google's action plan?

[1]: https://groups.google.com/a/chromium.org/d/msg/blink-dev/eUA...


Straight onto the offensive, as opposed to addressing the quite serious issues and criticisms which face them.

Their response speaks volumes.


It seems 2017 is the year that blustery ignorance of facts became fashionable. Thankfully they can deny deny deny and stamp their foot however much they want and it won't matter.


Seems like escalating this fight with Google is pretty much the last thing they should be doing. They need to be proving they can be trusted, not throwing a tantrum.


> While all major CAs have experienced SSL/TLS certificate mis-issuance events, Google has singled out the Symantec Certificate Authority in its proposal even though the mis-issuance event identified in Google’s blog post involved several CAs.

"We are not the only one doing this, why us Google, why us?"

What a shitty excuse. Laughable. Big F to Symantec and wish you bankrupt :)


I wish Google were more serious. Proof of deceptive falsification of certificates by a CA should lead to immediate distrust in browsers, period.

The job is on the CA companies to run audits commensurate with the extent of possible damage, just like in nuclear plants: you just can't make a mistake, and certainly not 127 of them.


So chrome can no longer browse ~30% of the secured web. People instead install a different browser that isn't so "serious", as you put it. Oh and the message that sends to website owners? "Just don't use HTTPS, your SSL certificate could be invalidated with no prior warning for reasons that have nothing to do with you".

Easy, right? Just like a nuclear plant.


I don't know; what's best for security? The message is: don't use a dodgy provider. Yes, it's a pain for customers, but don't forget we're mostly talking about level 2/3 certificates (EV, wildcards), so we're talking about banks and major companies here. They both should and do have enough resources to monitor the trustworthiness of their providers. In the current case, Symantec's falsifications have been known for long enough.


I do think the "drop it like it's hot" approach could work, but it would require browser vendor coordination, so that Symantec HTTPS stops working in all major browsers on the same day.

And honestly, banks should have the expertise and resources to change certs in a day or two, and if they don't that's on them.


Wow, well with that response now I know for sure never to recommend any Symantec products


I've never heard anyone in my life recommend Symantec products.


Archived copy, which doesn't require JS:

https://archive.fo/4DAKD


Heh, I thought it was funny that I had to whitelist a google domain (ajax.googleapis.com) for the site to load :D


We didn't take the responsibility of being a CA seriously, and now Google is being mean to us. Waah.


Symantec is now a company that operates both as a CA and, having acquired Blue Coat, also as a vendor of TLS intercepting middle boxes that they sell to despots.

With this history of mis-issued certs in mind, Symantec's CA business should be kicked to the ground, left bleeding and never be trusted again.


As I read it, Symantec is being tone-deaf here about the problem. Throwing around numbers like 127 versus 30,000, they seem to be overlooking the fact that trust flows downward from a small handful of root certs, or certs close to the root, and that if those root certs and the processes around them are not trustworthy, then all the subordinate certs are tainted.

They aren't helping themselves any with this kind of post, imho.


Short SYMC


Exactly what I'm thinking. Who would give Symantec money? I can't imagine the type of business that would pay them for security.


The irony is everything I've been hearing was that Symantec was on their way back to recovery as a company after somehow selling off the woefully under-performing Backup Exec line in order to focus on the security market. Which really is SEP and their SSL lines.


Sadly, security is often reduced to compliance. Nobody ever got fired for buying IBM ....


Site stuck at "Loading Your Community Experience".


Just migrate to DNSSEC/DANE already. CAs have no incentive to mis-issue certs; their whole business model is selling trust. It's obvious the TLDs (.com, etc.) are the ones who should validate/issue certs.

Regarding TLDs coming under the control of governments: that's solved by independent mirror nameservers run by app devs (Firefox/Chrome/etc.) and NGOs (EFF, etc.).


There has never in the history of the Internet been a time when DNSSEC was less acceptable than it is today. People who don't know much about DNSSEC (reasonable, since nobody really deploys it, and it's a dying protocol) really only need to know one thing about it: it's a tree-structured PKI, just like the CA system, except that the top of the tree is run de jure by world governments.

If DNSSEC had been deployed 10 years ago, Muammar Ghaddafi would have been BIT.LY's CA.

https://sockpuppet.org/blog/2015/01/15/against-dnssec/


Except that DNS TLDs are already run by world governments.

Maybe you wanted to say that tree-structured DNS with TLDs owned by governments is bad, but this is a problem of DNS, not DNSSEC.

DANE reduces the parties able to issue a rogue certificate from the TLD owner + all CAs to the TLD owner only.


DNS TLDs are run by world governments. But the CA PKI is not. Restricting the ability to issue rogue certificates to world governments is not a win: in fact, it's a regression.


> it's a tree-structured PKI, just like the CA system, except that the top of the tree is run de jure by world governments.

Which could be solved by having the bit.ly key validated by the .ly key + a Firefox key + a LetsEncrypt key.

DNSSEC may have other architectural/implementational faults, but this property is not one of them. I believe this way of certification is the way forward for an internet-scale web of trust.
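As a toy model of that multi-anchor idea (the anchor names and the all-must-agree rule are purely hypothetical, not an existing protocol):

```python
def multi_anchor_trusted(domain_key, endorsements, required_anchors):
    # A domain's key counts as trusted only if every required anchor
    # (e.g. the TLD, a browser vendor, a CA) vouches for that exact key,
    # so no single anchor can substitute a rogue key on its own.
    return all(endorsements.get(a) == domain_key for a in required_anchors)

# Hypothetical endorsement table for bit.ly's key "K1".
endorsements = {".ly": "K1", "Firefox": "K1", "LetsEncrypt": "K1"}
print(multi_anchor_trusted("K1", endorsements,
                           {".ly", "Firefox", "LetsEncrypt"}))  # True

# If the .ly registry alone swaps in a rogue key, trust fails.
tampered = dict(endorsements, **{".ly": "K-evil"})
print(multi_anchor_trusted("K1", tampered, {".ly", "Firefox"}))  # False
```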


That doesn't sound like much of a migration.


Yes, except it won't be the website server returning the chain of trust, but the nameserver.


You do realise that the argument that that site is presenting is false.

Firstly: "With TLS properly configured, DNSSEC adds nothing."

DNSSEC presents you with a valid, secure chain to know that the meta-data (encoded as TLSA) about the end-point you intend to communicate with is valid.

i.e. Grab the TLSA record for a hostname, validate the hostname (and TLSA record) via DNSSEC and then if the TLS certificate matches the details in the TLSA record you have confirmation that things are valid.
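The matching step can be sketched in a few lines. This is a toy model of the RFC 6698 comparison, not a full implementation: only selector 0 (the full certificate) is handled, the usage field is ignored (it governs chain semantics, not matching), and the certificate bytes are placeholders.

```python
import hashlib

def tlsa_matches(cert_der: bytes, usage: int, selector: int,
                 matching_type: int, assoc_data: bytes) -> bool:
    """Compare a presented DER certificate against TLSA association data.

    Assumes the TLSA record was already fetched and DNSSEC-validated.
    `usage` is accepted but unused in this sketch.
    """
    if selector != 0:          # selector 1 (SubjectPublicKeyInfo) omitted here
        raise NotImplementedError("SPKI selector not shown in this sketch")
    if matching_type == 0:     # exact match of the selected data
        return cert_der == assoc_data
    if matching_type == 1:     # SHA-256 of the selected data
        return hashlib.sha256(cert_der).digest() == assoc_data
    if matching_type == 2:     # SHA-512 of the selected data
        return hashlib.sha512(cert_der).digest() == assoc_data
    return False

cert = b"\x30\x82..."  # placeholder DER bytes, for illustration only
record = hashlib.sha256(cert).digest()
print(tlsa_matches(cert, 3, 0, 1, record))  # True
```

If the comparison succeeds and the DNSSEC chain validated, you have the confirmation described above.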

Secondly: "Securing DNS lookups isn’t a high-priority".

Then why is there a whole IETF WG dedicated to addressing it? https://datatracker.ietf.org/wg/dprive/about/

Thirdly: "The real threat to CAs is 'the global adversary'".

i.e. organisations like the NSA, GCHQ collating everything, everywhere.

Any reasonable person could spend an afternoon - not being paid - providing a counterpoint to each item raised. None of them are (currently) relevant, and the only one that is/was is actually being addressed in the IETF working group linked above.


There's nothing in this comment that rebuts anything in "that site". Essentially, all you've managed to come up with is "it's wrong, because DNSSEC does what it does".

I agree: DNSSEC does do what it does do. The problem is that what it does do isn't meaningful and doesn't add real security. At the same time, DANE/TLSA does create a system in which we replace certificate authorities with an organization run by the US Government.


Currently the DNS roots are already effectively in charge. If an entity, government or otherwise, controls .LY, they can trivially get a trusted X.509 cert for .LY. DANE just makes it much harder for other attackers to do the same.

Having a CT-like feature for DNSSEC would be nice, though.


That's clearly not true. Try this thought experiment: you're GCHQ, the owner of .IO. You want a certificate issued for GOOGLE.IO. You control the TLD and thus the GOOGLE.IO zone. What CA will issue you GOOGLE.IO on a DV basis, and what will happen if they do?

Obviously, there's a reason nobody is issuing Google certificates, despite the insecure DNS that appears to be the only obstacle to that happening. Flesh out why, and try to be specific. The point isn't really about Google, but about the fact that the Web PKI is more complicated than the CA BRs suggest, and has safeguards at multiple levels.

I am not arguing that the CA PKI is a good system. It is not. It's a wretched system. It's only my contention that DNSSEC is worse.


Well, for your exact GOOGLE.IO example, it's very likely the answer is nobody will issue but I suspect you misapprehend why.

Let's Encrypt absolutely would be happy to issue you a certificate for my web site if you can use DNS to convince them you control it.

Why not for GOOGLE.IO? Well Google is very famous, and CAs are obliged to have a "High risk" policy that identifies some names as "High risk" and requires human oversight for those names. Google is probably on every high risk list ever formulated. For Let's Encrypt since they have zero humans in their validation process, the "High risk" list is a blacklist, you just get an error "Policy forbids issuing" instead of your certificate.

But "High risk" doesn't scale. It protects Google, Microsoft, HSBC, Amazon, but it's not going to protect me or most other people.

CAA protects more broadly, but it doesn't protect you against somebody with control over DNS, because it needs DNS records to work, so they could just remove or alter the records. Also, CAA is only just recently becoming mandatory; it can't save you if it's ignored by the CA.
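That weakness is visible in even a simplified model of the CAA check (loosely after RFC 8659, with illustrative domain names): when no "issue" records exist, any CA may issue, so an attacker who controls the zone can authorize themselves just by deleting the records.

```python
# Toy CAA check. `records` is a list of (tag, value) pairs as they might
# come back from a DNS query; the parsing below is deliberately minimal.
def caa_permits(records, ca_domain):
    issuers = [value for tag, value in records if tag == "issue"]
    if not issuers:
        return True                 # no CAA policy published: anyone may issue
    # the issuer domain is the part before any ";parameter" suffix
    return any(v.split(";")[0].strip() == ca_domain for v in issuers)

policy = [("issue", "letsencrypt.org")]
print(caa_permits(policy, "letsencrypt.org"))  # True
print(caa_permits(policy, "evil-ca.example"))  # False
print(caa_permits([], "evil-ca.example"))      # True: records stripped away
```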


tptacek is also arguing against a straw-man. DANE with TLSA usages 0, 1, and 2 just supplies additional constraints on top of existing PKIX validation. It's only TLSA usage 3 that has this issue at all. It would be entirely reasonable for browsers to get started with support for usages 0-2 but not 3.

I suspect the real issue is that the DNS protocol makes it extremely awkward for client machines to do full DNSSEC validation -- the normal approach is to see the AD bit and to blindly trust the upstream resolver, and this is obviously a complete non-starter for DANE.
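For reference, the four usages being argued over can be summarized roughly like this (paraphrasing RFC 6698; the wording is mine, not the spec's):

```python
# The four TLSA certificate usages, paraphrased. Usages 0 and 1 add
# constraints on top of ordinary PKIX validation; 2 and 3 anchor trust
# in the DNS instead.
TLSA_USAGES = {
    0: ("PKIX-TA", "PKIX validation required; chain must contain this CA"),
    1: ("PKIX-EE", "PKIX validation required; end-entity cert must match"),
    2: ("DANE-TA", "trust anchor comes from DNS instead of a PKIX root"),
    3: ("DANE-EE", "end-entity matched straight from DNS, no PKIX chain"),
}

def requires_pkix(usage: int) -> bool:
    return usage in (0, 1)

print([u for u in TLSA_USAGES if requires_pkix(u)])  # [0, 1]
```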


First off, TLSA Certificate Usage 2 also effectively turns the DNS into a CA, so you're just factually off here.

Second, why is this a compelling argument? The first 2 TLSA usages are essentially what we already have: HPKP, bootstrapped from the DNS. HPKP's competition, TACK, worked the same way and didn't require DNSSEC at all. Further: all the problems people have with pinning (certificate suicide) remain in place with TLSA 0 and 1.

So basically this argument is: "that's a straw man because there's a way to deploy DANE where it doesn't accomplish anything and that's probably the only way DANE will ever be deployed so none of this matters". I am willing to stipulate that's the case.


> First off, TLSA Certificate Usage 2 also effectively turns the DNS into a CA, so you're just factually off here.

Whoops. Usages 0 and 1, then.

> Second, why is this a compelling argument? The first 2 TLSA usages are essentially what we already have: HPKP, bootstrapped from the DNS.

TLSA usages 0 and 1 bootstrap off something that still works even if you've never visited the site before (or if you're in private browsing mode, etc). HPKP can't do that unless you preload.

> Further: all the problems people have with pinning (certificate suicide) remain in place with TLSA 0 and 1.

That's just not true. You can set a reasonable TTL and rescue yourself in a timely manner from a lost certificate without significantly impacting security. With HPKP, if you set a very short expiration, you lose almost all of your security.
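Back-of-envelope version of the difference (both figures are assumptions for illustration, not recommendations): after losing a key, clients stay locked out until their cached copy of the pin expires, so the cache lifetime bounds the recovery window.

```python
# Hypothetical cache lifetimes: a short DNS TTL for a TLSA record vs. a
# commonly cited HPKP max-age. The ratio is the relative lockout window.
tlsa_ttl = 3600                # 1 hour
hpkp_max_age = 60 * 86400      # 60 days

print(hpkp_max_age // tlsa_ttl)  # 1440: HPKP lockout is 1440x longer
```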


I suppose the OP is assuming the government has a trusted root that they can create certs from at will. I would be curious to know, were a government able to DNS spoof or cert spoof to MITM traffic, which would they choose? Which would be more detectable? I assume Google is basically off limits without local machine access due to pinning and other checks built in to Chrome.


Everyone assumes that each of the Five Eyes governments can, in a pinch, get any certificate they need issued.

What people overlook is that on the modern Internet, browsers themselves form an ad hoc certificate surveillance network. If you get a certificate issued for a site that pins, or create a CT discrepancy, there is a very good chance Google and Mozilla are going to discover the CA responsible. And, as you can see, even if you operate the largest, most important CA on the Internet, Google will, if that happens, fuck your shit up.

If you want one (of many) dispositive arguments against DNSSEC, just consider that Google and Apple and Mozilla are not in fact in a position to fuck shit up for .COM. You can't revoke or distrust a TLD: the use of those TLDs is encoded into the brains of hundreds of millions of users. Not so a CA!


Pity DNSSEC has its own implementation issues.


Something, something, WMDs.


I'm not aware of the details of this particular case, but now that Google owns its own CA, it being in charge of unilaterally banning other CAs from Chrome is a massive conflict of interest.


Google doesn't issue certs commercially, and I can assure you that the team that works on Chrome CA issues isn't concerned about Google's CA.


Considering I don't trust Google, and Google employees (most likely) went to great lengths to downvote my concern, forgive me if I'm not very assured by your statement.



