
Unobstructed HTTPS

There’s an interesting discussion on Slashdot about SSL certificates. It brings up two valid points:

  1. Invalid certificates, while providing a secure channel between client and server, are extremely annoying to use in Firefox 3 for many people because of the multi-step process. Previously it was just a warning dialog.
  2. There are no free SSL certificates that are really “usable” (i.e., that don’t throw up warnings in many browsers). CAcert has likely gotten the most inclusion, but it’s barely anywhere.

Certificates not signed by a trusted certificate authority (CA) trigger a warning because a certificate authority is supposed to verify that a certificate belongs to the person whose name is on it. This concept was busted a while back when CAs started doing “domain validation” to offer lower prices. To “remedy” this, they created EV SSL. EV SSL requires more background checking, but at a higher cost. This means there are three tiers of SSL:

  1. Untrusted/Self-Signed – Free – The user is strongly discouraged from visiting a site with one of these. Indicates only that the channel is technically secure.
  2. Signed By CA – Variable Pricing – The user is told this is secure.
  3. EV SSL – Expensive – The user is told these sites are super awesomely amazing and can cure cancer.

Essentially EV SSL is nothing more than a scheme to charge more. EV SSL is supposed to do what a signed certificate should have been doing all along. By 2012 I’d bet there will be SEV SSL (Super Extended Validation SSL). Maybe that would require DNA and fingerprints to prove identity.

The Problem

It’s 2008 (actually more than halfway through it), and I still can’t use a secure https connection without either throwing up an error to users (who are always confused by it) or paying a fee. It seems to me it should be free to use https, without any barrier, for a technical level of security.

Why is “trust” bound so tightly to encryption? Why can’t a medium be encrypted without being trusted? The technology shouldn’t be tied to the business side of things the way it is.

Trust should be bound to encryption, but encryption should not be bound to trust. Trust is the “needy” individual in this relationship. Encryption is strong and confident. At least it should be…

A modest proposal

I propose that browsers should allow for self signed certificates to be used without any dialog, interstitial or other obstruction provided they are properly formed and not expired. The user interface should indicate that the channel is encrypted and communication is unlikely to be intercepted between the user and the server. It should note if there is any change (just like SSH notifies the user if the signature is changed between sessions). Other than that it should be transparent.
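The SSH-style behavior proposed above (remember a self-signed cert on first use, stay silent while it matches, warn loudly if it changes) is often called trust-on-first-use. A minimal sketch of that decision logic, with illustrative names like `pin_store` that are assumptions, not any real browser API:

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def tofu_check(host: str, der_cert: bytes, pin_store: dict) -> str:
    """Trust-on-first-use check for a self-signed cert.

    Returns one of:
      'first-use' -> remember the cert, show no obstruction
      'ok'        -> same cert as last visit, proceed silently
      'changed'   -> cert differs from the remembered one, warn loudly
    """
    fp = fingerprint(der_cert)
    seen = pin_store.get(host)
    if seen is None:
        pin_store[host] = fp
        return "first-use"
    return "ok" if seen == fp else "changed"
```

The 'changed' case is where the SSH comparison applies: the UI would interrupt the user the same way SSH refuses to connect when a host key changes between sessions.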

SSL certificates and EV SSL certificates should indicate in the user interface that the site being browsed is not only encrypted, but trusted by a third party the browser trusts. These are suitable for ecommerce, banking, etc.

This would allow for things like intranets and other places where encryption is desired, paying for a CA to verify identity is overkill, and “domain verification” is just pointless.

Trust should be bound to encryption. Encryption shouldn’t be bound to trust. Encryption shouldn’t require verification. Encryption should be self-serve.

I’d be curious to know what others thought of the issue.

31 replies on “Unobstructed HTTPS”

With all due respect, Slashdot is probably the last place I’d go to for information on SSL certificates; they basically run the same story every year: “We need free SSL certs, why don’t Mozilla and other browser vendors recognize CAcert certs?” Three points:

1. The reason why Mozilla doesn’t recognize CAcert certs is because CAcert hasn’t yet gotten its act together and put together a solid operation with some sort of independent audit sufficient to meet the requirements of Mozilla policy. (We’re not talking WebTrust here either; the Mozilla policy doesn’t require WebTrust but allows for other less expensive alternatives.)

2. Mozilla *does* recognize Startcom certs. Startcom offers free SSL certificates that work perfectly well in Firefox 2 and later versions on all supported platforms. Startcom certs aren’t recognized in IE or Safari, but the chances of this happening in the future are IMO much higher than with CAcert.

3. I am sympathetic to your position on accepting self-signed certs, and have argued for it myself at times. However even if Firefox treated self-signed certs this way it still wouldn’t help the situation on other browsers. Better to get a free cert from Startcom and be done with it.

@Frank Hecker: I’m not really arguing to support (or not support) CAcert. They are just an example as far as I’m concerned (a popular one).

My point is that there is a need for a browser industry effort to take care of #3. Encryption should not require trust through third-party validation. Trust should require encryption. Distinguishing between the two should have been done years ago.

You shouldn’t be required to become “trusted” in order to use unobstructed https. You should be able to do so completely on your own.

I think you’re confused about what “trust” means in the context of SSL. DV certs provide authentication — you know you’re connecting to the real site, even if you’re using a sketchy wireless network you found at the airport. That doesn’t mean that the CA trusts the site, just that if *you* trust the site and your computer, you can trust what you’re seeing.

You seem to be proposing “SSH model” authentication, where users see a light warning the first time they visit a self-signed site and a strong warning if the certificate changes. I don’t think this would work for the Web, because:

* The “light warning” would have to be explicitly accepted by the user before it could stop being shown, because otherwise there is no reason to believe the user actually saw it. (Whether they actually *read* it is another matter entirely.)

* The browser would have to keep a list of https sites you have visited. This would be a privacy issue, so it might have to be disabled for users who have cookies disabled. The existence of such a list would also open up the possibility of an attacker trying to make your list very long, either to knock your bank’s site off the list or simply to DoS you.

* Any site relying on this method could be perma-DoSed in many users’ browsers by a man-in-the-middle attack that presents a different certificate. This could happen for a MitM attack near the user (affecting many sites for that user) or one near the server (affecting all of that server’s new users).

* If the site owner loses her certificate, she’s screwed, because all users will see the “strong warning” when they return.

* It would no longer be safe for e-commerce sites to use third-party credit card processors, because data sent as part of a form post could be intercepted before the user even saw the “light warning”.

* Data or privileges intended for a “real” https site could fall into the hands of a MitM attacker taking advantage of the SSH model. Even if we made sure that all cookies and passwords were deleted upon changing the certificate setting for a site, we’d still have to worry about session restore, the JavaScript same-origin policy, CAPS, HTML5 localStorage, HTTP auth, various caches, and Greasemonkey. (This was also a huge weakness in browsers that used a warning dialog instead of an error page for invalid certificates — users thought it wasn’t a big deal to accept a certificate temporarily in order to “just view” the page.)

Meanwhile, real DV certs are cheap and getting cheaper.

If you really want to bring down the cost of using https, convince DreamHost to implement TLS SNI. I’m paying way more for a unique IP address (from DreamHost) than I am for my DV cert (from GoDaddy).

I don’t know a lot about this but I’ll butt in anyway 🙂 I think Jesse’s first point on trust is the one that matters most to me. SSL isn’t enough to prevent a man-in-the-middle attack when I’m not sure about the security of the network between me and my final destination. The unsecured wireless network is the easy one that most people can grok. When I’m connecting to bugzilla.mozilla.org, I want to know that I’m really connected to bugzilla.mozilla.org before I submit my credentials. That’s really, really important to me. I want to be sure that it’s not some man in the middle pretending to be bugzilla.mozilla.org and getting my bugzilla credentials. So, for me, it’s not at all about whether a CA or even Mozilla trusts bugzilla.mozilla.org, but that I trust it and I can trust that it’s what I’m actually looking at when I hit submit on the login form.

That’s not really relevant here… if it’s a situation where that’s a potential problem, you should buy a cert from a CA. I’m not saying that trust is worthless. I’m just saying it’s not always necessary. Two examples:

1. What about the various applications that exist on a corporate LAN? Every company has them. They are only accessible when you’re connected either in the office or via VPN. The risks are low, hence they are self-signed. Encryption is still important to prevent snooping, but the MitM attack risk is extremely minimal in such a case. Most companies either self-sign (annoying error message) or don’t use SSL at all to avoid the annoyance (which is bad).

2. It would also encourage things like router configuration pages, home servers, and other devices to not communicate in plain text. They are all using HTTP right now, which is also subject to MitM attacks. Making SSL easier to use for these applications doesn’t in any way increase the odds of a MitM attack or make one easier. It just means less unencrypted traffic (passwords, data). That’s a good thing. Manufacturers are avoiding SSL because browsers show errors. Technology-wise, most devices these days use 200MHz processors and could easily serve their admin interface over SSL. Many also use various open source HTTP servers that support SSL in some way.

Bugzilla is a public site with confidential information. It should have a trusted cert. My personal router/server/other device should be able to support SSL without prompting me with an error.
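The device scenario above is already cheap to implement on the server side. A minimal sketch using only the Python standard library; the file names "device.crt" / "device.key" are assumptions (a self-signed pair generated with OpenSSL), not anything a real device ships with:

```python
import http.server
import ssl

def make_tls_server(certfile: str, keyfile: str, port: int = 8443):
    """Build an HTTPS server for a device admin page from a self-signed cert.

    The certificate provides encryption only; nothing here involves a CA.
    """
    server = http.server.HTTPServer(
        ("0.0.0.0", port), http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    # Wrap the listening socket so all traffic is encrypted.
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    return server

# Usage (assuming device.crt/device.key exist):
# make_tls_server("device.crt", "device.key").serve_forever()
```

The point of the sketch: the encryption itself is a few lines; the only obstacle is how browsers present the resulting self-signed connection.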

If someone on your corporate or home network can watch your traffic, they can probably modify and MitM it too. So if you’re using encryption without authentication, they can snoop it with just a little more effort than snooping plain-text traffic.

@Jesse Ruderman: The effort to fire up WireShark or one of the many easy-to-use packet sniffers is pretty trivial. AFAIK there’s no one-button MitM tool out there.

I think the discussion deviates quite a lot from the point Robert made. Why does encryption have to be tied to trust?

The first thing that was very confusing for me as a user was the change in handling of the URL bar in terms of colour. As I see it, the distinction between encrypted and unencrypted sites has gone entirely after upgrading to FF3, which I personally find a sad thing; the bar stays white all the time.

I’ve been in IT for a long time, and I check the certs of sites I really need to trust and expect them to be signed by a source I can trust as well. But for me it is not a concern that the private web interface to my email has a self-signed cert, because I know what it must look like in terms of what the cert itself says about itself and what the one signing it says. The most important part here is that I have an encrypted channel. For that case, it is also very easy to get around this annoying page by simply importing this self-signed CA into FF, and the site cert is then recognised as signed by an “authority” I trust.

So why does nobody think about what the user sees and knows, or better, got used to? I mean the normal user who just uses a browser and does not know anything about the difference between certs, nor has even heard of EV SSL. I’m far too deep into ICT and do security audits myself. But why did FF throw out the coloured URL bar? At the moment, there is no visual difference between visiting an HTTP site, my private https site with a self-signed, self-made CA included in the browser, bugzilla, or even my online bank’s website. Why not use the colours used before in a sensible way so the average user has at least some form of info on what is going on? My suggestion:
1. Website uses self-signed cert directly => make it another shade of white to have some difference to the HTTP sites but stay within white.
2. Websites using some self-made CA => use a new colour like orange
3. Normal CA signed website cert => as it was, use yellow
4. EV SSL => best would be to use yellow as well, as it’s about the same as many normal SSL certs, but I’m happy with green or something like that. But IMHO, this was only invented so some companies could take their cash cow one step further.

As a next step, I’d also welcome a standard feature displaying the key data of a cert right under the URL bar or in a similar place, so I can check which website the cert is issued to. I always have a very bad feeling if I see certs issued to something like *.domain.com. If this is supposed to be a professional site, I’ll leave. If a site involved with money can’t afford a cert for every name they use, hmmm….
If this kind of information is displayed every time a user visits a site, the user’s awareness of what is going on could be much better. Like:
– Site with cert issued by Some official CA, valid from a to b, encrypted with 128 bits
– Site with cert issued by a private CA, valid from a to b, encrypted with 128 bits
– Site without verification, encrypted with 128 bits
– Wildcard site *.domain.com with cert issued by Some official CA, valid from a to b, encrypted with 128 bits


I think the security purists (like Jesse, apparently) are missing how sites are really used in the real world.

People with laptops end up using many different connections. Every time I use a new open wi-fi network, I’m taking the chance that my communication is being intercepted. If an SSH-style system were used by default, then I would notice a man-in-the-middle attack near my end of the network (the most likely by far) when I switch connections.

For most of the sites I have IDs on (Slashdot, LWN, …) this is “good enough”. I’m willing to risk the possibility that somebody has tapped into my connection far enough upstream to intercept all of my communications right from the first time I visit the site. But it would be nice to at least deal with the much more likely scenario that somebody at the local internet cafe is intercepting my communications.

I’m very much in favour of silently accepting self-signed certs, with warnings only if a known self-signed cert mysteriously changes. Of course they wouldn’t benefit from the more secure looking interface enhancements that CA-certs get.

… Ami.

@Robert: I’m not going to argue the whole “trust vs. encryption” and related issues. My point is simply this: every time this general topic comes up (and the Slashdot article is no exception), people seem to start with the proposition “SSL certs are expensive, people can’t afford to get SSL certs from a CA” and then use that as a jumping-off point into an argument for why accepting self-signed certs would be a good idea. My point is simply that regardless of whether accepting self-signed certs is a good idea or not, the premise of this argument is false, at least for lots of common use cases people might be concerned about (e.g., protection of a personal site). I suggest people revise their premises and make their arguments assuming the availability of no-cost CA-issued certs.

As a side note, those people discussing the issue of SSL connections to routers, etc., may find bug 416842 of interest.

> Why is “trust” bound so tightly to encryption? Why can’t a medium be encrypted without being trusted?

Because without being able to trust who you’re talking to, the encryption isn’t worth a damn (as pointed out by others above).

I think the issue here is that you’ve confused identity verification with trust. SSL only provides you with identity verification and doesn’t say anything about you being able to trust someone. If you trust any site that goes over SSL, I have the perfect site for you…

What do you mean “specified in DNS”? If we had DNSSEC we could solve a lot of problems, but SSL exists in part because we cannot trust a thing regular DNS tells us.

Anyone can create their own custom root and add that to their browser. If you’ve got multiple machines to secure and don’t want to buy certs that’s the way to go.
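The “custom root” approach above can be sketched with standard OpenSSL commands; the file names and subject names here are illustrative only:

```shell
# 1. Create a private root CA key and self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 365 -subj "/CN=My Private Root"
# 2. Create a key and certificate signing request for the internal server.
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=intranet.example"
# 3. Sign the server request with the private root.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365
# 4. Verify the chain locally.
openssl verify -CAfile ca.crt server.crt
```

Import ca.crt into each browser’s trust store and server.crt is then accepted without warnings on those machines, which is exactly the per-machine setup step the next comment argues is impractical for consumers.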

> Anyone can create their own custom root and add that to their browser. If you’ve got multiple machines to secure and don’t want to buy certs that’s the way to go.

But for consumers looking to control one of the many home devices now accessible via web browser, that’s not really practical. Most won’t understand how to do that. Not to mention you need to do that for each device.

Regrettably, all the ‘but I just want security’ arguments really do fall short, as initially cleared up by Jesse. It’s not a matter of being a purist, but rather that an automatically accepted self-signed cert could become a seriously bad issue for anyone visiting important sites for the first time on an insecure network. (For instance, a business traveler who takes a clean ‘loaner laptop’ with him/her, or one who takes a family computer but then logs into work resources.) Domain validation may not be really sturdy, but it’s still many levels better than just letting a compromised DNS zone point you to a self-signed server that matches perfectly given the lack of a credible CA.

A secure connection to a malicious server is a worse situation than an insecure connection to a legitimate resource, as the former gives you a false sense of security, while the latter makes you perfectly aware of the risk.

It’s a real pity that this adversely affects self-signed certs, as I would like to see this encryption become more usable*, but the potential for fraud far outweighs the convenience. As Jesse pointed out, larger adoption of SSL SNI would help tremendously, as this has been one of the biggest holdups for using cheap certs!

*As pointed out – there are *many* places that really should be using SSL but aren’t, for a variety of reasons. Also, the only way I can think of to handle SSH-style checks of key fingerprints would be to print the key fingerprints on the backs of ATM cards and bank statements… and people /still/ wouldn’t check them. (Unless the browser requested the n-th and m-th digits be entered in a box the first time – but soc. eng. could tackle that.)

You are asking for browsers not to warn users when a website cannot be authenticated (“trusted”, as you put it). This is simply unacceptable, as this is one of the major problems SSL solves. People expect HTTPS sites to be secure.

I think the real problem you need to focus on is: how can we make certificates cheaper, or perhaps free, while at the same time maintaining high-quality validation by CAs?

@Remy: You obviously didn’t read anything before commenting. “People expect HTTPS to be secure” is a fallacy. Just look at phishing problems. Phishing sites are often served over SSL. That’s the problem EV SSL was created to fix, and there’s evidence to suggest it didn’t work.

CAs don’t do “high quality validation”. Phishers have been using signed certificates for a long time. You can often buy them with a shared hosting account (which you can get with a stolen credit card).

The point is encryption shouldn’t result in an error. The page being encrypted isn’t an error, it’s a feature. The site is no less trustworthy than if it was served over port 80.

By your logic, every non-HTTPS page should display an error message, since it too cannot be authenticated.

MitM attacks don’t increase because of SSL. HTTP is just as vulnerable. If you have data to suggest otherwise, please cite it.


The point is, SSL certificates guarantee that when you are connected to a certain domain, the connection to that domain is secure (authenticated and encrypted) and not hijacked. Whether you trust that domain is up to you.

Phishing works by getting people to trust malicious domains. SSL and PKI do not guarantee that any given site is trustworthy, only that it is who it claims to be. EV certs can associate a business name to a domain, but then you have to decide whether you trust that business.

> By your logic, every NOT https page should display an error message since it too can not be authenticated.

I did not suggest that at all. I made no mention of conventional HTTP. People don’t expect non-SSL sites to be authenticated and browsers do not indicate them as such.

> MitM attacks don’t increase because of SSL. HTTP is just as vulnerable. If you have data to suggest otherwise, please cite it.

This is not at issue. Again, HTTP connections don’t claim to be authenticated.

> This is not at issue. Again, HTTP connections don’t claim to be authenticated.

Neither do self-signed HTTPS connections, right?


> Neither do self-signed HTTPS connections, right?

…but if the browser shows the same chrome for it as it does for a CA-signed one, it looks every bit as good as your Financial Institution of Choice or your Secure E-mail Login Page. Funny what might happen if your DNS got hijacked and one of those sites was using a self-signed cert, eh?

@John, good point. However, there is no reason the browser needs to show special chrome … just let my connection go through and don’t bother me. If I’m going to enter my credit card, I’m gonna check for a good CA-signed certificate, but otherwise I probably don’t care.

> Neither do self-signed HTTPS connections, right?

Correct. You get a cookie.

> …but if the browser shows the same chrome for it as it does for a CA-signed one, it looks every bit as good as your Financial Institution of Choice or your Secure E-mail Login Page. Funny what might happen if your DNS got hijacked and one of those sites was using a self-signed cert, eh?

I specifically said:

> The user interface should indicate that the channel is encrypted and communication is unlikely to be intercepted between the user and the server. It should note if there is any change (just like SSH notifies the user if the signature is changed between sessions). Other than that it should be transparent.

Self-signed certificates aren’t an error, and shouldn’t be treated as such. They should be treated as a different grade of SSL, just like we have SSL and EV SSL right now.

You can indicate encryption without implying that the domain is authenticated.

And if I were phishing, I’d just get a domain cert for my phishing domain and use a technique such as:

https ://

Most users wouldn’t know the difference anyway unless the browser had Phishing protection. No dialog, the site looks secure… end of story.

Robert: the problem there is that the site may want SSL for both encryption *and* protection against DNS hijacking. For example, the AUS connection that Firefox uses to download updates is SSL not for the encryption, but for the assurance that we are in fact connecting to AUS. What happens if you install Firefox on a laptop, then fire it up for the first time on an untrusted WiFi hotspot, and someone MITMs you to serve you a malicious update? Granted, the update code could be more strict than the general case, but the point remains. You bookmarked a site; you visit that site; you expect to get that site. The site uses SSL to ensure that you get that site. MITM attacks on WiFi are not that hard; there are tools out there these days that can do it out of the box. If someone can stick a self-signed cert into the response, and your browser accepts it without complaint, you have lost.

@Ted Mielczarek: True. And they still can. My proposal is that the UI distinguish between signed and self signed SSL certificates. The user would still know AMO is trusted, or their bank is trusted. But they would also be able to access things on their intranet, or their personal website using SSL without being told there’s an error.

The proposal isn’t really a technical change. It’s merely a UI change to be more graceful. It could even display a banner, similar to a popup notification, saying that the site is using a self-signed cert. It’s still better than the current error page implementation, and more intuitive for the user, who is likely to encounter self-signed certs.

@Robert, you have my full support; it’s a user interface issue, not a technical one.

Proof that a normal user doesn’t care about or understand the “behind the scenes” difference was given to me just yesterday.

He called and told me that he can no longer use FF3 because it gives him an error accessing the webgui page of his hosting account, that he now has to use IE6, and asked me to fix the problem for FF3.
The problem was easy: the webgui management account was using https under his own domain address, but he does not have a cert for his site, so the provider uses a generic self-signed one.

Lesson learned: the error page FF3 shows is not up to the task for the normal user. It leads to an even worse workaround: go back to IE6. Yuk…

The only easy method a user understands would be a banner explaining the three different SSL versions: self-signed in white with a warning, normal SSL in yellow as the user is used to, and EV SSL in green, the latter two giving the “verified” information as additional data.

A usability study should have been conducted before introducing such a “feature” as this – or it comes down to the question of whether FF3 is a geek’s tool or an end-user product.

It’s frustrating to see the hard work of getting clients to use FF take a bad hit from such bullshit. Sorry, but I’m quite angry about it.


In response to Frank Hecker’s July 19th post: this is not entirely true. Safari does support Startcom. The problem is actually with Startcom itself. Since they moved to the new website, newly issued free SSLs require browsers to have the new root certificate installed. Safari only has the old root cert, so old SSLs issued by Startcom will be recognized but new ones won’t, at least until Apple includes the new root cert in a future upgrade of Safari.

Robert, is there anything new on this front? While the need for self-signed certificates is now at an all-time low, they still exist, notably on embedded devices. Your approach seems sensible: treat the HTTPS site as not-quite-secure, with a special padlock icon (and a warning if needed).
