Issues High Traffic Sites May Face When Disabling TLSv1.0

The assumption that TLSv1.0 is hanging on only to serve Internet Explorer users on Windows XP (and therefore that disabling the protocol is trivial) is widely held but most certainly false. Depending on the industry and target demographics of a website, this can be a change that requires careful planning and metrics gathering. Disabling TLSv1.0 on a blog is easy (https://hjcotton.net/ – now TLSv1.2 only!) but on large e-commerce websites [particularly those targeted at business-to-business sales, where technology upgrade cycles may occur at a slower pace] this protocol still directly facilitates sales. According to a blog post by Cloudflare on March 12, 2018, less than 4% of their total customer API traffic uses either TLSv1.0 or 1.1; but on an e-commerce site that traffic may still constitute significant revenue from customer orders.

Let me make it clear – TLSv1.0 needs to go. It’s vulnerable to BEAST and POODLE. PCI DSS requires TLSv1.0 to be disabled in favor of allowing only TLSv1.1 and 1.2 connections after June 30, 2018. This deadline has already been extended once by two years, and another extension seems unlikely this close to the date. As of the date of this post, nine out of ten top e-commerce sites tested still support TLSv1.0 – the only exception being Kohl’s.

curl -v -s --tlsv1.0 https://github.com/ -o /dev/null 2>&1 | grep -E "(< HTTP|error)"

To test sites for TLS version support you can use the curl command above (an “HTTP/1.1” response header indicates success in this case) or just punch domains into SSLLabs’ Server Test. Note that newer curl releases treat “--tlsv1.0” as a minimum version; add “--tls-max 1.0” (curl 7.54+) to pin the handshake to TLSv1.0.
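If you’d rather script the check across several hosts, here’s a short Python sketch of the same idea – my own, not from either vendor’s post – that pins a handshake to a single protocol version:

```python
import socket
import ssl

def supports_tls(host, version, port=443, timeout=5):
    """Return True if `host` completes a handshake pinned to `version`.

    Caveat: False can also mean your local OpenSSL refuses the legacy
    version client-side, so treat results for 1.0/1.1 with care.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verifies certs/hostname
    ctx.load_default_certs()
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

# Usage:
#   supports_tls("github.com", ssl.TLSVersion.TLSv1)
#   supports_tls("github.com", ssl.TLSVersion.TLSv1_2)
```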

Aside from two blog posts from the GitHub and Cloudflare engineering teams in the last few weeks, there hasn’t been much insight into the thought and process large organizations are putting into disabling it. For anyone looking at this June 30, 2018 deadline while charged with maintaining PCI compliance, these posts are invaluable.

How Did They Do It?

GitHub gave notice and a timeline on their blog back in February 2017 about the upcoming changes, with a follow-up post in February of this year. In addition to the notice, they also held a single one-hour brownout of TLSv1.0 connections as a temporary heads-up to draw attention from anyone still using that protocol. Mind you, GitHub probably isn’t very concerned with maintaining support for old IE versions on Windows XP – their customer base is pretty technically savvy and has likely long since upgraded to TLSv1.1+-capable browsers. However, their audience also integrates with their service far beyond in-browser use (think automated build tools, integrated support in IDEs, etc.). One does not need to look particularly far on Twitter to see that dropping TLSv1.0 caused pain for at least some of their customer base.

TLSv1.0 has been around for nearly 20 years, so it’s in almost everything! It’s in all manner of tools that continue to work silently in the background with much depending on them – and in an instant they’re going to stop working unless there’s a plan in place to [re]discover and upgrade or replace them.

Cloudflare is similar to GitHub in that their customer base is technically inclined (at least in regards to their API and Dashboard – end-user pass-through connections using their CDN service can continue to use TLSv1.0). Like GitHub, Cloudflare has chosen to hold a TLSv1.0 brownout period. Unlike GitHub, they are also detecting when API and browser connections to their customer dashboard use the old protocol and warning those customers of the upcoming change.

Finding Out How Much Dropping TLSv1.0 Will Affect You

According to CanIUse, TLSv1.1 and TLSv1.2 are either lacking or not enabled by default in every IE version 10 and under, in Firefox before version 27, and in Chrome before version 30. Note that browser support for 1.1 and 1.2 is almost identical, so some organizations are choosing to keep support for only 1.2. PCI DSS only requires dropping 1.0, so whether to maintain 1.1 is a choice that can be made from traffic stats.

If you don’t have any log sources that can tell you which TLS version clients are connecting with, your next best bet is to create a Custom Report in Google Analytics [or your analytics engine of choice] and filter on Internet Explorer 10 and below, as those browsers will probably be your pain points. They support TLSv1.1+, but it is not enabled by default.
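If you do have raw access logs with user-agent strings, a rough first pass can be scripted. Here’s a Python sketch – the version cutoffs come from the CanIUse data above, while the pattern matching is my own simplification:

```python
import re
from collections import Counter

# Browsers that lack TLSv1.1+ by default: IE <= 10, Firefox < 27,
# Chrome < 30 (cutoffs per CanIUse). IE 11 drops the "MSIE" token
# (and enables TLSv1.1+ by default), so it falls through to False.
def likely_tls10_only(ua):
    m = re.search(r"MSIE (\d+)", ua)
    if m:
        return int(m.group(1)) <= 10
    m = re.search(r"Firefox/(\d+)", ua)
    if m:
        return int(m.group(1)) < 27
    m = re.search(r"Chrome/(\d+)", ua)
    if m:
        return int(m.group(1)) < 30
    return False

def summarize(user_agents):
    """Tally how much logged traffic is at risk of losing access."""
    return dict(Counter(
        "at risk" if likely_tls10_only(ua) else "ok" for ua in user_agents
    ))
```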

Detecting TLSv1.1+ Support From Client Connections

The single-hour/day brownouts of TLSv1.0 connections that GitHub and Cloudflare used/are using are useful for discovering how your site is being used by A) internal tools your company or third parties may be running – e.g. automated scripts verifying that your website is online – and B) your customer base.

Besides programmatic use of your site either internally or by third parties for which ample notice and a brownout period can address, how can you let customers browsing your site know when their browser is connecting over TLSv1.0 and will cease to load your site when the switch is flipped permanently? There are two client-side options:
1. Use a service like How’s My SSL to determine what version of TLS is being used by client connections. With their service you can make a request to How’s My SSL’s API from JavaScript and parse the results for TLSv1.0 connections. If TLSv1.0 is detected then show a warning to that user in the browser.
2. Create a site on a domain that does not support TLSv1.0 connections. Make an AJAX request to that site on page loads – if the connection fails then the lack of TLSv1.1+ support is likely the cause. As with option 1, show a TLSv1.0 warning to that user.

For bonus points with either option: log the user agent so you can build some stats on what browsers are being affected.

June 30, 2018

As of the date of this post, sites accepting credit card payments are but 90 days away from having to turn off TLSv1.0 to maintain PCI compliance and almost all large e-commerce companies have not made the switch preemptively. This TLS change will come down to the wire and it will be interesting to see exactly when and how these sites make the transition – and what fallout there might be.

The Perfect Free Password Management System

There’s no shortage of password management sites and applications. Chances are you’ve got one built into your web browser right now, and there are also a plethora of standalone options. However, there are pros and cons to each type. Below is a breakdown of each, as well as a long-known alternative I recommend.

While we often hear about the need to use a password manager, it’s not necessarily clear why. It’s important to note that regardless of the strength of a password, using the same one on multiple sites means that when even one of those sites is compromised, your stolen credentials can be used by attackers to log onto the other sites that share that password. This process is called credential stuffing. Troy Hunt’s HaveIBeenPwned service demonstrates this problem clearly, as it can tell you whether your email address [or your password] has been encountered before in a breach.

Option A) In-Browser Password Management

This type of password manager is built right into Chrome, Firefox, and Safari. Hell, it might even be in IE/Edge for all I know. This style of password manager displays an in-browser prompt asking you to save or update your credentials when you enter them.

Pros:

  1. Built into your browser and offers a reasonable level of security. There’s nothing to install – it just works!
  2. Integration level is excellent – username and password fields are automatically filled in without intervention and you are automatically prompted to save or update credentials as you use them.
  3. Password syncing via Firefox Sync and Google Chrome browsing profiles. For example, if you use Chrome on your phone and on your PC, your passwords, bookmarks, and even browsing history can be synced between all of your devices.

Cons:

  1. By default, built-in browser password managers auto-fill stored usernames and passwords, which makes them vulnerable to third-party tracking.
  2. They don’t offer a password generation feature that can choose random and unique passwords for every site.

Option B) Typical Standalone Password Manager

This style of password manager is either a standalone application or a browser extension. LastPass, 1Password: the list goes on and on.

Pros:

  1. Strong, unique password generation features discourage password reuse.
  2. Syncing features to keep your credentials stored beyond just having them on a single device.
  3. [Generally] some level of browser and app-integration for inserting saved credentials more easily.

Cons:

  1. Costs [potentially]. This may be a one-time fee or a recurring cost à la LastPass.
  2. Third party dependency. How long will a given password management service be around, and once they are defunct what happens to the passwords you’ve stored with them?

Option C) My Recommendation

This system is technically a standalone password manager from Option B [though it is free and open source] and is paired with the cloud storage/syncing platform of your choice [Dropbox, in this example]. With a small amount of setup you can have a robust password management system that works everywhere.

Pros:

  1. Strong, unique password generation – just as with Option B, the Typical Standalone Password Manager.
  2. Password syncing is available through any file sharing service. In this example I’ll use Dropbox.
  3. Free.
  4. Works on every platform.

Cons:

  1. Lack of browser and app integration for inserting credentials.
  2. Minor limitations in the syncing of the password database.

Details

KeePass is used as the password manager guts of this system. It has variants that work on Windows, Mac OS, iOS, Android, and Linux.
KeePass lets you securely store passwords as well as any secret info [perhaps the gibberish you entered as the answers to some sites’ mandatory security questions?] in a single encrypted database file. It can [and should] be secured with a password and offers password generation features that will work for obscure password requirements; i.e. “8-16 characters with exactly one number and no more than two special characters”.
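To illustrate the kind of constrained generation KeePass offers, here’s a toy Python sketch of the concept (this is not KeePass’s actual generator, and the special-character pool is an arbitrary choice for the example):

```python
import secrets
import string

SPECIALS = "!@#$%^&*"  # arbitrary special-character pool for this example

def generate_password(length=16, digits=1, specials=2):
    """Random password with exactly `digits` numerals and exactly
    `specials` special characters; letters fill the remainder."""
    if digits + specials > length:
        raise ValueError("constraints exceed requested length")
    chars = (
        [secrets.choice(string.digits) for _ in range(digits)]
        + [secrets.choice(SPECIALS) for _ in range(specials)]
        + [secrets.choice(string.ascii_letters)
           for _ in range(length - digits - specials)]
    )
    # Shuffle so the constrained characters don't sit in fixed positions.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)
```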

Dropbox provides the file syncing capability for the KeePass password file in this example, but any private file syncing site like it will work. Go ahead and sign up for a free account – the limited storage is more than enough for storing the tiny KeePass database.

Setup Instructions

  1. Sign up for Dropbox [or the file syncing site of your choice] – this should be self explanatory, but choose an excellent and unique password for this account since it will contain a file with all of the credentials for your digital life in it. Install it on all of the devices you need to sync passwords to and log in with the account you created.
  2. Download and install KeePass on all the devices you’ll be using it on. Choose the same version across the board to eliminate any compatibility issues. In this example I’ll use KeePass 2.x.
  3. Create a new password database and store it on your PC in the Dropbox shared folder. Password protect it by choosing a secure and unique password, since the difficulty in brute-forcing the database file is the only security this password manager has if it falls into the wrong hands. Don’t use the same password you just used for Dropbox.
  4. Create your first stored password entry. Save it and then save the database file. Watch as it magically syncs to the great cloud.
  5. From Dropbox on another device, open the password database file. How to do this can be unclear on mobile, so see the Examples section for how opening the file in MiniKeePass from Dropbox on iOS works.

Examples

  1. Using KeePass on MacOS and syncing with Dropbox
  2. Using Dropbox on iOS to sync the KeePass database file and open it in MiniKeePass

Caveats

This is not a foolproof password syncing system. If you add an entry on your PC and then add one on your phone while it’s offline, there will be conflicts syncing the database file with Dropbox when it comes back online. Since this is a single encrypted file rather than storage of individual credentials, the syncing system isn’t going to seamlessly merge changes like this. Avoid these scenarios by ensuring the password file is synced prior to making changes in it.

Conclusion

Password management need not be complex or expensive. The days of ‘choose a memorable password’ are, unfortunately, long behind us. Choosing unique, unmemorable passwords and storing them conveniently and securely is the best protection against credential stuffing attacks when your personal information is exposed in a breach. Choose the password manager built into your browser, or succumb to the advertising of LastPass, or use my recommended setup for the perfect free password management system – just choose and use something!

Bringing Firefox Back!

Firefox has had no shortage of controversial security and privacy changes in recent memory. Frankly, the sum is enough to make you worry about the future of the browser itself. A few of Mozilla’s more interesting choices that come to mind are:

  1. Integrating Pocket (which it now owns)
  2. Selling advertising on new tab pages
  3. Choosing Yahoo over Google as their default search engine
  4. Marketing itself with connections to the Mr. Robot TV show

I’ve fielded more than one malware removal question after telling someone to “just search for and download 7-Zip” because of their multi-year default to Yahoo search. Try finding the legitimate download in Yahoo’s search results compared to Google’s!

With these fiascos now behind it, we’re left with but a few transgressions preventing it from being, if not as good as we’d fondly like to remember it, at least the only real competitor to Chrome. Without being able to travel back to simpler times where Firefox 3 and the Firebug browser extension brought web development to life for me, I’d have to say that Firefox has never been better. All that’s left is ditching Pocket and cleaning up the new tab screen.

Disabling Pocket

  • Visit about:config and search for / change “extensions.pocket.enabled” to “false”.

Cleaning up the New Tab Page

  • Open a new tab and click on the Settings button. Uncheck all the options.

That’s it. Restart Firefox and rebel, rebel, rebel!

In-Flight Page Modification and Content Injection by ISPs, Hotels, and Wifi Access Points

Are you ever worried about connecting your phone or laptop to your hotel’s Wifi? Something about the scratchy towels and confusing array of door locks does little to inspire my trust in their network security. Regardless, I’d prefer to have Wifi speed while perusing /r/gifs/.

So, I connect to the hotel’s Wifi access point, go through their login portal with the provided password, and I’m good to go. I head to http://www.reddit.com/ and all is well. Except for that advertisement that’s in the footer of every page I visit. I know Reddit doesn’t have an ugly animated banner ad at the bottom of their pages. I know my laptop is free from adware. The HTTPS version of Reddit has no such banner ad, so what’s going on? /r/gifs/ and baby bat burritos will have to wait because something’s not right.

Here’s where the problem is – browsing unsecured HTTP pages in your browser, à la http://www.reddit.com/. Most people assume that HTTPS [SSL] is only for your important, secretive browsing: think banking or paying your bills. Obviously, it’s important there because it stops third parties from seeing information like your credit card number. What else does HTTPS do? It guarantees that https://www.bankofamerica.com/ is actually Bank of America. In addition, it keeps any prying eyes from seeing the content of your traffic as it makes its way to and from the bank’s servers. More importantly, it keeps anyone from modifying that traffic in-flight as it makes its way across the Internet.

You can see where the problem lies: if I were on https://www.bankofamerica.com/ browsing for car loans, it would be a very bad scenario if my ISP, which wants to find new ways to make money with advertising, could inject competitors’ ads for other banks onto Bank of America’s pages as they come back through its servers to my browser. Worse, what if it modified the loan rates I saw on those pages so that I ended up shopping for a loan elsewhere? This is what HTTPS prevents. It ensures that content remains encrypted and unaltered from Bank of America all the way to your browser.

So, where does my budget hotel ad injection scenario fit into this? For years it’s been known that ISPs can and do inject content into pages. Comcast is notorious for using this to display high monthly bandwidth usage warnings. While previously a technology only ISPs might use, it’s now commonplace enough that any provider of an Internet connection, whether it be your hotel or local coffee shop, can see the money to be made from injecting advertisements into the websites of people using their access points. Alternatively, maybe they’re getting a good deal on a Wifi/Internet package from a vendor, and it’s the vendor who is ultimately making the money from the advertising/tracking opportunities.

This doesn’t stop at ISPs, or coffee shops, or hotels.
Comcast has extended their Ad injection to their Xfinity hotspots.
AT&T injects JavaScript into pages requested over its data connections.

Clearly this is a problem that is getting worse instead of better.

The takeaway is that in-flight page modification and/or content injection is a common tactic and it’s unlikely to get better. What can be done? You can complain on the Internet and hope to shame companies into stopping these practices. That could work. However, the best solution is ultimately to use the HTTPS versions of sites whenever possible to prevent webpage content from being injected or modified. Websites without HTTPS availability at the moment will start seeing pressure from both browser vendors and search engines to switch. Slowly but surely the web will become HTTPS by default; all the way down to animated gifs of baby bat burritos.

Add Character Sets to CFPOP With JCharset

This is a follow-up post to Fixing CFPOP.

If you’re using the CFPOP tag to handle any volume of email you will eventually come across character encoding issues. It’s only a matter of time! These limitations tend to be with the underlying JVM and not with ColdFusion itself (which is unusual). The good news is that the fix is quite easy.

Grab JCharset and place the extracted .jar file in one of the following spots in your CF install path:

CF8: /runtime/lib/
CF10: /jre/lib/ext/

Restart CF.

So… is it working?

In order to find out if you have JCharset in the right place, download the “CharsetTB” script from
http://www.sustainablegis.com/projects/i18n/charsetTB.cfm.

Once the CF service is restarted, running this script will list all of the character sets ColdFusion has access to. If you can find “UTF-7” in the listing: good news – you placed JCharset in the correct directory! If it isn’t in the list, try another directory where CF will pick up the JAR when restarted.

Spidering / Link Checking With wget

I use XENU for link checking sites and finding missing assets but I couldn’t figure out how to make sure that it was following the redirects it encountered. For example, if an inline image source is “/images/sitelogo.jpg” but that 301 redirects to “/images/sitelogo-new.jpg”, XENU will report the redirect (as an error if you prefer), but what I really want to know is whether the destination of that redirect was a 200 OK (or a 404, or something else unintended). It wasn’t clear to me if XENU was ensuring that the file existed after being redirected.

I tried out a few other free tools but none seemed even as good as XENU. It was then that I stumbled upon the “spider” option in wget. You can set it free on a URL like so:

wget --spider -l 2 -r -p -o wgetOutput.log http://somesite.net

This will spider the URL up to 2 levels deep and ensure that any inline assets on the pages within those levels are also requested. The “-p” option ensures that inline assets like images or CSS are checked on a page even when the maximum depth from the “-l” option is reached. The output is logged to wgetOutput.log.

At the very end of wgetOutput.log you’ll find a list of any broken links. You will also get a ton of other useful information about every request that was made – so you know exactly what it’s doing! A single spidered request looks something like this:

Spider mode enabled. Check if remote file exists.
--2013-08-06 20:10:40--  http://somesite.net/images/sitelogo-new.png
Reusing existing connection to somesite.net:80.
HTTP request sent, awaiting response... 200 OK
Length: 4153 (4.1K) [image/png]
Remote file exists but does not contain any link -- not retrieving.
 
Removing somesite.net/images/sitelogo-new.png.
unlink: No such file or directory

Other Useful Options

Specify a user agent:

-U "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"

Spider a site that forces you to log in:

  1. Get the Cookie Exporter Add-on for Firefox.
  2. Log into the site you want to spider.
  3. From Firefox, run Tools -> Export Cookies -> cookiesFile.txt
  4. Use the “--load-cookies” option:
    --load-cookies cookiesFile.txt

Complete Example:

wget --spider -l 2 -r -p -o wgetOutput.log -U "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)" --load-cookies cookiesFile.txt http://somesite.net

Time Management – The Pomodoro Technique

This isn’t a programming post but it is something that is important to developers: time management.

It can be difficult to “get in the zone” and stay there for a length of time because of general distractions in the office – phone/email/IM/etc. Some things are out of your control but I have found a time management method that works quite well when I am tackling larger tasks and not putting out fires: the Pomodoro Technique.

Basically, the Pomodoro Technique breaks up your work into 25-minute coding blocks separated by 5-minute breaks. After 4 of those 25-minute “pomodori” blocks of work you take a longer 15-minute break. I use the term “break” pretty loosely because I generally take that time to check email and IM and respond to anything that needs my attention.

This doesn’t work EVERY day – some days you are jumping around between many small tasks and can’t take advantage of it, or there’s a lot of email or IM activity that you need to be a part of. But when you are working on larger tasks and can temporarily limit communication distractions, I have found this regular cycle of uninterrupted work followed by dedicated time for email to be a surprisingly productive idea (considering how simple it is).

There are a variety of timers available for absolutely every platform (you can give it a go right from your browser with a site like Tomato Timer). However, I’m rather taken by the desktop app Tomighty, as I can set it in the system tray and forget about it until it notifies me that the pomodoro (or break) is over.
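For the curious, the schedule these timers implement fits in a few lines. A toy Python sketch using the classic defaults (any real timer app is more elaborate, of course):

```python
import time

WORK, SHORT_BREAK, LONG_BREAK = 25 * 60, 5 * 60, 15 * 60  # seconds

def pomodoro(cycles=4, tick=time.sleep):
    """Run one full round: `cycles` work blocks with short breaks
    between them and a long break at the end. `tick` is injectable
    so the schedule can be exercised without actually waiting."""
    schedule = []
    for i in range(1, cycles + 1):
        schedule.append(("work", WORK))
        schedule.append(("long break", LONG_BREAK) if i == cycles
                        else ("short break", SHORT_BREAK))
    for phase, seconds in schedule:
        print(f"{phase}: {seconds // 60} minutes")
        tick(seconds)
    return schedule
```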


Another ColdFusion SerializeJSON Bug

Adobe recently released ColdFusion 10 Hotfix 11 and it fixed Bug #3338825! I am positively ecstatic because invalid JSON causes me a lot of trouble. This hotfix even addresses two other JSON serialization issues, so it appears to be good news all around.

However, it did not address Bug #3337394, in which the string “No” is turned into a boolean false (“Yes” also returns a boolean true for good measure). This bug is still considered “Unverified” although it was filed in September 2012 (test case below).

A colleague came across another SerializeJSON() bug today that I thought I’d share because I seem to spend a lot of my workday Regexing the input to or output from this function to clean up what it incorrectly handles. [It can’t handle the truth!]

It’s filed as Bug #3596207 and this is its test case showing a numeric string with a trailing period being returned as an integer with a trailing decimal point:

SerializeJSON({a: "1."});

Output (not valid JSON)

{"A":1.}

Expected Output

{"A":"1."}

Test Case Showing Both Issues

SerializeJSON({a: "1.", b: "no", c: "yes"});

Output (not valid JSON)

{"A":1.,"B":false,"C":true}

Intended Output

{"A":"1.","B":"No","C":"Yes"}
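For contrast, here’s what a serializer that respects the input type produces – Python’s json module is shown purely to illustrate the expected behavior, not as a workaround for CF:

```python
import json

# Strings stay strings: "1.", "No", and "Yes" survive the round trip.
data = {"A": "1.", "B": "No", "C": "Yes"}
encoded = json.dumps(data)
print(encoded)  # {"A": "1.", "B": "No", "C": "Yes"}
assert json.loads(encoded) == data
```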

Adobe has made some good progress with the latest hotfix, but there are serious bugs left in functions that have been neglected for some time now. SerializeJSON(), for instance, debuted in CF8 almost six years ago, and after that amount of time these very basic serialization test cases should not fail.

Fixing CFPOP

Some of my ColdFusion projects involve receiving A LOT of email via CFPOP. It’s perfect for the job about 95% of the time. However, that remaining 5% failure rate reveals glaring inadequacies that I have spent significant amounts of time trying to work around.

I generally use CFPOP in a try/catch and fall back to either of the following solutions for troublesome messages.

CFX_POP3
If you’re running CF on a 32-bit Windows server… $40 makes all your CFPOP troubles go away in an instant. The CFX_POP3 custom tag is an absolute bargain because it works with rare character sets, poorly named attachments, special characters, etc. Not once have I seen it fail on attachment filename or character encoding issues.

Limitations
1. Only runs on 32-bit Windows servers
2. Seems to choke on large attachments (10MB+)

POP CFC
The POP CFC project is run by the creator of the CFX_POP3 tag and it contains some handy functions that use underlying Java methods for getting mail from a server and parsing through it. These functions are great for processing messages with oddly-named file attachments that will break CFPOP.

Notes
1. Supports the same character sets as CFPOP.

If you’re using CFPOP or POP CFC, I highly recommend setting up JCharset to allow processing of some of the more “unique” character sets out there. I’ll go over that in more detail in a future post.

Corrupted Queries in ColdFusion 7

For some time now I’ve had an application running on ColdFusion 7 that will randomly throw exceptions because of poorly formed SQL in seemingly random queries. I could not explain the malformed SQL from looking at the queries – they were always fine. Also, the horribly mangled SQL I would see in the exception logs could not possibly have been generated by any conditional logic in the query. Here is an example:

<cffunction name="getUsers" access="public" output="false" returntype="query">
  <cfargument name="username" type="string" required="false" default="">
 
  <cfset var local = StructNew()>
 
  <cfquery name="local.qryUsers" datasource="dsn">
    SELECT usr.username
           ,usr.email
           ,usr.name
    FROM users usr
 
    WHERE 1 = 1
    <cfif Len(arguments.username)>
      AND usr.username = <cfqueryparam value="#arguments.username#" cfsqltype="cf_sql_varchar">
    </cfif>
 
    ORDER BY usr.username ASC
  </cfquery>
 
  <cfreturn local.qryUsers>
</cffunction>

The above function would run fine 99% of the time until the SQL generated would cause an exception. The SQL from the previous cfquery that caused the exception would end up resembling something like this:

ELECT usr.username
      ,usr.email
      ,usr.name
FROM users usr
1 = 1
ASC

Needless to say it has become completely and utterly mangled – and with no way to account for it! Where has half the query run off to?! It was likely only a matter of time until a completely malformed query executed and resulted in data loss or corruption. So… what was the solution to this craziness?

My initial hypothesis was that SQL statements sometimes band together and run away at the thought of being transported to the database server for execution. This may very well have been true! Adobe could not be reached for questioning on this topic but they did release a hotfix that erected a 12 foot high fence around the edges of cfquery in order to prevent any bits from falling out or otherwise escaping at inopportune times.

Handy!

The hotfix is available here.