Why are your tweets more secure than your asset manager’s portal?

Cutting-edge security techniques are now standard on social networks but conspicuously absent from some asset management portals. We explore why, and point out a few things managers could do to better secure their clients’ data.

Over the past few years, two-factor authentication and other advanced security techniques have become standard on consumer banking sites, yet their use in many commercial financial portals remains sparse.

This is somewhat surprising, given that the concept of two-factor authentication is not new – anyone who has worked in a bank in the last decade should be familiar with it, if not by that name. If you have ever been issued an RSA “dongle” and used it to access your desktop, then you’ve used two-factor. By the same token, if you’ve simply tried to log in to your bank account and been sent a verification code via text message – then again you’ve used two-factor authentication.

The basics are simple – the scheme relies on something you know (your username and password) and something you have (a secret code of some sort transmitted to you or generated by a hardware device). The idea is that without both you cannot gain access to the service you are trying to enter – and it is extremely unlikely that some malevolent third party could obtain both at the same time.
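To make the “something you have” concrete: the codes produced by an authenticator app or dongle are typically derived from a shared secret and the current time. The sketch below is a minimal, illustrative implementation of the standard time-based one-time password (TOTP) algorithm from RFC 6238, not any particular vendor’s product:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time passcode from a shared secret (RFC 6238)."""
    counter = int(time.time()) // interval           # current 30-second window
    msg = struct.pack(">Q", counter)                 # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because server and device share the secret and the clock, both can compute the same six-digit code independently – the code is never transmitted, and it expires with the time window.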

Recently, consumer-focused services such as Dropbox and Twitter have joined the party – offering two-factor to secure the data and identity managed within their services – all while the asset management industry lags dangerously behind. This raises the question: why is this not an industry standard across all web-deployed financial applications?

The price is (was) too damn high.

In the past, the answer has certainly been cost, and perhaps to an extent ease of use, but in recent years infrastructure services have opened the technology to the mass market – both in terms of price and user experience. At one time, the only way to put a secret second factor in a person’s hand was via a hardware device – a very clunky process when dealing with large numbers of people. It meant managing an entire operation just to make sure every user of your service had one to hand at the right moment – a logistical challenge if ever there was one.

Codes sent via text message are by far the most popular method nowadays, but even that was a very costly exercise until just a few years ago, requiring organizations to manage bespoke cellular hardware to distribute codes to users.

Technology firms have taken that whole headache from their clients, offering the ability to authenticate via text, via a secure app, or via a dongle if required. They manage the software as a service, or in some more sensitive cases allow the client to host their software on-site. This vastly lowers the barrier to entry and makes it much cheaper and easier for people to adopt this advanced method of securing client data.

The cure and the cause of all that ails.

We cannot claim that adopting two-factor is a completely seamless process, however. In our experience the technical work lies largely in integrating with an existing sign-on system. But, surprisingly, deciding how often to demand the second factor is also a point of friction, and something that needs to be managed.

Making users enter a second code every time they log on becomes somewhat impractical, and we suspect it lowers overall adoption of a service. It is simply annoying to have to enter a second passcode at each and every logon.

To combat this one can employ a technique called passive fingerprinting. This method leverages a web browser’s ability to disclose certain non-sensitive information about a user’s computer, which together forms a unique ‘fingerprint’ of the device being used.

Each time a user logs in, the site computes the current fingerprint and compares it with the one recorded at the previous logon. If the two are identical, or very similar per a statistical comparison, the user is allowed to log on without a challenge. If not, a challenge for a second factor is issued.
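A minimal sketch of that comparison step, assuming the fingerprint is stored as a dictionary of non-sensitive browser attributes and “very similar” means some fraction of attributes still match (the attribute names and the 0.8 threshold here are illustrative assumptions, not a recommendation):

```python
def fingerprint_matches(stored: dict, current: dict, threshold: float = 0.8) -> bool:
    """Return True when the device looks familiar enough to skip the second factor.

    Each fingerprint is a dict of non-sensitive browser attributes; the score is
    simply the fraction of attributes whose values agree.
    """
    keys = set(stored) | set(current)
    if not keys:
        return False                     # no data: always challenge
    matching = sum(1 for k in keys if stored.get(k) == current.get(k))
    return matching / len(keys) >= threshold

stored = {"user_agent": "Mozilla/5.0", "timezone": "UTC-5",
          "screen": "1920x1080", "language": "en-US", "platform": "Win32"}
# Same machine after a browser update: 4 of 5 attributes still match.
current = dict(stored, user_agent="Mozilla/6.0")
fingerprint_matches(stored, current)     # → True: no challenge needed
```

The design point is that a near-miss (a browser upgrade, a new plugin) should not trigger a challenge, while a wholesale change of device should.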

In this way we’ve been able to make the process a whole lot less annoying for our clients’ users, and we would highly recommend the same approach to anyone considering going down this road.

But what’s the big deal? My password is incredibly hard to guess and I never write it down.

A glance through the technology journals of the day brings to light a litany of client security incidents across the web, many in which entire password databases were stolen by hackers and sold to the highest bidder. The chilling reality is that even if passwords are stored hashed, there are techniques hackers can use to crack even seemingly secure passwords in a matter of days or even hours.
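To illustrate why a stolen database falls so quickly: a sketch of the simplest such technique, a dictionary attack against fast, unsalted hashes. The usernames, passwords, and four-word list here are made up for illustration; real attackers run billions of guesses per second against far larger wordlists on GPU hardware:

```python
import hashlib

# A hypothetical leaked table of unsalted SHA-256 password hashes.
leaked = {
    "alice": hashlib.sha256(b"letmein").hexdigest(),
    "bob":   hashlib.sha256(b"Tr0ub4dor&3").hexdigest(),
}

# The attacker hashes a wordlist once and looks the stolen hashes up in it.
wordlist = [b"123456", b"password", b"letmein", b"qwerty"]
lookup = {hashlib.sha256(w).hexdigest(): w for w in wordlist}

for user, digest in leaked.items():
    if digest in lookup:
        print(f"{user}: cracked -> {lookup[digest].decode()}")
```

Here alice’s common password falls instantly, while bob’s survives only because it is absent from the wordlist. Slow, salted schemes such as bcrypt or scrypt exist precisely to make this precomputed-lookup approach impractical.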

If you or your users re-use the same password on different sites, then no matter how good your own security is, it may be compromised by a breach in that of your neighbor. An important rhetorical question for the asset manager has to be: if a hacker uses an externally compromised password (i.e. one that was gained by hacking someone else’s site) to gain access to and steal your client’s data, is it your fault? If a leak is traced back to you, whose reputation will be tarnished?

The good news is that two-factor authentication effectively removes this risk, which is why we’re very happy to have it in our tool belt. If you don’t yet have it in yours, you really should consider adding it very soon.

Inside job?

Another less talked-about risk, but certainly just as critical, is the inside job: a disgruntled employee steals login data from their employer and hawks it on the black market to an unscrupulous third party. We’ll deal with the mechanics of stopping such an attack in our next blog post on internal controls, but the point we would like to raise here is that even if such a leak occurs at your firm, there are things you can do ahead of time to protect your users from further data loss.

It is an awful question to pose, particularly for an organization that may pride itself on – and in fact stake its reputation on – data security, but we think that unless you reason about your controls from the standpoint of a post-breach scenario, you are really only doing half the job. Most people might suggest that once the horse has bolted, it’s useless to shut the stable door, but the reality is quite different.

Caught with one’s hand in the honeypot.

A technique known as a “honeypot” is one such method. It involves setting up dummy users with easily crackable passwords, the idea being that if the password data were subjected to a brute-force attack, these passwords would fall first and would be used to log in to the site in the early stages of an attack. The owner of the site, knowing the users to be dummies, can set up alerts that fire on their login. If one occurs, they know the database has been compromised.
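The server-side check is almost trivially simple, which is part of the technique’s appeal. A sketch, with hypothetical dummy account names and a pluggable alert callback standing in for whatever paging or logging system a firm actually uses:

```python
# Dummy accounts seeded with deliberately weak passwords; a successful
# login on any of them signals the password database has been cracked.
HONEYPOT_USERS = {"test_account1", "jsmith_backup"}   # hypothetical names

def on_successful_login(username: str, alert=print) -> None:
    """Hook called after every successful authentication."""
    if username in HONEYPOT_USERS:
        alert(f"ALERT: honeypot account '{username}' logged in -- "
              "assume the password database is compromised")
```

Since no legitimate person ever uses these accounts, the alert has essentially no false positives – any trigger is a genuine signal of a breach in progress.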

In some sense the technique relies on the hackers using the easily cracked passwords first, something that is certainly not guaranteed (this commentator posits that if you know about this trick, then the bad guys do too).

However, a recent study co-sponsored by MIT and RSA takes the technique one step further, seeking to overcome this pitfall by attaching many dummy passwords to each and every account in the system, only one of which is the correct passcode. If any of the “honeywords” is used, it triggers the alarm. Given that the ratio of honeywords to real passwords is so high, it is quite likely that one will be used. The technique is not without its own challenges, but it certainly lays a much more treacherous path for a would-be assailant seeking access to a firm’s systems.
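A simplified sketch of the honeywords idea as we understand it from that study. One detail matters and is reflected here: the index of the real password is kept on a separate, hardened server (a “honeychecker” in the study’s terminology), so stealing the main password database alone tells the attacker nothing about which entry is genuine. All passwords below are made up:

```python
import secrets

def make_honeywords(real_password: str, decoys: list) -> tuple:
    """Shuffle the real password in among decoys; the returned index
    would be stored on a separate 'honeychecker' server, not alongside
    the password list itself."""
    words = decoys + [real_password]
    secrets.SystemRandom().shuffle(words)
    return words, words.index(real_password)

def check_password(submitted: str, words: list, real_index: int) -> str:
    if submitted not in words:
        return "reject"          # an ordinary wrong guess
    if words.index(submitted) == real_index:
        return "accept"          # the one genuine password
    return "alarm"               # a honeyword: the database has been cracked

words, idx = make_honeywords("c0rrect-h0rse", ["batt3ry", "stapl3r", "tr0ub4dor"])
check_password("c0rrect-h0rse", words, idx)   # → "accept"
```

An attacker who cracks the stolen list recovers all four candidates but must gamble on which is real – and a wrong pick raises the alarm rather than opening the door.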

Of course the rabbit hole of security runs deep and here we are merely scratching the surface. For us, though, the application of best practices and the continual scrutiny of our own procedures is a primary function of our business.

And while we accept that such navel-gazing at one’s own controls is a luxury many managers believe they cannot afford – whether they do the work themselves or outsource it – we contend that it is in fact the opposite: a necessity they cannot live without.