After a week of dealing with attacks on Microsoft Exchange servers, I got in my car, turned on the radio, and heard that classic song by the Talking Heads.
David Byrne says he wasn’t really singing about a house on fire but instead about breaking free from whatever was holding you back. And I thought: there’s a lesson there.
How many of you are congratulating yourselves for being on O365 (or Google) and no longer hosting Exchange on premises? How many of you are cursing the fact that you still have Exchange in-house? And how many of you – up until this past week – were applauding your staff for being current on all your security patches?
As I and my peers struggled to quickly mitigate this critical vulnerability – because let’s face it, most of us use Microsoft for email, and many of us aren’t in “the cloud” yet for one reason or another (often a financial one) – I asked myself, “How can I help my organization recognize how our infrastructure choices and processes impact our security footprint?”
One of the more common metrics we use measures the effectiveness of our patching programs. If you’re like me, you have policies that state we patch within 30 days of a release (or 60, or 90), and you use tools like Rapid7 or Tenable to assess compliance with those policies.
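As a concrete illustration of that kind of metric, here is a minimal sketch of a patch-compliance calculation. The function name, SLA value, and sample dates are all hypothetical, not taken from Rapid7, Tenable, or any specific policy:

```python
from datetime import date

def compliance_rate(patch_records, sla_days=30):
    """Return the fraction of patches applied within sla_days of vendor release.

    patch_records is a list of (release_date, patched_date) pairs.
    All names and values here are illustrative, not from any real tool.
    """
    on_time = sum(
        1 for released, patched in patch_records
        if (patched - released).days <= sla_days
    )
    return on_time / len(patch_records)

# Hypothetical sample data: three patches, one of which missed the 30-day SLA.
records = [
    (date(2021, 2, 9), date(2021, 2, 20)),  # applied in 11 days
    (date(2021, 2, 9), date(2021, 3, 25)),  # applied in 44 days, misses SLA
    (date(2021, 3, 2), date(2021, 3, 5)),   # applied in 3 days
]
print(compliance_rate(records))  # 2 of 3 patches on time
```

A number like this is exactly what ends up on a dashboard, which is the point of the paragraphs that follow: the metric can be green while the real risk sits somewhere the metric never looks.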
What this current Exchange vulnerability told us, glaringly, is that patching and low vulnerability scores aren’t necessarily the best indication of our risk profile or exposure. I realize that there will always be zero-day vulnerabilities, and we can’t just unplug every device from the network to keep our organizations safe. But are we becoming far too reliant on the numbers that come from tools? Even if all your servers were current on the latest Microsoft patches, if you had on-premises Exchange servers you were still exposed – and you scrambled. If you’re in a large organization, you had potentially a dozen or more Exchange servers to patch, lots of staff pulled in to do the work, lots of end users impacted by email outages over a day or two, and many days’ worth of after-patch analysis to see if a threat actor had compromised you before you patched. I know of one healthcare organization that had to delay a go-live last weekend because its technical staff was tied up patching and reviewing the Exchange environment.
The house was burning.
Am I saying everyone should move Exchange to the cloud? Sort of. What I’m really asking is, “When was the last time you stopped looking at the numbers, looked at where your biggest risks were, and asked whether there were better ways to reduce those risks?” We all know email is one of the major targets of threat actors. I haven’t run into a single healthcare organization with a full complement of staff dedicated to doing nothing but maintaining, watching, and managing its email servers. This job is usually folded into other tasks assigned to often under-resourced, or under-skilled, infrastructure staff. As a security professional, have you had a conversation with your leadership about the risk of being in the business of operating and managing a commodity that isn’t a core business differentiator but is a primary source of security risk? And Exchange isn’t the only thing that falls into this arena. What about your public-facing web servers? Are they still hosted on-prem? Again, another big target of attack.
Maybe it’s time we listen to the music and look at how our organizations are designed and architected. Maybe it’s time to walk down the hall to the C-suite, slam our fists on the desk, and declare that it’s high time to consider “outsourcing” or “cloud hosting” high-risk security targets to the experts who build and manage these resources for a living.
Talking Heads say “Burning Down the House” is a metaphor for destroying something safe that entraps you. Should we start with Exchange?
Be safe. Be secure.