Three years ago this week, WannaCry shook the world. The ransomware spread like nothing before it, infecting hundreds of thousands of computers around the globe within hours of execution. No target was off limits, and even the UK’s National Health Service (NHS) was pushed offline, causing nationwide chaos. Nobody wants to think about what such an attack would mean for the NHS in the current climate.

WannaCry worked by encrypting data on Windows machines and demanding a Bitcoin ransom. It spread through EternalBlue, an exploit of a vulnerability in Windows’ SMBv1 protocol that the National Security Agency (NSA) had developed and that a hacker group, The Shadow Brokers, leaked months prior to the attack. Microsoft had released patches to mitigate the vulnerability in good time, so good cyber hygiene would’ve gone a long way towards stopping the attack. Many organizations, however, failed to apply the patches or were running outdated systems, allowing WannaCry to move at the incredible speed it did.

So for a couple of days in May 2017, the world watched a new breed of monster gather seemingly unstoppable momentum. The internet was crumbling hour by hour, and the headlines made sure everybody knew it. But then something happened that nobody could have predicted. Marcus Hutchins, a young hacker working out of his parents’ house in rural England, discovered a kill switch in WannaCry’s code. The digital world was saved.

As with any true story, nothing matches the insight of those who were there – the people who lived the event in real time. We’re fortunate that many cyber pros call Immersive Labs home, so we found out where they were during history’s greatest cyberattack, and how they responded. This is WannaCry in their own words.

Phillip Neale, Information Security Officer

I remember that it was a sunny Friday in early May – why do these things always happen on a Friday? I was working for a large bank as part of its EMEA Global Information Security function in London, responsible for a regional team whose remit included vulnerability management and remediation.

As news of WannaCry broke and more information emerged, an incident was declared. In EMEA the majority of the team were supporting the US, providing regional updates, or fielding questions coming from the business. (And believe me, there were lots!)

I managed to get home on Friday night but dialed back in several times over the weekend. By Monday we were in a monitoring status rather than a full response status, thanks to the weekend efforts of the operations teams who had been on point since midday on Friday.

As more details emerged about the organizations that were impacted, I realized how lucky I was to work in a great team with strong leadership, and in a business that reacted when it needed to. The team’s response was nothing short of amazing.

Colette King, Cyber Crisis Expert

I remember the Friday afternoon of WannaCry clearly: I was sending the last emails of the week, looking forward to a few beers with friends, and preparing for a relaxing weekend. It had been a demanding week covering some of my manager’s duties while he was on holiday, so those future beers were well earned (and eagerly anticipated!). Then WannaCry reared its head and all plans were shelved – it was straight into full incident response mode. Thankfully, and this is credit to the technical teams, we were able to get a handle on the situation swiftly. The biggest challenge, in fact, was around comms: what to tell whom and when, and who needed to approve and send those messages.

A significant incident response lesson was learned that day: be sure you know who to contact and how if urgent companywide comms are needed – especially over a weekend. For that I have to say, thank you, WannaCry.

Dan Butcher, Principal Blue Team Content Engineer

I was leading a small team of security engineers responsible for maintaining our enterprise security information and event management (SIEM) solution. We were buried in deep thought and discussion about how best to parse different log formats, the types of enrichment we could perform, and the various correlation rules and dashboards we could build with the data from the many devices that were, or were soon to be, sending their logs to us. It certainly wasn’t a role that placed us in the firing line when a security event took place, but we were at the forefront of developing the company’s threat detection capability.

We sat across the room from one of the many SOCs dotted around the globe, and had a good working relationship with the team. They would often wander up to my desk and ask questions about event correlation, threat detection, or the enterprise environment. My team and I knew the SIEM and, by extension, the company’s networks like the back of our hands, so were happy to help them find the information they needed or answer any questions.

Although I knew the threat that WannaCry posed, I was shielded from the conference calls and frenzied, resource-intensive SIEM searches taking place in the SOC room just six feet from my desk. I could feel the tension building behind the glass-paned walls where my colleagues in security operations sat, and sensed the flurry of questions I’d soon be getting – probably before they had thought to come and ask.

Soon enough I had a visit from a member of the SOC: a kill switch domain had been discovered in WannaCry, and they needed to know which data sources to search so they wouldn’t have to trawl the billions of logs produced over the previous couple of days. A few short queries later I was able to show them that there were no signs of infection on our network, and to walk them through how to reproduce the steps I’d taken. Armed with the knowledge of how to monitor the situation going forward, the analyst thanked me, and we carried on with our day.
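For readers curious what a sweep like that can look like, here is a minimal Python sketch that scans exported DNS or proxy logs for lookups of the kill switch domain. The placeholder domain, file path, and log format (one JSON record per line with a "query" field) are assumptions for illustration only, not the actual queries or data sources involved.

    # Hypothetical sketch: sweep exported DNS/proxy logs for the WannaCry
    # kill switch domain. The domain below is a placeholder, and the log
    # format (one JSON object per line with a "query" field) is assumed.
    import json

    KILL_SWITCH_DOMAIN = "kill-switch-domain.example"  # placeholder; substitute the real domain

    def find_kill_switch_lookups(log_path):
        """Return any log entries that queried the kill switch domain."""
        hits = []
        with open(log_path) as log_file:
            for line in log_file:
                try:
                    entry = json.loads(line)
                except json.JSONDecodeError:
                    continue  # skip malformed lines rather than abort the sweep
                if isinstance(entry, dict) and KILL_SWITCH_DOMAIN in entry.get("query", ""):
                    hits.append(entry)
        return hits

    if __name__ == "__main__":
        matches = find_kill_switch_lookups("dns_logs.jsonl")  # hypothetical export
        print(f"{len(matches)} lookups of the kill switch domain found")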

Kev Breen, Director of Cyber Threat Research

The 12th of May 2017 isn’t a day I’ll forget any time soon. It was the day that Alien: Covenant came out, and as Sci-Fi films go it was… oh wait, something else happened that day too.

I was working as a malware analyst at a defense contractor. It was a fairly standard day to begin with; the SOC were doing SOC things and we were examining some phishing malware that had arrived. We had the threat feeds open, and I started to see reports of widespread ransomware infections. This was right up my alley, so I set about getting hold of a sample that I could hand to the SOC for log detections, the firewall/IPS teams for blocking, and the CIRT teams for threat hunting.

Everything was going well. As is typical in this kind of situation, the first thing we did was grab domains from any reports, add them to the SIEM for logging, and block them at the gateway. That way, if something beaconed out, it might not be able to pull down second stages.

We then started to see reports that one of those domains was actually a kill switch, which meant blocking it would do more harm than good. The focus shifted to getting any active blocking pulled back to alerting only. Fortunately we were never infected, but, oddly, our fast response could have hurt us!
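As a rough illustration of that indicator-handling step – pulling domains out of a report and writing them to a list that a SIEM or gateway can consume – here is a minimal Python sketch. The regex, file names, and sample text are assumptions for illustration, not the actual tooling involved.

    # Hypothetical sketch: extract domain indicators from free-text report
    # content and write them to a one-domain-per-line watchlist. The regex,
    # sample report text, and output file name are illustrative only.
    import re

    DOMAIN_PATTERN = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE)

    def extract_domains(report_text):
        """Collect candidate domain indicators from report text."""
        return set(DOMAIN_PATTERN.findall(report_text))

    def write_watchlist(domains, path):
        """Write one domain per line, ready to load as a SIEM lookup or gateway blocklist."""
        with open(path, "w") as out:
            for domain in sorted(domains):
                out.write(domain + "\n")

    if __name__ == "__main__":
        sample_report = "Observed beacons to malicious-example.com and evil.example.net."
        write_watchlist(extract_domains(sample_report), "domain_watchlist.txt")

A list built this way is safest loaded as an alert-only watchlist first and promoted to a block rule only once each domain is understood – exactly the lesson the kill switch taught.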

Immersive Labs was in its infancy during the WannaCry attack, amounting to just a handful of people in a shipping container. But those people were as dedicated then as we are now, creating a lab on WannaCry within 24 hours of it emerging. That iconic lab has had a few facelifts since then, but it’s still one of the most popular on our platform. And the good news is, you can check it out for free – just click the button below to get started.

Published: May 15, 2020
Written by: Immersive Labs