Monitoring the Baddies

In this post, I provide some insight into how I keep tabs on the bad actors hitting up the web applications I care about.

Introduction

When I kicked off my career in InfoSec, I had a basic understanding of the field and absolutely no data to work with. Go back a few posts and read up on that.

Importantly, at my firm, we had no knowledge of the daily HTTP requests to our web applications, good or bad. We were blind. If something horrible happened, we had to suffer it first, before being able to do anything about it, let alone prevent it happening the next time.

We had two developers with the monthly task of dragging IIS logs from our many and varied web servers into a central place, THEN parsing them through an analyser and THEN spewing out some intelligible information for us to make decisions on.

So, should we get hacked on January 1st, we'd run our analysis on February 1st, and a month on we'd finally be aware of the intrusion but able to do nothing about it, except maybe block the dodgy IP and / or fix the underlying problem.

Not great. We need to do better.

Getting eyes on the problem

Back in 2015, we suffered a rather nasty denial of service attack that took us out for a number of days. It was many-layered and protocol-indifferent: TCP SYN floods, UDP blasts at anything that would listen, and HTTP requests crazier than our web servers could stand up to.

Our network teams could see the bad traffic down on the wires, but we couldn't see the traffic up at the app layer. We had the data, but we weren't surfacing it anywhere useful.

Then came ELK. Elasticsearch, Logstash, Kibana. ELK.

We were already using the platform for quite specific purposes, such as application error or event monitoring and the like, but not for the wholesale monitoring of requests to our entire web estate, which would actually be really useful.

When we got hit in 2015, we quickly started shipping HTTP logs to our ELK stack, which showed us that we were getting reflected pingback traffic from compromised WordPress sites all over the world, as part of the overall attack. Knowing this, we were able to shut down that particular dimension of the attack while the network teams did their part elsewhere.
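For flavour, here's roughly what that shipping amounts to. This is a minimal sketch, not our actual pipeline (that job belongs to the ELK tooling itself); it assumes a local Elasticsearch node, an index called http-logs, and standard W3C-format IIS log files.

```python
# Minimal sketch: parse IIS W3C log lines and index them into Elasticsearch.
# Assumptions: a local Elasticsearch node, an index named "http-logs" and
# standard W3C-format IIS logs. Real shipping belongs in Logstash or similar.
import requests

ES_URL = "http://localhost:9200"   # assumption: local node, no auth
INDEX = "http-logs"                # assumption: index name

def parse_w3c(path):
    """Yield one dict per request, keyed by the columns in the #Fields: directive."""
    fields = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line.startswith("#Fields:"):
                fields = line.split()[1:]        # column names for subsequent lines
            elif line and not line.startswith("#"):
                yield dict(zip(fields, line.split()))

def ship(path):
    for doc in parse_w3c(path):
        # One document per request; Elasticsearch assigns the id.
        requests.post(f"{ES_URL}/{INDEX}/_doc", json=doc, timeout=10)

if __name__ == "__main__":
    ship("u_ex250101.log")   # hypothetical log file name
```

Point Kibana at the index and you have the beginnings of a dashboard.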

Before ELK, we were running blind.

Developing our use of the technology

After the 2015 event, we realised we needed to get a proper handle on things, so the mission was "ship all logs and look at them". That's precisely what we did.

We started with coarse data: entire rows simply displayed in graphs, charts and lists. Hard to interpret, and therefore harder to talk to people about.

Then, as our understanding of Kibana developed, the data was more easily refined, dashboards were easier to build and ultimately, we were able to create visualisations that could be consumed by anyone with an interest.

Such is the power of the platform.

We got to the point where anyone with an interest in anything relating to web application activity could come along and get served some really useful stuff. Requests, events, errors, you name it.

How does this help with security?

Well, if you can see all the traffic to your web applications, you can start to weed out the goodies from the baddies. If this is your job (or a part of it), you know the signs. Requests whose URI contains things like:

/cgi-bin
/scripts
/admin
/wp-admin
/setup
/phpmyadmin

By the way, we get these all the time, despite running none of that technology. Hackers aren't always that attentive.

Continuing with user agents such as:

OpenVAS
MassScan
Metasploit / MSF
Acunetix

And so on, because often hackers (or the drones they're utilising) don't obfuscate their user agent strings.
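To give a feel for it, the matching really can be this crude. The sketch below flags requests whose URI or user agent contains a known-bad marker; the field names follow the IIS W3C naming used above, and the marker lists are illustrative rather than exhaustive.

```python
# Minimal sketch: flag requests by suspicious URI paths or user agents.
# Field names (cs-uri-stem, cs(User-Agent), c-ip) assume IIS W3C logs;
# the marker lists are illustrative, not a real signature set.
SUSPICIOUS_PATHS = ("/cgi-bin", "/scripts", "/admin", "/wp-admin", "/setup", "/phpmyadmin")
SUSPICIOUS_AGENTS = ("openvas", "masscan", "metasploit", "msf", "acunetix")

def is_suspicious(doc):
    uri = doc.get("cs-uri-stem", "").lower()
    agent = doc.get("cs(User-Agent)", "").lower()
    return (any(path in uri for path in SUSPICIOUS_PATHS)
            or any(marker in agent for marker in SUSPICIOUS_AGENTS))

def count_offenders(docs):
    """Tally suspicious requests per client IP."""
    hits = {}
    for doc in docs:
        if is_suspicious(doc):
            ip = doc.get("c-ip", "unknown")
            hits[ip] = hits.get(ip, 0) + 1
    return hits
```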

You can hide / spoof certain things (your IP (and thus country), user agent and so on), but the resource you're requesting is as clear as day. And of course it's logged, then shipped and surfaced in a monitoring system. Once you've used that information to threat model, it can be automatically pumped into your intrusion prevention system, or WAF.
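As a sketch of what "automatically pumped in" might look like: take the offender tallies from the matching above and post anything over a threshold to the WAF's blocklist. The endpoint, token and payload here are hypothetical, because every WAF and IPS has its own API.

```python
# Minimal sketch: push repeat offenders into a WAF / IPS blocklist.
# The endpoint, token and payload shape are hypothetical placeholders.
import requests

WAF_API = "https://waf.example.internal/api/blocklist"   # hypothetical endpoint
API_TOKEN = "changeme"                                    # hypothetical credential

def push_blocklist(hits, threshold=10):
    """Block any client IP with more than `threshold` suspicious requests."""
    for ip, count in hits.items():
        if count > threshold:
            requests.post(
                WAF_API,
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                json={"ip": ip, "reason": "suspicious request pattern", "hits": count},
                timeout=10,
            )
```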

If you don't spoof anything, then the whole request gets passed through and you get blocked / banned / routed somewhere else, depending on a variety of conditions. You can read more about this in my post about WAFs.

Conclusion

It all starts with getting logging right. Where possible, log everything, then pare back what you don't need until you're left with what you do. Storage might be an issue, as might shipping. Once you get the balance right, you're in good shape.
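As an illustration of the paring back, something as simple as dropping static-asset requests and fields you never query can take the pressure off storage and shipping. What counts as noise is an assumption here; it depends entirely on your estate.

```python
# Minimal sketch of "log everything, then pare it back": drop static-asset
# noise and strip fields we never query before shipping. The exclusions are
# assumptions; what is safe to drop depends on the estate.
STATIC_SUFFIXES = (".css", ".js", ".png", ".jpg", ".woff2")
UNUSED_FIELDS = ("cs(Cookie)", "cs(Referer)")   # assumption: not needed downstream

def pare_back(doc):
    """Return a trimmed document, or None if the request isn't worth shipping."""
    if doc.get("cs-uri-stem", "").lower().endswith(STATIC_SUFFIXES):
        return None
    return {key: value for key, value in doc.items() if key not in UNUSED_FIELDS}
```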

Log requests, events and errors. As many behaviours as possible. Some will help application developers, some will help systems operators and some will help security analysts. They all count.

For a developer, it could help debug a problem with an API call not working, or a missing resource not serving somewhere, like a library or specific script. For an ops person, it might highlight a missing firewall rule, or knackered app pool. And finally, for a security bloke, it might display a spike in traffic from a nasty actor, trying their best to break into your stuff.

ELK isn't the only platform that satisfies these requirements, but having used it for several years, it gets my vote.
