

Imagine this scenario: You’re running a security operations center, and your team is processing a bunch of alerts coming out of Splunk Enterprise Security. As you add new detections and log sources, the volume of alerts begins to rise, your analysts start to get overwhelmed, and that familiar alert fatigue starts to kick in. Your security engineering team jumps in with yet another cycle of tuning to get the alert volumes back down to manageable levels.

Hurricane Labs lives this reality every day as a managed services provider that is 100% focused on Splunk and Phantom. The company manages these platforms for customers, providing 24x7 monitoring services supported by a team of Splunk ninjas to handle the heavy lifting. Recently the Hurricane team found a way to leverage GreyNoise Intelligence data to identify the noise in Splunk and Phantom alert traffic, reducing the load on their analysts by 25%. We had the good fortune to chat with Hurricane Labs Director of Managed Services, Steve McMaster, to learn how the company had done it.

GN: Perhaps we could start off with a bit of your background - how did you get started with Hurricane Labs?

STEVE MCMASTER: Sure - to set the stage, we manage Splunk and Phantom deployments for our customers, running the gamut from infrastructure management, to search and SPL creation, to security analysis and alerting, to rapid incident response. As part of these services, we provide 24x7 security monitoring, and I oversee the two teams that deliver this service. I’ve been with Hurricane for almost 14 years at this point - it's actually the only job I've ever had. I started here at age 17, right out of high school, and along the way, I've done a little bit of everything.

GN: What kind of challenges were you and your teams at Hurricane facing around managing security alerts?

STEVE MCMASTER: We process a high volume of alerts on behalf of our customers, and as part of this traffic, we often see a lot of noisy alerts. This includes everything from known benign scanners (such as Shodan) to what we call "internet background noise," scanners that are constantly scanning the internet as a whole, looking for things to poke at. The alert volumes we see are a constant ebb and flow, and we spend a lot of time tuning our customers' environments to turn down the noise to a manageable level. But then, as we add new customers and more detections, the alert volumes start to creep back up. That's when we see our analysts getting overwhelmed and feeling alert fatigue.
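To make the problem concrete, here is a minimal sketch (not Hurricane's actual implementation) of the kind of check involved: asking GreyNoise whether an alert's source IP is known internet background noise before an analyst ever sees it. The endpoint URL, the `key` header, and the `noise` response field follow GreyNoise's documented v2 quick-check API as we understand it; the helper name, alert structure, and example IPs are purely illustrative.

```python
import requests

GREYNOISE_API_KEY = "YOUR_API_KEY"  # illustrative placeholder

def is_internet_background_noise(ip: str) -> bool:
    """Return True if GreyNoise has observed this IP scanning the internet at large."""
    resp = requests.get(
        f"https://api.greynoise.io/v2/noise/quick/{ip}",
        headers={"key": GREYNOISE_API_KEY},
        timeout=5,
    )
    resp.raise_for_status()
    # The quick-check response includes a boolean "noise" field indicating
    # whether the IP has been seen mass-scanning the internet.
    return resp.json().get("noise", False)

# Illustrative triage loop: drop alerts whose source IP is known background noise
alerts = [
    {"src_ip": "198.51.100.23", "rule": "SSH brute force"},
    {"src_ip": "203.0.113.7", "rule": "Port scan detected"},
]
actionable = [a for a in alerts if not is_internet_background_noise(a["src_ip"])]
```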
