Catbird Networks Director of Product Management Malcolm Reike talks with Network World Editor in Chief John Dix about how virtualization changes the security game.
Outline the security toolset you folks offer.
We provide a multifunctional, network-based security control suite that overlays the virtual infrastructure and gives you the ability to manage policy for firewall, IDS and IPS, and to do vulnerability scanning and configuration scanning with the Security Content Automation Protocol. We also have Layer 2 network membership monitoring that lets you see what physical things are directly connected to your virtual infrastructure. That allows you to figure out which devices you can’t necessarily block with a firewall.
And then we tie it all together by allowing you to select compliance frameworks like PCI and HIPAA, and as each policy is applied to an asset and each control is put in place, we continuously monitor how those controls and policies impact your compliance levels and report in real time in the native compliance framework language. We say stuff like, “You are now 2.5 out of 3 compliant with PCI 1.2.”
So we report our current security configuration, our current policies and our current controls in the language of the compliance framework, which allows the operator to very easily show a GRC team how they are contributing to compliance.
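To make the idea concrete, here is a minimal sketch, with hypothetical names and an invented scoring scheme (not Catbird’s actual implementation), of how deployed controls could roll up into a framework-native score like “2.5 out of 3 compliant with PCI 1.2”:

```python
# Illustrative only: map required controls to fractional coverage and
# report the result in the compliance framework's own language.
from dataclasses import dataclass


@dataclass
class Requirement:
    ref: str                        # e.g. "PCI 1.2"
    coverage: dict[str, float]      # control name -> fraction of assets covered (0.0-1.0)

    def score(self) -> float:
        # Each fully covered control contributes 1.0, partial coverage less.
        return sum(self.coverage.values())

    def total(self) -> int:
        return len(self.coverage)


# Hypothetical state: firewall and IDS fully applied, only half the assets scanned.
pci_1_2 = Requirement("PCI 1.2", {"firewall": 1.0, "ids": 1.0, "vuln_scan": 0.5})

print(f"You are now {pci_1_2.score():.1f} out of {pci_1_2.total()} "
      f"compliant with {pci_1_2.ref}")   # -> "You are now 2.5 out of 3 compliant with PCI 1.2"
```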
How many customers do you have?
We’ve got about 30 customers deployed right now.
Who do you compete with?
A lot of people are doing multifunction, and a lot of people are doing software-defined security or security for the software-defined data center. Others do event-based compliance measurement and monitoring, GRC-type stuff. But in terms of multifunctional security solutions, nobody has the body of controls we do and does compliance the way we do, and nobody who does compliance has the number of controls we have. So we’re kind of filling a niche at this point. Our biggest competitor is customers wanting to secure their virtual environments the old way, or to implement virtualized security controls that aren’t unified in a single interface.
You didn’t name VMware as a competitor, so I presume they are a partner?
We have been an alliance partner of theirs for quite some time. They’re implementing a methodology for vendors to deploy their security as services, which is similar to our approach. From our perspective this is good because we’ll be able to report on controls we have and, through emerging technologies on the NSX side, also extend our real-time continuous compliance monitoring to controls we don’t directly orchestrate. So for every control that integrates with VMware’s integration framework, we will have the capability to extend our orchestration. We’re very excited about those developments.
How is virtualization changing the game?
For one, roles are changing dramatically. IT organizations that used to have a whole group of data center system admins are now managing virtualized servers with a fraction of the people. Data center operations have been streamlined, largely due to the automation afforded to those who adopt software-defined or virtualized data center technology.
Deployment of a workload, configuration of the operating system, configuration of supporting technology like databases, deployment of a specific application stack -- all of those things have been automated by the ability to snapshot, freeze and template virtual machines, then deploy them onto virtualized instances of Intel hardware at the click of a button. It’s literally a do-once, execute-many approach to configuration in the data center.
At Catbird we have the same kind of concepts. We can construct an empty policy envelope that has all of the controls you would need: firewalls, scanning, Layer 2 access control, etc. And when the virtual infrastructure administrator goes click, click, click and deploys workloads, these policy envelopes are immediately applied. That means we are able to deploy more security, more ubiquitously, across virtualized or software-defined data centers than we ever could with the physical analog.
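A minimal sketch of the policy-envelope idea, with hypothetical names and a made-up orchestration hook rather than Catbird’s actual API: a TrustZone defined once carries the full control set, and every workload deployed into it inherits those controls automatically.

```python
# Illustrative only: an envelope of controls applied to every workload
# the virtual infrastructure admits into the zone.
from dataclasses import dataclass, field


@dataclass
class PolicyEnvelope:
    name: str
    controls: list[str]                        # e.g. firewall, IDS, vuln scan, L2 ACL
    members: list[str] = field(default_factory=list)

    def admit(self, vm_id: str) -> None:
        """Called when the virtualization layer reports a newly deployed workload."""
        self.members.append(vm_id)
        for control in self.controls:
            # A real system would orchestrate the actual control here;
            # the point is that every control applies on admission.
            print(f"applying {control} to {vm_id} in zone {self.name}")


web_zone = PolicyEnvelope("web-tier", ["firewall", "ids", "vuln_scan", "l2_acl"])

# The infrastructure admin clicks deploy; the envelope applies immediately.
for new_vm in ("vm-101", "vm-102", "vm-103"):
    web_zone.admit(new_vm)
```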
A lot of people think, “We were more secure with physical.” The simple fact of the matter is that we weren’t. We can deterministically apply security controls now, whereas before they were not deterministically applied. We did our best to architect the network to have choke points for these network-based security controls, and any traffic that went through those choke points was subject to the controls. But now we can apply the controls co-resident on the virtual switch and guarantee that any and all traffic entering a workload is subject to firewalling and IDS, that the workload is scanned on a deterministic schedule, etc.
We can then view, process and report on those results from a unified management console. Just as the virtual infrastructure has a unified console for deploying and managing workloads, we have a unified management console for deploying firewall, IDS, vulnerability scanning, configuration checking, etc.
Firewalls in particular are a management nightmare, given people load them with all sorts of rules that are never changed again for fear of interrupting a service. Is that problem exacerbated in the virtual world when you can easily create instances all over the place?
Yeah, that’s interesting. I’ve been thinking quite a lot about this. When you deploy a firewall co-resident with the virtual cable of the virtual machine, you can start looking at your firewall rule set in a way that’s much more local to the asset it’s protecting.
Let me give you an example. In Catbird we manage firewall rules, and we do it on a TrustZone basis. That’s our policy envelope methodology. So I say stuff like -- these five servers are in this TrustZone and they can only talk to this other TrustZone providing database services with database network traffic. And anybody can come into it with web, because it’s a web app.
What I’m able to do when I’m managing ACLs or network-based access control rules is forsake the context of every other zone or IP address and simply look at those rules that are impacting that zone. Now the zone is a container for many assets that might have multiple interfaces, etc., but I’m essentially getting an economy of scale when I abstract multiple IP addresses to this zone because I can look at the rule set within the context of just that zone.
And then when those zones are removed, or when the virtualized assets in those zones are decommissioned, I can easily see that and remove those rules. When we think about it this way, a firewall operator need not ever consider and manage 70,000 rules at a time. The system is intelligent enough to present only the subset that’s relevant to them.
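Here is a rough illustration, with invented names, of how zone-scoped rules let an operator see only the slice of the rule base that matters and spot rules to retire when a zone is decommissioned; it is a sketch of the concept, not the product’s rule engine.

```python
# Illustrative only: rules reference TrustZones rather than IP addresses,
# so the operator's view is always scoped to one zone.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    src_zone: str
    dst_zone: str
    service: str          # e.g. "https", "mysql"


rules = [
    Rule("any",      "web-tier", "https"),   # anybody can reach the web app
    Rule("web-tier", "db-tier",  "mysql"),   # web servers may query the database
    Rule("admin",    "db-tier",  "ssh"),
]


def rules_for_zone(zone: str) -> list[Rule]:
    """The subset of the rule base an operator needs to consider for one zone."""
    return [r for r in rules if zone in (r.src_zone, r.dst_zone)]


def decommission(zone: str) -> list[Rule]:
    """Rules left dangling when a zone's assets are removed -- candidates for cleanup."""
    return rules_for_zone(zone)


print(rules_for_zone("web-tier"))   # only the two relevant rules, not the whole rule base
print(decommission("db-tier"))      # rules that can be retired with the zone
```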
Does the DevOps movement -- the practice of merging the development and operations groups -- change the equation at all? Someone recently said DevOps is the last great hope for security professionals because it would let them bake in security early.
I’ve been to organizations that take security very seriously: federal, financial and high-tech companies that have the skills to protect their IP, etc. I’ve also been in organizations that know security is important but have never allocated anything more than technology expenditures to it. So it really runs the gamut.
But even in organizations that take security seriously, security is rarely baked in from the beginning. So I agree with the DevOps sentiment that security people should be involved from the beginning. It seems that someone is getting hacked every day now, and organizations learn how much security they need by living through those nightmares.
Good security is the ability to respond quickly when things go wrong. If you knew how they were going to attack and subvert your systems tomorrow, that would be the Holy Grail, wouldn’t it? Most security solutions are looking at yesterday’s hack.
That’s why I am a firm advocate of multifunctional, integrated security solutions that perform automation. With those tools I can analyze network traffic, manage firewalls, scan, look at the open ports and launch a configuration scan, all from the same unified interface. That gives me much better holistic, unified threat visibility than I could possibly get with five different consoles open on my desk. And that, by nature, means tool consolidation.
Can you get there by stitching together best-of-breed stuff?
I don’t think you can because, by their very nature, best-of-breed tools do one thing well. So each is a point solution, and I have to constantly translate between them instead of looking at a unified security solution that does what it’s supposed to do as soon as a virtual machine is spun up. Right? IP addresses are a really bad way of binding security information to a web server running the credit card app-15. It’s a really bad way to do it because it’s so abstract.
It’s like using your telephone address book backwards. “I want to call Bob. Oh, he’s in 831. Oh, his exchange is 478. OK. Oh, there’s Bob. Call Bob.” No. I call Bob. I don’t even know Bob’s number anymore. But when I’m doing network security, if I can’t bind the event to a specific virtual machine logically associated with an app through a policy container like a TrustZone, I don’t have a consolidated view. I’ve just got a bunch of events that I have to constantly correlate between.
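As a rough sketch of that binding idea (hypothetical names, not Catbird’s data model), an event keyed by IP address can be resolved to the VM, app and TrustZone it belongs to before anyone has to look at it:

```python
# Illustrative only: attach security events to assets, not addresses.
from dataclasses import dataclass


@dataclass
class Workload:
    vm_id: str       # stable identity from the virtual infrastructure
    app: str
    zone: str
    ip: str          # volatile attribute that may change at any time


inventory = [
    Workload("vm-25", "app-3", "web-tier", "10.0.1.15"),
    Workload("vm-31", "app-7", "db-tier",  "10.0.2.40"),
]


def bind_event(src_ip: str, signature: str) -> str:
    """Resolve an IDS event to the workload and zone it concerns."""
    for w in inventory:
        if w.ip == src_ip:
            return f"{signature} on {w.vm_id}/{w.app} (zone {w.zone})"
    return f"{signature} on unknown host {src_ip}"


print(bind_event("10.0.1.15", "SQL injection attempt"))
```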
But there have been so many efforts over the years to make all the security tools work better together. Why does your approach stand a better chance?
I think the linchpin that makes it possible today is the idea that, for the entire lifecycle of a workload, I can know through the virtual infrastructure the network-based attributes that are relevant to my security controls in a way I never could before. When I ping an IP address I can tell whether it’s up or down. If I want to verify that it’s still connected to the application it was connected to a week ago, that’s actually a harder problem. But when I’m looking at it from the virtual infrastructure I know that it’s VM-25 App-3 every time. And I know its IP address, which switch it’s on, that it’s protected by a firewall, and I know it was scanned. As a matter of fact, the scanner retargets itself when the IP address changes to make sure it scans the correct asset. You can’t do that with point solutions.
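A last small sketch, again with invented names, of what it means for the scanner to follow the workload rather than the address: the virtual infrastructure reports an IP change and the scan target is updated so the next scheduled scan still hits the same VM.

```python
# Illustrative only: scan targets keyed by stable VM identity, not IP.
scan_targets = {"vm-25": "10.0.1.15", "vm-31": "10.0.2.40"}   # vm_id -> current IP


def on_ip_changed(vm_id: str, new_ip: str) -> None:
    """Callback for an IP-change notification from the virtual infrastructure."""
    old_ip = scan_targets.get(vm_id)
    scan_targets[vm_id] = new_ip
    print(f"scanner retargeted {vm_id}: {old_ip} -> {new_ip}")


# The VM is re-addressed; the scheduled scan still follows vm-25.
on_ip_changed("vm-25", "10.0.5.77")
```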