Key Takeaways
Amol takes us through CrowdStrike's journey from $6 million in ARR in 2014 to $2.5 billion today. Over that time, we talk through how threats have changed, how buyers' mindsets have changed, and where the opportunities are for new startups today in the face of consolidation. And as it relates to AI, we speak about CISOs' concerns related to ChatGPT, the AI-driven attacks we're already seeing, and where LLMs are most likely to play a role in security products.
Topics Covered
- Amol's journey to joining CrowdStrike at just $6m in ARR, a few years after its founding, despite not having come from cybersecurity before
- It was very attractive seeing the investment CrowdStrike was making in its core platform, with a vision to be more than just a single product
- The other key reason I joined was the focus on stopping breaches; at that time, the industry was not really successful at that
- What were the core ideas around which CrowdStrike was founded?
- First was delivering cyber from the cloud; that was a huge bet back in late 2011; cloud was not mainstream, let alone people putting their workload telemetry into the cloud / a SaaS service
- Second was that all incumbents were signature-based, focused on anti-virus, which by definition is a reactive model; you can't build signatures for an attack that hasn't happened yet
- CrowdStrike's approach was to focus on the underlying behaviors of the software or underlying operating system
- This connects to the sub-bet around machine learning & AI when AI was not really mainstream in cybersecurity
- Third was this idea of "visibility-first"; when you want to secure your house, you need to put cameras around it to see what's going on; this is where the concept of EDR (endpoint detection and response) was born
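To make the contrast with signature-based products concrete, here is a minimal, purely illustrative Python sketch of how a behavior-based check differs from matching against known signatures. This is not CrowdStrike's actual logic; the event fields, process names, and rules are all assumptions for illustration.

```python
# Illustrative only: a toy contrast between signature matching and a
# behavior-based check. Event fields and rules are hypothetical.

KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # reactive: needs a prior sample

def signature_check(file_hash: str) -> bool:
    """Signature model: can only catch what has already been seen."""
    return file_hash in KNOWN_BAD_HASHES

def behavior_check(process_events: list[dict]) -> bool:
    """Behavior model: flag a suspicious chain regardless of file hash,
    e.g. an Office app spawning a shell plus a read of credential memory."""
    office_spawned_shell = any(
        e["type"] == "process_create"
        and e.get("parent") in {"winword.exe", "excel.exe"}
        and e.get("child") in {"powershell.exe", "cmd.exe"}
        for e in process_events
    )
    touched_credentials = any(
        e["type"] == "file_read" and "lsass" in e.get("target", "").lower()
        for e in process_events
    )
    return office_spawned_shell and touched_credentials
```

The point of the sketch is the shape of the decision: the behavior check never consults a list of previously seen attacks, which is why it can fire on something brand new.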
- What value did the cloud-only model bring to customers?
- On-prem deployments create silos; you don't get the benefit of "the crowd" / "community immunity"; this is in addition to the typical cloud benefits of ease of deployment and administration
- Fast forward to 2023, CrowdStrike is a $35bn public company, what does the business look like today?
- Focused today on the core of endpoint security, but has expanded beyond there into cloud security, which is expanding rapidly as a problem space
- The second very big bet was focusing on identity security; the majority of attacks today originate with account takeovers, account compromise, etc.
- The third major area is around the periphery, bridging the gap between IT and security operations, with a focus on proactive security; this includes functionality like very efficient web-scale log management
- What were the governing rules of cyber when you joined CrowdStrike, and how do those compare to today?
- When I joined, there wasn't an awareness of what's needed to block or stop breaches; there was a void and that's why CrowdStrike was successful
- There wasn't a true platform; the legacy vendors McAfee and Symantec had suites, but not an integrated single platform
- Attackers back then could be broken down into specific categories
- Nation state actors were doing sophisticated attacks; everything else was pretty unsophisticated
- Today, these silos have disappeared completely; the advanced techniques that were available to only nation state actors are now democratized
- The second big piece is "breakout time", the time it takes an attacker to move from one machine to elsewhere in an organization; breakout time has trended down every year since 2014; this changes the game quite a bit
- And now the attackers are of course attacking election systems, and critical infrastructure, which was less common before
- Why do small groups now have access to more advanced attacks?
- What is driving down the breakout time?
- It's the ability for attackers to do much better recon and to very quickly overcome any limits defenders have put in place
- How are LLMs / AI broadly starting to play a role here?
- With LLMs now available, training models is far less of a challenge; we're already seeing LLM-driven attacks on the rise, and they will continue to grow
- We've already seen it specifically in phishing; it's now very hard to determine what's phishing and what isn't
- To what degree is CISO concern valid with tools like ChatGPT?
- The developer side is where people are most sensitive; you want to be careful with what generated code you're running
- On the corporate data side, there is risk in sending data over, but it remains to be seen how the big cloud providers adapt to that; they need to reassure people that data is not being used for training or being stored
- Will this be a positive tailwind for data security?
- Data security is the ultimate goal; if you're able to secure the data at scale, the rest is done
- There is definitely space for a solution that tackles data security in a consistent way, is not dependent on labeling of the data, and understands rules like "no data should go to X location"
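As an illustration of the kind of label-independent rule described above, here is a hedged Python sketch. The destinations, patterns, and function names are hypothetical and not any specific vendor's product; the idea is only that the content itself, not a label, drives the decision.

```python
# Hypothetical sketch of a label-independent egress rule: block anything that
# looks like sensitive data heading to a disallowed destination.
import re

BLOCKED_DESTINATIONS = {"chat.openai.com", "pastebin.com"}   # i.e. "no data should go to X"
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # US SSN-like number
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),    # private key material
]

def allow_egress(payload: str, destination: str) -> bool:
    """Decide from the content itself (not labels) whether the transfer is allowed."""
    looks_sensitive = any(p.search(payload) for p in SENSITIVE_PATTERNS)
    return not (looks_sensitive and destination in BLOCKED_DESTINATIONS)

# Example: unlabeled key material bound for a blocked destination is denied.
print(allow_egress("-----BEGIN RSA PRIVATE KEY-----\n...", "pastebin.com"))  # False
```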
- What do you think about the products / broader market around securing AI products themselves?
- The approach will largely be to put some guardrails in place, ensuring both inputs and outputs conform to what you want
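A minimal sketch of what input and output guardrails around an AI product could look like in practice. The injection markers, secret pattern, and `call_model` hook are assumptions for illustration, not a specific framework or API.

```python
# Hypothetical guardrail wrapper: validate the prompt going in and the
# completion coming out; call_model stands in for whatever LLM API is used.
import re

INJECTION_MARKERS = ["ignore previous instructions", "reveal the system prompt"]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def guarded_completion(prompt: str, call_model) -> str:
    # Input guardrail: refuse obvious injection attempts and outbound secrets.
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS) or SECRET_PATTERN.search(prompt):
        return "Request blocked by input guardrail."

    output = call_model(prompt)

    # Output guardrail: check the response conforms to policy before returning it.
    if SECRET_PATTERN.search(output):
        return "Response withheld by output guardrail."
    return output

# Example usage with a dummy model.
print(guarded_completion("Summarize this policy doc.", call_model=lambda p: "A short summary."))
```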
- Broadly in cyber, what governs people's buying decisions today?
- The other big issue is the volume of alerts & alert fatigue; there's a lot of inefficiency & complexity in the work that SOCs have to do
- The other piece is the lack of skills; the workforce is just not there compared to the needs
- AI & LLMs will be able to help, and you're seeing products move in this direction, but it's also about the basics of investing in training & education to build up that workforce
- It's very early days, but it would be a much more level playing field if humans didn't need to spend their time dealing with automated attacks, and could concentrate on "hands on keyboard" activity from attackers
- To what degree is this just a data problem? Do the AI advancements really matter, to take a cynical view?
- Data being in one place is critical; first and foremost this is a data problem
- Most important to keep in mind is that products should not be myopically focused on generative AI; defenders should pick the best tool for each given situation; it's not always going to be generative AI
- What are the specific functions in security that you think LLMs will solve well?
- Once a SOC gets an alert, they have to dig into what happened, what machine is being targeted, what user is being targeted, what technique is being used; you need to look at the periphery of the attack and ask "is this a true positive or false positive"
- That whole triage & investigation step requires collecting data from a variety of different places, and reducing the manual burden here is a slam dunk; we're already seeing that with a lot of products out there
- The second part is natural language querying and unlocking the intel or activity data; that is huge because those datasets are larger in scale and are not as easily queryable
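To illustrate the triage-and-investigation step described above, here is a hypothetical Python sketch that gathers context about an alert from a few stubbed data sources and frames the true-positive/false-positive question for an analyst (or an LLM). All field names and lookup tables are invented for illustration and do not reflect any particular product.

```python
# Hypothetical triage-enrichment sketch: collect context about an alert from a
# few stubbed data sources so an analyst (or an LLM) can judge true vs. false positive.

def enrich_alert(alert: dict, asset_db: dict, identity_db: dict, intel_db: dict) -> dict:
    host = asset_db.get(alert["hostname"], {})            # what machine is being targeted
    user = identity_db.get(alert["username"], {})         # what user is being targeted
    technique = intel_db.get(alert["technique_id"], {})   # what technique is being used
    return {
        "alert": alert["title"],
        "host_criticality": host.get("criticality", "unknown"),
        "user_privilege": user.get("privilege", "unknown"),
        "technique_name": technique.get("name", "unknown"),
        "question": "Given this context, is the alert a true positive or a false positive?",
    }

# Example usage with toy lookup tables standing in for real telemetry stores.
context = enrich_alert(
    {"title": "Credential dumping detected", "hostname": "hr-laptop-7",
     "username": "jsmith", "technique_id": "T1003"},
    asset_db={"hr-laptop-7": {"criticality": "high"}},
    identity_db={"jsmith": {"privilege": "standard"}},
    intel_db={"T1003": {"name": "OS Credential Dumping"}},
)
print(context)
```

The manual version of this step is exactly the fan-out of queries the code stubs out, which is why it is such an obvious place for automation and for natural language interfaces over the underlying data.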
- What has already been deployed today? Specific products?
- CrowdStrike's Charlotte AI, Microsoft/Google offering products to help people query data lakes
- We are in a wave of consolidation, but are there opportunities for new vendors to still succeed? Where are those distinctly non-platform opportunities?
- For the first time in cybersecurity, we now have a few true platforms, which is definitely a barrier
- Though with attacks constantly evolving and increasing in volume, sometimes exponentially, there will always be opportunity for startups to help deal with this
- With cybersecurity, new classes of attacks are discovered practically every year; you can use that as a niche to start and then grow into other areas from there
- With these true platforms now in place, is it important to have integrations when starting new companies here?
- If you were going to start a new company today, where would that be?
- Lots of different areas, but proactive security is near and dear to my heart; we've come a long way on runtime security, but we're still a long way from reducing the attack surface in an automated way
- All of the work that's going on around exposure management is great to see, but there's still a lot of open space there