With Dave Palmer, General Partner at Ten Eleven Ventures & Co-Founder of DarkTrace
Key Takeaways
There are two major themes governing the cybersecurity market: a shortage of security talent, and phishing as the single largest threat vector. In a few different form factors, LLMs can start to fully close automation loops and solve the labor problem. Meanwhile, LLMs will amplify the scale and strength of phishing attacks, introducing a whole new set of problems. The opportunity set in both categories is immense.
Topics Covered
- Background on Dave & DarkTrace
- Dave's background in UK intelligence services
- The founding story of DarkTrace, one of the first and most successful cybersecurity companies built with an ML-first lens
- The core technical innovation was built on research at Cambridge around Bayes' theorem: updating your held set of beliefs in light of new evidence, a key concept in cybersecurity
- That ability to constantly revisit your suspicions about whether an attack is happening is incredibly powerful; a minimal sketch of that kind of update follows below
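To make the Bayesian idea concrete, here is a minimal sketch of updating a belief that an attack is underway as evidence arrives. The prior and likelihoods are invented numbers for illustration, not anything from DarkTrace's actual models.

```python
# Minimal Bayesian belief update: P(attack | evidence) via Bayes' rule.
# Illustrative only -- the priors and likelihoods below are invented numbers.

def update(prior: float, p_ev_attack: float, p_ev_benign: float) -> float:
    """Return posterior P(attack) after observing one piece of evidence."""
    numerator = p_ev_attack * prior
    return numerator / (numerator + p_ev_benign * (1.0 - prior))

belief = 0.001  # prior: a live attack is rare
# Each tuple: (P(evidence | attack), P(evidence | benign))
evidence = [
    (0.60, 0.05),  # device contacts a never-before-seen external host
    (0.70, 0.10),  # unusual volume of outbound data
    (0.50, 0.02),  # login at an abnormal hour for this user
]
for p_a, p_b in evidence:
    belief = update(belief, p_a, p_b)
    print(f"updated P(attack) = {belief:.4f}")
```

Each observation revises the suspicion up or down; benign-looking evidence would push it back down, which is exactly the "constantly revisit" property described above.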
- What exactly does the DarkTrace product do?
- DarkTrace looks at each person or device in an organization from a bottom-up perspective, developing a baseline picture of how they operate and then looking for anomalies
- The idea doesn't just apply to networks; it applies to any systems that may exist in an organization
- You can then reprogram the infrastructure in real time to stop an attack; a toy sketch of this per-entity baselining follows below
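As a toy illustration of the bottom-up idea above (learn a baseline per entity, flag sharp deviations), here is a deliberately simplified single-metric sketch; the device name and threshold are made up, and a real product models many signals jointly:

```python
# Toy per-entity baselining: learn mean/std of one behavioral metric per
# device, then flag observations that deviate sharply from that baseline.
from statistics import mean, stdev

class Baseline:
    def __init__(self) -> None:
        self.history: list[float] = []

    def observe(self, value: float, threshold: float = 3.0) -> bool:
        """Record a value; return True if it looks anomalous vs. history."""
        anomalous = False
        if len(self.history) >= 10:  # need enough history to judge
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > threshold
        self.history.append(value)
        return anomalous

baselines: dict[str, Baseline] = {}

def check(device: str, bytes_out: float) -> None:
    if baselines.setdefault(device, Baseline()).observe(bytes_out):
        print(f"{device}: anomalous outbound volume {bytes_out:.0f} -- contain")

for v in [100, 120, 95, 110, 105, 98, 115, 102, 108, 99, 101]:
    check("laptop-42", v)      # normal traffic builds the baseline
check("laptop-42", 5000)       # sudden spike triggers the flag
```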
- Where else have you seen ML be well-applied in cybersecurity?
- Cylance was a big early innovator, bringing these "AI innovations" into EDR (endpoint detection and response)
- Communications is an area where we're already very tolerant of a machine intervening on our day-to-day activities and behaviors
- The bad old years of spam did a lot to normalize this
- Abnormal, Tessian, Material, and DarkTrace have all been innovating here and have done great work
- The cutting edge now is knowing when an otherwise trustworthy account has been taken over
- LLMs as a challenge for CISOs
- ChatGPT is very effective at generating phishing content, and phishing is the primary way organizations get breached; has the world fully considered the impact of this?
- In the short term, LLMs are going to give cybersecurity a bumpy ride
- You already see things like WormGPT and PentestGPT emerging, which have no ethical controls built in
- We now have an issue of human-level plausibility and machine-level scale
- If that goes a step further into autonomous lateral movement, it becomes a lot harder to figure out problems... we can no longer rely on detecting "phone-home" activity (a toy beaconing-detection sketch follows this list)
- With the advent of AI agents, on-site GPUs almost become uranium; you want to prevent malicious agents from having access to those resources
- Even if LLMs are iteratively designing software that can run autonomously once dropped into an organization, that also becomes an interesting problem
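The "phone-home" point is concrete enough to sketch: classic command-and-control beacons call out on a regular timer, and that regularity is detectable. This toy detector (simplified; real malware adds jitter and real detectors use many more signals, which is part of why autonomous agents are worrying) flags destinations contacted at near-constant intervals:

```python
# Toy beaconing detector: flag destinations contacted at suspiciously
# regular intervals, a hallmark of "phone-home" C2 traffic.
from statistics import mean, pstdev

def looks_like_beacon(timestamps: list[float], max_jitter: float = 0.1) -> bool:
    """True if inter-arrival times are nearly constant (low relative spread)."""
    if len(timestamps) < 5:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and pstdev(gaps) / avg < max_jitter

beacon = [0, 60, 121, 180, 241, 300]   # implant calling home every ~60s
human = [0, 5, 45, 300, 320, 900]      # a person browsing irregularly
print(looks_like_beacon(beacon))  # True
print(looks_like_beacon(human))   # False
```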
- Biggest opportunities in cybersecurity today
- Fully closing the loop from detection to action
- Most critical is moving away from systems that merely tell human beings there's a problem (see the closed-loop sketch after this list)
- Application security is an interesting topic for this; people are using AI copilots to write software, potentially with vulnerable code
- There are a ton of startups emerging that can spot these vulnerabilities and make changes to the codebase (a toy vulnerability-spotting sketch follows below)
- If you think of changing code as part of copilot code generation, it actually works a lot better
- It's logical that "pre-secured" building blocks should be given to copilot-like LLMs
- You can take that a step further to also fix enterprise application sprawl; it solves the issue of "best practice discovery"
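As a tiny illustration of the kind of check these application-security tools automate, here is a toy single-pattern scanner; the snippet and pattern are invented for illustration, and real products cover far more vulnerability classes and can propose the fix as well:

```python
# Toy static check for one common copilot-era vulnerability: building SQL
# via f-strings (injection risk) instead of parameterized queries.
import ast

SNIPPET = '''
def get_user(cursor, name):
    cursor.execute(f"SELECT * FROM users WHERE name = '{name}'")   # vulnerable
    cursor.execute("SELECT * FROM users WHERE name = ?", (name,))  # safe
'''

class SqlInjectionFinder(ast.NodeVisitor):
    def visit_Call(self, node: ast.Call) -> None:
        is_execute = (isinstance(node.func, ast.Attribute)
                      and node.func.attr == "execute")
        if is_execute and node.args and isinstance(node.args[0], ast.JoinedStr):
            print(f"line {node.lineno}: f-string passed to execute() -- "
                  "use a parameterized query instead")
        self.generic_visit(node)

SqlInjectionFinder().visit(ast.parse(SNIPPET))
```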
- Cyber has lagged in most prior big technology shifts; it may be different this time around
- In vulnerability management, LLM-powered agents can start to replace humans as the links between various tools
- How do you get past the complaint of affecting production systems with remediations?
- 90% of the tech you have to worry about is non-production; it's corporate email, SharePoint, etc.
- Much of our limitation in the past was the inflexibility of the orchestration systems; it's a lot of pre-defined playbooks and a list of allowed systems to administer
- Is the false positive rate now low enough to not disrupt production systems?
- We are now quite tolerant; mobile phone connectivity can drop out, emails can disappear; we are all used to the idea that sometimes an intervention happens
- LLMs may also play a role as explainers of decisions
- Though it's risky because LLMs are so good at being plausible without being correct
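A minimal sketch of what "closing the loop" could look like in code, as opposed to a static playbook that pages a human for everything. The detections, actions, and policy thresholds here are all invented for illustration:

```python
# Toy closed-loop response: map a detection straight to a containment action,
# gated by a simple policy rather than a human review queue. All names and
# thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    entity: str        # e.g. "laptop-42"
    kind: str          # e.g. "beaconing", "account_takeover"
    confidence: float  # detector confidence, 0..1
    production: bool   # would acting here touch a production system?

def respond(d: Detection) -> str:
    if d.confidence >= 0.9 and not d.production:
        return f"AUTO: isolated {d.entity} ({d.kind})"
    if d.confidence >= 0.9:
        return f"AUTO: rate-limited {d.entity}, paged on-call ({d.kind})"
    return f"QUEUE: human review for {d.entity} ({d.kind})"

print(respond(Detection("laptop-42", "beaconing", 0.97, production=False)))
print(respond(Detection("payments-api", "beaconing", 0.95, production=True)))
print(respond(Detection("laptop-7", "account_takeover", 0.55, production=False)))
```

The policy encodes the point above: most of the estate is non-production, where automatic action is cheap, while production systems get gentler interventions.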
- Where else do you see opportunity for emerging vendors?
- Protecting AI systems
- Hugging Face already hosts malicious code that shouldn't be run; how do you scan for things like that? (a toy model-file scanning sketch follows this list)
- Bot detection, or more broadly authenticating the users interacting with your services, is another category
- Building products that are safe and accessible for LLMs... how do you do that?
- How do you deal with bot traffic now that much more of it may be user-driven?
- Human interaction via LLMs is going to necessitate a whole new concept of "acting on behalf of" and of identity broadly
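On scanning model artifacts: many checkpoint formats are pickle-based, and unpickling can execute arbitrary code, which is how a "model" can smuggle a payload. This toy scan of the pickle opcode stream (real scanners, including those model hubs run, are far more thorough) flags imports a model file has no business making:

```python
# Toy scan of a pickle-based model file for dangerous imports. Unpickling
# resolves GLOBAL/STACK_GLOBAL opcodes, so a malicious file can reference
# os.system; we inspect the opcode stream without ever unpickling.
import pickle
import pickletools

SUSPICIOUS = {"os", "subprocess", "builtins", "posix", "nt"}

def scan(data: bytes) -> list[str]:
    hits, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":          # protocols 0-1: arg is "module name"
            if str(arg).split()[0] in SUSPICIOUS:
                hits.append(str(arg))
        elif opcode.name == "STACK_GLOBAL":  # protocols 2+: names pushed as strings
            if len(strings) >= 2 and strings[-2] in SUSPICIOUS:
                hits.append(f"{strings[-2]} {strings[-1]}")
        elif isinstance(arg, str):
            strings.append(arg)
    return hits

class Evil:  # builds a malicious pickle in memory; never unpickle untrusted data
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

print(scan(pickle.dumps(Evil())))  # ['os system'] -- flag and refuse to load
```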
- How do you think about investing in security for ML when it's such a dynamic landscape?
- It's all about having smart people on your investment committee with whom you can form a vision for the future, and assessing how flexible that vision is so it can cope with inevitable change in the ecosystem