
"Do robots dream of secure computing? Exploring cybersecurity for AI systems"

  • Nov 6, 2025
  • 2 min read

Background


  • In the late 1960s, science fiction author Philip K. Dick explored the traits that distinguish humans from autonomous robots in his novel "Do Androids Dream of Electric Sheep?"

  • As advances in generative AI allow us to create autonomous agents that can reason and act on humans' behalf, we must consider the human traits and knowledge these agentic AI systems need in order to act autonomously, reasonably, and safely.

  • One crucial skill we need to impart to our AI agents is the ability to stay safe when navigating the internet.


Equipping AI Agents with Cybersecurity Knowledge


  • If agentic AI systems are interacting with websites and APIs like a human internet user, they need to be aware that not all websites or public APIs are trustworthy, nor is user-supplied input.

  • Therefore, we must empower our AI agents with the ability to make appropriate cyber hygiene decisions, as it will be the autonomous agent's responsibility to decide if it is safe and appropriate to "click the link."

  • The threat landscape is constantly shifting, so there are no hard and fast rules we can teach AI systems about what is a safe link and what is not.

  • AI agents must verify the disposition of links in real time to determine whether something is malicious.


Leveraging Threat Intelligence APIs


  • Emerging approaches to building AI workflow systems can integrate multiple sources of information, allowing an AI agent to reach a decision about an appropriate course of action.

  • In this example, we use the LangChain framework with OpenAI to enable an AI agent to access real-time threat intelligence via the Cisco Umbrella API.

  • The key functionality is the "getDomainDisposition" function, which queries the Umbrella API for the domain's disposition and categorization information.
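A minimal sketch of such a lookup, assuming the Umbrella Investigate domain categorization endpoint and an API key supplied via the `UMBRELLA_API_KEY` environment variable; the function name `get_domain_disposition` mirrors the post's `getDomainDisposition`, and the status-code mapping (-1 malicious, 0 undetermined, 1 benign) follows the Investigate API's convention. This is not the original post's code.

```python
import json
import os
import urllib.request

# Map Umbrella Investigate status codes to human-readable dispositions
# (-1 = malicious, 0 = undetermined, 1 = benign).
DISPOSITIONS = {-1: "malicious", 0: "undetermined", 1: "benign"}


def interpret_status(status: int) -> str:
    """Translate a numeric status code into a disposition label."""
    return DISPOSITIONS.get(status, "unknown")


def get_domain_disposition(domain: str) -> dict:
    """Query the Umbrella Investigate API for a domain's disposition and
    content categories. Requires UMBRELLA_API_KEY in the environment."""
    url = (
        "https://investigate.api.umbrella.com/domains/categorization/"
        f"{domain}?showLabels"
    )
    request = urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {os.environ['UMBRELLA_API_KEY']}"},
    )
    with urllib.request.urlopen(request) as response:
        payload = json.load(response)[domain]
    return {
        "domain": domain,
        "disposition": interpret_status(payload["status"]),
        "categories": payload.get("content_categories", []),
    }
```

An agent framework such as LangChain can expose a function like this as a tool (for example, via its `@tool` decorator), letting the model invoke the lookup before deciding whether to follow a link.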


Proof of Concept


  • When provided with a known safe domain (www.cisco.com), the system identifies that the domain has a positive disposition and is classified as safe.

  • When provided with a known malicious domain, the system identifies that the domain has a negative disposition and concludes that it is not safe for connection.

  • This demonstrates how AI agents can be equipped with the ability to make informed decisions about internet safety, identifying trustworthy links and websites based on real-time threat intelligence.
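The decision step above can be sketched as a simple policy on top of the disposition lookup. `is_safe_to_connect` and `decide` are hypothetical helpers, not code from the original post; the policy fails closed, refusing anything other than a clean verdict.

```python
def is_safe_to_connect(disposition: str) -> bool:
    """Fail closed: only a known-benign disposition is treated as safe.
    Undetermined or unknown domains are refused along with malicious ones."""
    return disposition == "benign"


def decide(domain: str, disposition: str) -> str:
    """Produce the agent's verdict for a domain, in the spirit of the PoC."""
    if is_safe_to_connect(disposition):
        return f"{domain}: positive disposition, safe to connect."
    return f"{domain}: not safe for connection (disposition: {disposition})."
```

Failing closed matters here: because the threat landscape shifts constantly, an undetermined verdict should block the connection just as a malicious one does.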


Conclusion


  • Equipping autonomous AI agents with cybersecurity knowledge is crucial for the next generation of agentic AI systems.

  • By learning to assess the safety of domains, AI agents can develop better cyber hygiene, making more intelligent decisions rather than simply being restricted by security gateways.

  • This proof of concept shows how AI agents can leverage threat intelligence APIs to make informed decisions about internet safety, a key skill for autonomous systems navigating the digital world.


Sources


  • https://blog.talosintelligence.com/do-robots-dream-of-secure-networking/

  • https://www.instagram.com/reel/DQnMIc7j9IC/

© 2025 by Explain IT Again.