The effectiveness of large language models is directly tied to the quality of the prompts they are given. In December 2022, the term prompt engineering gained popularity as a way of describing strategies for crafting prompts that produce high-quality, relevant LLM outputs. Now, in 2025, the term context engineering has gained popularity alongside the rise of LLM agents and agentic systems.
To understand context engineering, we must first understand what an agent is.
If you’ve never heard of XBOW before, you will. XBOW is a platform that leverages a multi-agent system to perform automated pentesting. The team behind it is on a mission to build a fully autonomous system that finds and verifies vulnerabilities, allowing security teams to focus on problems that require a human touch.
XBOW made huge waves in June 2025 when it claimed the number one spot on the US HackerOne bug bounty leaderboard.
In this post, we’ll explore a machine learning framework designed to combine the attack flow of the Lockheed Martin Cyber Kill Chain with the MITRE ATT&CK dataset, enabling better contextualization, prediction, and defense against adversarial behavior.
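To make the core idea concrete, here is a deliberately simplified sketch of mapping kill-chain phases to candidate ATT&CK techniques. This is only an illustration of the mapping concept; the paper's actual framework uses trained ML models, and the `PHASE_TO_TECHNIQUES` table below is a hypothetical example with a handful of real ATT&CK technique IDs, not the paper's data.

```python
# Illustrative only: a static lookup from Cyber Kill Chain phases to
# example MITRE ATT&CK technique IDs. KillChainGraph predicts these
# mappings with per-phase ML models; this sketch just shows the idea.
PHASE_TO_TECHNIQUES = {
    "Reconnaissance": ["T1595"],        # Active Scanning
    "Delivery": ["T1566"],              # Phishing
    "Exploitation": ["T1203"],          # Exploitation for Client Execution
    "Installation": ["T1547"],          # Boot or Logon Autostart Execution
    "Command and Control": ["T1071"],   # Application Layer Protocol
    "Actions on Objectives": ["T1041"], # Exfiltration Over C2 Channel
}

def candidate_techniques(phase: str) -> list[str]:
    """Return example ATT&CK technique IDs associated with a kill-chain phase."""
    return PHASE_TO_TECHNIQUES.get(phase, [])

print(candidate_techniques("Delivery"))
```

Framing detection this way lets a defender reason about an attack as a sequence of phases, each with a constrained set of likely techniques, rather than a flat list of alerts.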
This post is a summary of research conducted by Chitraksh Singh, Monisha Dhanraj, and Ken Huang in their paper “KillChainGraph: ML Framework for Predicting and Mapping ATT&CK Techniques”.
You can read their full paper on arXiv here: https://arxiv.
In this post I want to share some insights gleaned from an enlightening conversation between Scott Clinton, Jason Ross, Akram Sheriff, Ophir Dror, and Or Oxenberg regarding the security implications of agentic systems that leverage Model Context Protocol (MCP).
This post is a summary of their conversation; you can find the entire webinar, hosted by the OWASP GenAI Security Project, here.
What is Agentic? What role does MCP play?
Today we are exploring secure Retrieval-Augmented Generation (RAG) systems to create a context-aware chatbot.
With all the hype around GenAI, I thought it would be fun to explore a little bit of what RAG has to offer. At a high level, RAG systems allow users to leverage the full power of large language models while giving the model awareness of personal context. LLMs don’t know everything out of the box; they only know about information they were trained on.
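The RAG flow described above can be sketched in a few lines: retrieve the passages most relevant to the user's question, then prepend them to the prompt so the model answers from that context. This is a minimal, self-contained sketch assuming a toy in-memory corpus and bag-of-words cosine similarity as a stand-in for a real embedding model and vector store; the corpus and function names are hypothetical.

```python
# Minimal RAG sketch: keyword-overlap retrieval plus prompt assembly.
# A real system would use embeddings and a vector database instead.
import math
import re
from collections import Counter

STOPWORDS = {"a", "an", "the", "is", "are", "of", "to", "our", "what"}

def tokenize(text: str) -> Counter:
    """Lowercase, strip punctuation, drop stopwords, count terms."""
    words = re.findall(r"\w+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble retrieved context and the user question into one prompt."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The support desk is open Monday through Friday, 9am to 5pm.",
]
print(build_prompt("What is the refund policy?", corpus))
```

The retrieval step is what gives the chatbot awareness of personal or private context: the model itself is unchanged, but each prompt carries the documents it needs to answer from.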