The latest Cybersecurity Workforce Study from ISC2 shows that despite a healthy 8.7% growth in the cybersecurity workforce, the gap between the workers needed and those available has also grown, up 12.6% year over year. At the same time, organizations are experiencing an uptick in ransomware: an average of nearly eight ransomware incidents per year, according to ExtraHop research. Clearly, security analysts need help. With the new, first-of-its-kind AI Search Assistant in RevealX™, ExtraHop has just delivered what overworked teams need.
AI Search Assistant allows any security analyst at any level—even ones with no prior experience using RevealX—to start getting value from the tool immediately. On day one, users can start finding vulnerable devices and proactively hunt for threats on the network.
RevealX is the first network detection and response (NDR) solution to integrate AI-assisted search functionality. The new feature uses a large language model (LLM) to convert users’ natural language questions into device and detection queries.
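In outline, that translation step follows a familiar pattern, sketched below in Python. This is a hypothetical illustration, not ExtraHop's published implementation: `call_llm` stands in for whatever hosted model endpoint is used, and the query shape is invented.

```python
import json

def call_llm(system_prompt: str, user_question: str) -> str:
    """Placeholder for a call to a hosted LLM endpoint."""
    raise NotImplementedError

def question_to_query(question: str, api_schema: str) -> dict:
    # The prompt carries only the product's API schema and the analyst's
    # question -- no device or detection records.
    system_prompt = (
        "You translate questions about network devices and detections "
        "into JSON search queries that conform to this API schema:\n"
        f"{api_schema}\n"
        "Respond with the JSON query only."
    )
    raw = call_llm(system_prompt, question)
    return json.loads(raw)  # a structured query the client executes later
```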
The AI Search Assistant can help organizations reduce risk and build resilience by enabling any user to quickly discover the devices most at risk of being breached, no training or ramp-up required. But it does more than just give end users the answers. Like a good tutor, it also helps analysts build the skills they need to get those answers on their own.
For example, an analyst might ask, “How many devices are currently connected to the internet that aren’t running a CrowdStrike endpoint agent?” Not only will the AI Search Assistant display the results of this natural language question (even if the analyst misspells something or asks in a language other than English), it will also show the query it ran to find those results. New analysts can quickly close domain and product proficiency gaps by learning as they go, instead of slowly climbing the learning curve before they can get useful, intelligent results.
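For a question like the one above, the query the assistant surfaces might look something like this (purely illustrative; this is not actual RevealX query syntax):

```python
# Hypothetical device query the assistant could display beside the results.
query = {
    "resource": "devices",
    "filter": {
        "operator": "and",
        "rules": [
            {"field": "external_connections", "operator": ">", "value": 0},
            {"field": "installed_software", "operator": "not_contains",
             "value": "CrowdStrike Falcon"},
        ],
    },
}
```

Seeing the generated query next to the answer is what turns each search into a small lesson in the product's query language.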
Early customer reviews are in: AI Search Assistant is a boon. One long-standing retail customer enabled the feature as soon as they saw it during a roadmap meeting with ExtraHop. In fact, the customer’s security team was so excited that the roadmap meeting quickly transformed into an exploration of what the new feature could accomplish.
Standing at the Forefront of Innovation
Building the first AI-assisted search in an NDR solution wasn’t easy, much less going from prototype to full-fledged feature in less than four months. “We saw a need for users to explore the product without deep product knowledge,” says ExtraHop Senior Principal Software Engineer Alex Birmingham, who led the technical team in realizing this vision. Choosing the right LLM was only the first challenge the ExtraHop Product Management team faced. The team also needed to overcome throttling limits set by cloud providers and carefully engineer and optimize prompts to deliver accurate results without exposing customer data to the LLM.
Many AI assistants and chatbots must send customer data to an LLM in order to function correctly, but the Product Management team at ExtraHop deemed this an unnecessary risk. “We designed and architected the entire LLM pipeline such that ExtraHop’s data is exposed to the LLM, but customer data is not. The only way that customer data will reach the LLM is if the customer includes it in their question,” says Birmingham. Instead, the LLM, which has been trained extensively on the ExtraHop API, generates a query that leads the customer to their own data; by the time that query runs, the LLM has safely exited the conversation.
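That separation is easy to see in code. Below is a minimal sketch of the execution side, reusing the hypothetical `question_to_query` helper from earlier; the endpoint path and parameter names are illustrative, not the documented ExtraHop REST API.

```python
import requests

def answer_question(question: str, api_schema: str,
                    revealx_url: str, token: str) -> dict:
    # Step 1: the LLM sees only the API schema and the question text.
    query = question_to_query(question, api_schema)  # sketch from earlier

    # Step 2: the LLM is now out of the loop. The generated query runs
    # against the customer's own RevealX instance, so device and
    # detection data never transit the model.
    resp = requests.post(
        f"{revealx_url}/api/v1/devices/search",
        headers={"Authorization": f"Bearer {token}"},
        json=query,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```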
A large reason why RevealX is such an effective security solution is that it can see all the devices connected to the network, detect and alert on all manner of network activity, and attribute those detections to attacker tactics, techniques, and procedures (TTPs). Each of these devices, detections, and attributions has dozens of attributes. That all adds up to a complex data model that plain-English questions don’t map onto easily.
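To make the shape of that model concrete, here is a heavily simplified sketch; the field names are invented for illustration, and real records carry far more attributes:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    id: str
    ip_address: str
    role: str                    # e.g. "database", "domain_controller"
    external_connections: int    # one of dozens of real attributes

@dataclass
class Detection:
    id: str
    title: str
    risk_score: int
    mitre_tactics: list[str] = field(default_factory=list)    # TTP attribution
    participants: list[Device] = field(default_factory=list)  # devices involved
```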
“Early in ideation we saw that everyone was building chatbots, and were tempted to do the same,” Birmingham says. “However, we saw that chatbots are struggling with hallucinations that could lead to errors that may be acceptable in products with lower stakes, but could lead to fatal errors in a security domain.”
With clever prompt engineering, such as including an OpenAPI 3.0 schema and a list of valid inputs, the Product Management team contextualized user inputs within the product and eliminated AI hallucinations. The results of their hard work speak for themselves.
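In spirit, that grounding-plus-validation approach might look like the sketch below; the schema placeholder, field list, and query shape are all hypothetical, not ExtraHop's actual prompts:

```python
import json

OPENAPI_SCHEMA = "..."  # the product's OpenAPI 3.0 document, loaded at startup
VALID_FIELDS = {"role", "ip_address", "installed_software", "external_connections"}

def build_prompt(question: str) -> str:
    # Grounding the model in the real schema and a closed set of valid
    # inputs leaves it no room to invent endpoints or field names.
    return (
        "Use only this OpenAPI 3.0 schema when forming queries:\n"
        f"{OPENAPI_SCHEMA}\n"
        f"Valid filter fields: {sorted(VALID_FIELDS)}\n"
        f"Question: {question}\n"
        "Respond with the JSON query only."
    )

def validate(raw_response: str) -> dict:
    # Deterministic backstop: reject any output that references a field
    # outside the allowed set instead of passing a hallucination along.
    query = json.loads(raw_response)
    for rule in query.get("filter", {}).get("rules", []):
        if rule["field"] not in VALID_FIELDS:
            raise ValueError(f"hallucinated field: {rule['field']}")
    return query
```

Constraining the model to a known schema, then checking its output deterministically, is what lets the feature fail closed rather than fabricate an answer.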