Scanner technology at Guardian360: a technical (r)evolution

Network scan

For a security SaaS company, scanner technology is the heart of the platform: without reliable, efficient scanners, asset discovery and vulnerability management are simply not future-proof. But as many security engineers know, legacy architectures and scanners slowly reach the end of their useful life. In this blog post, we explain our technical choices and the reasons behind replacing our scanner stack.

Why current scanners no longer work
Scanners often produce a dauntingly long list of discovered vulnerabilities. That sounds impressive, but experience shows that noise and irrelevant issues obscure the real risks. Modern software and operating systems are packed with built-in mitigations, so unexploitable vulnerabilities are far more common than they used to be. Asset owners are increasingly focused on eliminating false positives and edge cases, while speed and relevance have become crucial.
Moreover, scans are resource-intensive. Scanning large networks consumes ever more CPU, RAM, and storage, and is outgrowing what traditional probes can handle. The reliability of tools such as OpenVAS is under pressure, and maintenance has become a daily chore.
What also doesn’t help is that many organizations are struggling with a shortage of specialists. Scanners need to quickly identify where limited resources should be deployed, instead of wasting a lot of time determining what’s truly important.
In short, it’s time for a radical step!

New and even more autonomous: towards a fully open-source, community-driven stack
We’re replacing the current closed-source scanner technology in our Lighthouse platform with an open-source stack built around Nuclei: an extremely flexible engine to which you can quickly add your own templates and checks. New vulnerabilities in the wild? The community builds templates, and they are downloaded automatically. We no longer depend on a closed vendor, but on an international community that shares updates rapidly. That means we can respond faster to exploits and trends that, partly thanks to AI, spread around the world at high speed.
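To give a feel for how approachable this is, here is a minimal sketch of what a Nuclei template looks like. The template id, endpoint path, and matcher string below are purely illustrative, not an actual check from our stack:

```yaml
# Minimal illustrative Nuclei template (all names hypothetical)
id: example-panel-detect

info:
  name: Example Admin Panel Detection
  author: guardian360        # illustrative
  severity: info

http:
  - method: GET
    path:
      - "{{BaseURL}}/admin/login"   # hypothetical endpoint
    matchers:
      - type: word
        part: body
        words:
          - "Example Admin Console" # string that identifies the panel
```

A template like this is just a YAML file: drop it into a templates directory and the engine picks it up, which is exactly why the community can share new checks so quickly.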

SIDEBAR: Why did we choose Nuclei?
Several reasons made Nuclei the obvious foundation for our new scanner stack:

  • Nuclei is an open-source engine, built by and for the security community. Thanks to a global network of researchers and professionals, new templates and exploit checks are released remarkably fast. This allows us (and our clients) to respond to zero-days and trending exploits faster than many commercial vendors can keep up.
  • Nuclei templates are flexible (YAML-based) and extremely easy to extend with customer-specific checks or custom business logic, without requiring extensive programming. For example, you can add unique detections for your own APIs that are irrelevant to anyone else.
  • By using feeds, new checks are retrieved automatically. If, for example, a new vulnerability in a SonicWall appliance is discovered anywhere in the world, you can often run a scan based on the freshly shared template within hours.
  • In this way Guardian360 can give back to the community as well: we contribute new checks to the repository ourselves.

But flexibility means more: templates can also be unique to each client or organization. Scan specific endpoints, monitor custom APIs, run hardening checks? Custom, relevant scans can be set up for any organization in the future, without having to code everything yourself.
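As a sketch of what such a customer-specific check could look like, the hypothetical template below verifies that an internal API refuses unauthenticated requests. Every name and path here is invented for illustration:

```yaml
# Hypothetical customer-specific template (endpoint and id are illustrative)
id: customer-api-auth-check

info:
  name: Internal API Requires Authentication
  severity: high

http:
  - method: GET
    path:
      - "{{BaseURL}}/api/v1/customers"  # hypothetical internal endpoint
    matchers:
      - type: status
        status:
          - 200   # flag a finding if the API answers without credentials
```

Because the check is declarative, a check like this stays relevant only for the organization it was written for, and it can live alongside the community templates without any custom engine code.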
From probe to agent: the next step in deployment
The classic scanner probe, running as a virtual machine, remains relevant for certain use cases. But with our new agent approach, we can make deployment even more compact and flexible. Agents run natively on Windows, Linux, and macOS, or even as a container in a CI/CD pipeline. This enables local checks: think of registry checks, operating system hardening, MITRE ATT&CK detection, and much more. Asset discovery also gets a boost: from ARP scans to third-party API connections, all centrally visible in Lighthouse.
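As an illustration of the CI/CD use case, a pipeline job could run the scanner as a container against a staging environment. This is a hedged sketch in GitHub Actions syntax; the workflow name, target URL, and template directory are assumptions, not part of our product:

```yaml
# Hypothetical CI job: scan a staging environment on every push
name: security-scan
on: [push]

jobs:
  scan:
    runs-on: ubuntu-latest
    container: projectdiscovery/nuclei:latest   # official Nuclei image
    steps:
      - uses: actions/checkout@v4               # fetch repo-local templates
      - run: >
          nuclei
          -u https://staging.example.com
          -t ./templates/
          -severity high,critical
```

The appeal of this setup is that a security check becomes just another pipeline step: it fails fast, close to the code, instead of waiting for a periodic network scan.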
In addition to security and hardening insights, we will also focus more on privacy on the endpoints. We have collaborated with Utrecht University on this and will integrate the InfoSec Agent they provided into our agent. Just as important: less resource consumption, more speed, and less bloat, which matters in an age where “simplicity by design” is a must for security tools. We explicitly aim for lightweight agents that pose no risk of CPU or memory hogging, mindful of lessons from well-known platforms where this went terribly wrong.

Using AI in scanner development and reporting
AI in security usually means a lot of marketing bingo and little concrete action. We, however, are using AI purposefully in two new areas of our stack. First, AI helps generate and enrich scanner templates, significantly accelerating development. Second, AI enriches scan results with more context, additional tips, and concrete verification steps that let users immediately confirm whether an issue is really present. That is relevant for anyone who no longer wants to wade through generic CVE lists.
AI isn’t just a cool buzzword on the dashboard, but an accelerator for developers and a resource for the end user.
The scanner replacement is more than a technical upgrade: it’s a vision for more relevant insights, reduced overhead, and an open, flexible approach. As a security engineer, architect, or asset owner, you’ll immediately notice the difference in speed, relevance, and reliability. In our next blog post, we’ll delve deeper into the rollout of the agents and how your organization can take this step without burdening your systems.

What does our timeline look like?
We’re currently testing the new scanner technology on probes in our staging environment. The initial results are promising, but much remains to be done. This includes preparing our central scanner environment, delivering new APIs, and preparing our backend infrastructure to process scan results. We currently expect to deliver these new scanners in mid-Q1 2026, but due to many uncertainties, this delivery date may shift.
Once the new scanners are delivered and in production, we will continue developing the agents. Because a lot of fundamental work has already been done, we expect the first agents to be ready soon after the scanners are delivered. However, we can’t make any firm commitments yet: we don’t know what we don’t know.
