Building a CI/CD Pipeline for RunReveal Detections: Detections-as-Code in Action
How to build automated CI/CD pipelines for security detections with RunReveal. Learn detection-as-code best practices for testing and deployment.

In the rapidly evolving cybersecurity landscape, the ability to deploy, test, and maintain security detections at scale has become a critical capability for modern security teams. Gone are the days when security rules were manually crafted in isolation and deployed through ad-hoc processes. Today's security operations demand the same rigor, automation, and collaboration practices that have revolutionized software development.
This is where detection-as-code comes into play—treating your security detections, rules, and configurations as code that can be version-controlled, tested, reviewed, and deployed through automated pipelines. By implementing CI/CD (Continuous Integration/Continuous Deployment) for your RunReveal detections, you're not just improving operational efficiency; you're fundamentally transforming how your security team collaborates, maintains quality, and responds to threats.
Why Detection-as-Code Matters for Detection Engineering
Traditional security detection management often suffers from several critical issues:
- Lack of version control: Changes to detection rules are often made directly in production without proper tracking.
- No peer review: Critical security logic is deployed without collaborative review processes.
- Testing gaps: Detections are rarely tested against known datasets before deployment.
- Deployment inconsistencies: Manual deployment processes lead to configuration drift and errors.
- Rollback challenges: When a detection causes issues, rolling back changes is complex and error-prone.
Detection-as-code addresses these challenges by bringing software engineering best practices to security operations. When you treat your RunReveal detections as code, you gain:
- Collaborative development: Team members can propose, review, and discuss detection changes through pull requests.
- Automated testing: Every detection change is validated against test data before deployment.
- Audit trail: Complete history of who changed what, when, and why.
- Consistent deployments: Automated pipelines ensure detections are deployed consistently across environments.
- Quality assurance: Linting and validation catch syntax errors and logic issues early.
The RunReveal Detection CI/CD Pipeline
Let's build a comprehensive CI/CD pipeline that will transform your detection engineering workflow. This pipeline will ensure that every detection change is properly tested, reviewed, and deployed with confidence.
Pipeline Overview
Our CI/CD pipeline consists of three main stages:
- Repository setup: Structure your detections in a version-controlled repository.
- Pull request (PR) validation: Automatically test and validate changes before they're merged.
- Deployment: Sync approved changes to your RunReveal environment.
Step 1: Repository Structure
First, create a Git repository to house your detections. The beauty of this approach is flexibility—you can organize your detections however makes sense for your team:
detections/
├── sigma/
│   ├── okta/
│   │   ├── users/
│   │   └── admin/
│   └── aws/
│       ├── cloudtrail/
│       └── iam/
├── sql/
│   ├── okta/
│   │   ├── users/
│   │   └── admin/
│   └── aws/
│       ├── cloudtrail/
│       └── iam/
├── samples/
│   ├── test_data_1.json
│   └── test_data_2.json
└── .github/
    └── workflows/
        ├── validate.yml
        └── deploy.yml
This structure allows you to:
- Group detections by type (Sigma, SQL) and category
- Maintain test data samples alongside your detections
- Keep CI/CD configuration in version control
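If you're starting from scratch, the layout above can be bootstrapped with a few mkdir commands. This is just a convenience sketch; the directory names mirror the example tree and should be adjusted to your own taxonomy:

```shell
# Bootstrap the example repository skeleton (run from the repo root)
mkdir -p detections/sigma/okta/users detections/sigma/okta/admin
mkdir -p detections/sigma/aws/cloudtrail detections/sigma/aws/iam
mkdir -p detections/sql/okta/users detections/sql/okta/admin
mkdir -p detections/sql/aws/cloudtrail detections/sql/aws/iam
mkdir -p detections/samples detections/.github/workflows

# Show the resulting tree
find detections -type d | sort
```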
Step 2: PR Validation Pipeline
When a team member creates a pull request, three critical validation steps should run to ensure the quality and functionality of the proposed changes:
Validation Step 1: Linting
runreveal lint <sigma/sql> <file>
This command validates the syntax and structure of your detection files, catching common errors like:
- Invalid YAML syntax in Sigma rules
- Malformed SQL queries
- Missing required fields
- Incorrect data types
Run lint once for your Sigma rules and once for your SQL expressions. Conveniently, the command accepts directories, so in our contrived example, runreveal lint sql sql/ and runreveal lint sigma sigma/ will validate all detections of each type in one pass. If any file fails validation, the command exits with an error.
Validation Step 2: Detection Testing
runreveal detection run -f <detection> -i <sample>
This step tests your detection against sample data to ensure it behaves as expected. Here's where it gets interesting—this command has specific behavior that you need to account for in your pipeline:
- Exit Code 0: No matches found in the sample data
- Exit Code 1: Matches found (detection triggered)
Your pipeline needs to be instrumented to handle these exit codes according to your testing strategy. For example, if you provide sample data that should trigger your detection, you'll want to expect exit code 1. If you're testing with clean data, you'll expect exit code 0.
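This exit-code contract can be expressed as a small helper. The sketch below substitutes stand-in commands for the real CLI call (shown in a comment) so the logic can be verified locally without the RunReveal CLI installed:

```python
import subprocess
import sys

def expect_exit(expected: int, cmd: list) -> bool:
    """Run cmd and compare its exit code against the expected value."""
    actual = subprocess.run(cmd).returncode
    if actual == expected:
        print(f"PASS: exited {actual} as expected")
        return True
    print(f"FAIL: expected exit {expected}, got {actual}")
    return False

# Stand-ins for the real CLI call, which would look like:
#   expect_exit(1, ["runreveal", "detection", "run", "-f", detection, "-i", sample])
ok = expect_exit(1, [sys.executable, "-c", "raise SystemExit(1)"])
ok &= expect_exit(0, [sys.executable, "-c", "raise SystemExit(0)"])
print("all passed" if ok else "some failed")
```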
Here's how to do it with GitHub Actions:
Example .github/detection-tests.json:
[
  {
    "detection_file": "sigma/okta/users/suspicious_login.yml",
    "input_file": "samples/okta_malicious_login.json",
    "expected_exit_code": 1
  },
  {
    "detection_file": "sigma/okta/users/suspicious_login.yml",
    "input_file": "samples/okta_clean_login.json",
    "expected_exit_code": 0
  },
  {
    "detection_file": "sigma/aws/cloudtrail/privilege_escalation.yml",
    "input_file": "samples/aws_privilege_escalation.json",
    "expected_exit_code": 1
  },
  {
    "detection_file": "sql/okta/admin/admin_role_changes.sql",
    "input_file": "samples/okta_admin_changes.json",
    "expected_exit_code": 1
  }
]
Parameterizing tests this way lets you build a solid test suite for all your detections and publish them with confidence that they will work.
To gather sample logs, pull raw log entries from your existing log sources and save them to a file, redacting any sensitive data first.
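A minimal redaction sketch, assuming JSON log samples; the SENSITIVE_KEYS set here is hypothetical and should be adapted to your own log schema:

```python
import json

# Hypothetical set of sensitive keys to scrub before committing samples;
# adjust to match your log schema.
SENSITIVE_KEYS = {"email", "ip_address", "user_agent", "session_token"}

def redact(obj, placeholder="REDACTED"):
    """Recursively replace values of sensitive keys in a parsed log entry."""
    if isinstance(obj, dict):
        return {k: placeholder if k in SENSITIVE_KEYS else redact(v, placeholder)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [redact(v, placeholder) for v in obj]
    return obj

raw = {"actor": {"email": "alice@example.com", "id": "u123"},
       "client": {"ip_address": "203.0.113.7"}}
print(json.dumps(redact(raw)))
```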
One important caveat: for the time being, only streaming detections are supported by the runreveal detection run command.
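Putting the pieces together, a test-suite runner over detection-tests.json might look like the following sketch. The run_cmd parameter is an assumption of this example (it lets the suite be exercised with a fake runner when the RunReveal CLI is not installed):

```python
import json
import subprocess
from typing import Callable, List

def run_suite(cases: List[dict], run_cmd: Callable[[dict], int]) -> int:
    """Run every case and return the number of failures.

    run_cmd maps a test case to the exit code of the detection run;
    it is injected so the suite can be tested without the CLI.
    """
    failures = 0
    for case in cases:
        actual = run_cmd(case)
        expected = case["expected_exit_code"]
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: {case['detection_file']} "
              f"(expected {expected}, got {actual})")
        failures += status == "FAIL"
    return failures

def runreveal_runner(case: dict) -> int:
    # Real runner: shells out to the RunReveal CLI.
    return subprocess.run([
        "runreveal", "detection", "run",
        "-f", case["detection_file"],
        "-i", case["input_file"],
    ]).returncode

# Example using a fake runner that always reports a match (exit code 1):
cases = json.loads("""[
  {"detection_file": "sigma/okta/users/suspicious_login.yml",
   "input_file": "samples/okta_malicious_login.json",
   "expected_exit_code": 1}
]""")
print("failures:", run_suite(cases, run_cmd=lambda case: 1))
```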
Validation Step 3: Sync Dry Run
runreveal detections sync --dry-run -d $INPUT_DIRECTORY
This command validates that your detections can be successfully synced to RunReveal without actually deploying them. It checks for:
- Naming conflicts
- Validation errors that would prevent deployment
Step 3: Sample CI/CD Configuration
Here's an example GitHub Actions workflow that implements this pipeline. Note that a matrix strategy must be declared at the job level (not on an individual step), so the test configuration is loaded in one job and passed to the test job through a job output:
name: Detection Validation

on:
  pull_request:
    branches: [ main ]

jobs:
  load-config:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.load.outputs.matrix }}
    steps:
      - uses: actions/checkout@v3
      - name: Load Test Configuration
        id: load
        run: |
          # GITHUB_OUTPUT values must be single-line; compact the JSON with jq
          echo "matrix=$(jq -c . .github/detection-tests.json)" >> $GITHUB_OUTPUT

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup RunReveal CLI
        run: |
          # Install RunReveal CLI
          curl -sSL https://install.runreveal.com | sh
      - name: Configure RunReveal
        run: |
          runreveal configure --api-key ${{ secrets.RUNREVEAL_API_KEY }}
      - name: Lint Detections
        run: |
          runreveal lint sigma sigma/
          runreveal lint sql sql/
      - name: Dry Run Sync
        run: |
          runreveal detections sync --dry-run -d sigma
          runreveal detections sync --dry-run -d sql

  test:
    runs-on: ubuntu-latest
    needs: load-config
    strategy:
      matrix:
        include: ${{ fromJson(needs.load-config.outputs.matrix) }}
    steps:
      - uses: actions/checkout@v3
      - name: Setup RunReveal CLI
        run: |
          curl -sSL https://install.runreveal.com | sh
      - name: Configure RunReveal
        run: |
          runreveal configure --api-key ${{ secrets.RUNREVEAL_API_KEY }}
      - name: Test Detection
        run: |
          # Capture the exit code without letting a non-zero result fail the step
          set +e
          runreveal detection run -f ${{ matrix.detection_file }} -i ${{ matrix.input_file }}
          exit_code=$?
          set -e
          expected_exit_code=${{ matrix.expected_exit_code }}
          if [ $exit_code -eq $expected_exit_code ]; then
            echo "SUCCESS: ${{ matrix.detection_file }} test passed (exit code: $exit_code)"
          else
            echo "FAILURE: ${{ matrix.detection_file }} expected exit code $expected_exit_code but got $exit_code"
            exit 1
          fi
Step 4: Deployment Pipeline
When changes are merged to your main branch, the deployment pipeline automatically syncs your detections:
name: Deploy Detections

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup RunReveal CLI
        run: |
          curl -sSL https://install.runreveal.com | sh
      - name: Configure RunReveal
        run: |
          runreveal configure --api-key ${{ secrets.RUNREVEAL_API_KEY }}
      - name: Deploy Detections
        run: |
          runreveal detections sync -d ./detections
The Transformative Impact
Implementing this CI/CD pipeline will fundamentally transform your security operations:
- Improved collaboration: Your security team can now work like a development team, with proper code review, discussion, and collaborative improvement of detections.
- Higher quality: Automated testing and validation catch issues before they reach production, reducing false positives and missed threats.
- Faster iteration: Teams can confidently make changes knowing they'll be properly tested and can be quickly rolled back if needed.
- Better documentation: Your Git history becomes a rich source of documentation about why changes were made and how detections evolved.
- Reduced risk: Systematic testing and validation reduce the risk of deploying broken or ineffective detections.
- Scalability: As your detection library grows, the automated pipeline ensures quality remains high without proportional increases in manual effort.
Conclusion
Building a CI/CD pipeline for your RunReveal detections isn't just about automation—it's about bringing engineering rigor to security operations. By treating your detections as code, you're enabling your security team to work more collaboratively, deploy more confidently, and respond to threats more effectively. All this is now supported by our own command-line tool.