What is test-as-code?
Last updated on Friday, August 2025
Test as code isn’t a feature; it’s a philosophy. With Gatling, your performance tests become part of your codebase. They’re stored in Git, reviewed like features, automated in pipelines, and scaled like infrastructure.
You build, test, and ship—all using the same workflow.
That’s how performance stops being a bottleneck and starts being built in.
From test scripts to test strategy
In today's software delivery, tests should evolve with your code rather than living in a separate tool or on someone's laptop.
Think of test as code like you think of source control or CI pipelines: not just a technical choice, but a smarter way to work. Instead of bolting performance tests onto the end of a release cycle, you treat them as first-class citizens in your dev process.
That means:
- Performance tests live in your repo
- You can run, diff, and review them like any other code
- They fit into your test suite alongside unit and integration tests
- They shift load testing from a reactive afterthought to a daily habit
With Gatling, all of this is built in—from day one.
What makes Gatling’s test-as-code model different
Gatling gives you the building blocks to treat load testing like software—versioned, portable, scalable.
1. Write scenarios in real programming languages
Your tests don’t live in a GUI—they live in code. Gatling lets you write performance scenarios in Java, Scala, Kotlin, or JavaScript, using clean SDKs that feel like writing user stories, not scripts.
- Reuse logic like you do in app code
- Organize by feature, user journey, or team
- Use variables, conditionals, and loops, just like you would when writing functions
It’s not just about avoiding vendor lock-in; it’s about making your test code maintainable, expressive, and scalable.
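As a rough sketch, here's what a small scenario can look like with the Java SDK. The class name, base URL, endpoints, and payload below are placeholders, not part of any real service:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

// Illustrative simulation: base URL, endpoints, and payload are placeholders.
public class CheckoutSimulation extends Simulation {

  HttpProtocolBuilder httpProtocol = http
      .baseUrl("https://api.example.com")
      .acceptHeader("application/json");

  // A user journey expressed as plain code: list products, then add one to the cart.
  ScenarioBuilder browseAndBuy = scenario("Browse and buy")
      .exec(http("List products").get("/products"))
      .pause(1)
      .exec(http("Add to cart")
          .post("/cart")
          .body(StringBody("{\"productId\": 42}"))
          .asJson());

  {
    // Open workload model: 10 new users arrive every second for 2 minutes.
    setUp(browseAndBuy.injectOpen(constantUsersPerSec(10).during(120)))
        .protocols(httpProtocol);
  }
}
```

The same journey could be written in Scala, Kotlin, or JavaScript with the matching SDK; the structure stays the same.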
2. Run tests the same way you run builds
If you’re using CI/CD, you already know how critical test automation is. Gatling makes it just as easy to automate performance testing.
- Trigger tests on pull requests, merges, or scheduled runs
- Pass dynamic config and test data from the pipeline
- Fail builds automatically on SLA breaches
- Export test results to JUnit, dashboards, or monitoring tools
No extra tools, no flaky UI clickers; just executable code testing your system like a real user.
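For example, a simulation can read load parameters that the pipeline passes in as JVM system properties. A minimal sketch, assuming hypothetical property names (targetUrl and usersPerSec) supplied by the CI job:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;

// The property names "targetUrl" and "usersPerSec" are illustrative; pass them from
// your pipeline, e.g. mvn gatling:test -DtargetUrl=... -DusersPerSec=...
public class PipelineDrivenSimulation extends Simulation {

  String targetUrl = System.getProperty("targetUrl", "https://staging.example.com");
  int usersPerSec = Integer.getInteger("usersPerSec", 5);

  ScenarioBuilder scn = scenario("CI smoke load")
      .exec(http("Health check").get("/health"));

  {
    setUp(scn.injectOpen(constantUsersPerSec(usersPerSec).during(60)))
        .protocols(http.baseUrl(targetUrl));
  }
}
```

Because the simulation is plain code, the same script runs unchanged on a laptop and in the pipeline; only the properties differ.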
3. Assertions as safety nets
Set latency limits, error budgets, or throughput goals directly in the test file, just like you’d write a unit test with assertions.
You can assert things like:
- “95th percentile response time < 500 ms”
- “Error rate = 0%”
- “Throughput stays above 300 RPS”
These assertions give you guardrails in CI/CD, so you catch regressions before they hit production, and every new feature has to meet your performance testing criteria.
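As a sketch in the Java SDK, the three goals above map onto assertions like these (the endpoint and injection profile are illustrative; percentile3() is the 95th percentile under Gatling's default percentile configuration):

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;

// Illustrative endpoint and load profile; the assertions mirror the three goals above.
public class SlaGateSimulation extends Simulation {

  ScenarioBuilder scn = scenario("Search")
      .exec(http("Search").get("/search?q=gatling"));

  {
    setUp(scn.injectOpen(rampUsersPerSec(10).to(350).during(300)))
        .protocols(http.baseUrl("https://api.example.com"))
        .assertions(
            // 95th percentile response time < 500 ms
            global().responseTime().percentile3().lt(500),
            // Error rate = 0%
            global().failedRequests().percent().is(0.0),
            // Throughput stays above 300 requests per second
            global().requestsPerSec().gt(300.0));
  }
}
```

When an assertion fails, the run is reported as failed, which is what lets a CI job gate the build on it.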
Model real-world traffic with confidence
Most tools throw “users per second” at your app and call it load testing. Gatling takes it further.
You can choose from different workload models:
- Open models (arrival rate-based): great for spike and soak tests
- Closed models (concurrency-based): perfect for session-heavy apps or API backends
- Custom injection profiles: tailor-made for retail peaks, streaming bursts, or SaaS onboarding floods
With built-in support for spike, stress, volume, and regression testing, you get precision over guesswork and insights you can trust.
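Here's a rough Java SDK sketch showing an open and a closed profile side by side in one simulation; the scenario names, endpoints, and numbers are illustrative:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;

// Scenario names, endpoints, and numbers are placeholders.
public class WorkloadModelsSimulation extends Simulation {

  ScenarioBuilder api = scenario("API traffic")
      .exec(http("Get item").get("/items/1"));

  ScenarioBuilder portal = scenario("Portal sessions")
      .exec(http("Dashboard").get("/dashboard"));

  {
    setUp(
        // Open model: you control the arrival rate of new users (spike/soak style).
        api.injectOpen(
            rampUsersPerSec(5).to(100).during(120),
            constantUsersPerSec(100).during(600)),
        // Closed model: you control how many users are active at once (session-heavy apps).
        portal.injectClosed(
            rampConcurrentUsers(10).to(200).during(120),
            constantConcurrentUsers(200).during(600)))
        .protocols(http.baseUrl("https://api.example.com"));
  }
}
```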
Realism matters: data-driven test cases
Performance tests shouldn’t just hit /login with the same credentials over and over. Gatling helps you simulate real users doing real things:
- Feeders inject fresh data from CSV, JSON, or generated sets
- Sessions keep track of state across requests
- Virtual users behave independently—no shared cache, no test pollution
This lets you validate not just load but logic: Does the system break when users all hit search with different queries? When inventory is low? When tokens expire?
That’s where real issues hide. Gatling helps you find them.
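A minimal sketch in the Java SDK, assuming a hypothetical users.csv with username and query columns and a /login endpoint that returns a JSON token:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.FeederBuilder;
import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;

// Assumes a hypothetical users.csv with "username" and "query" columns, and a /login
// endpoint that returns a JSON body containing a "token" field.
public class DataDrivenSimulation extends Simulation {

  FeederBuilder<String> users = csv("users.csv").circular();

  ScenarioBuilder scn = scenario("Search as real users")
      .feed(users)                                          // each virtual user pulls its own row
      .exec(http("Login")
          .post("/login")
          .formParam("username", "#{username}")
          .check(jsonPath("$.token").saveAs("authToken")))  // state lives in the user's session
      .exec(http("Search")
          .get("/search?q=#{query}")
          .header("Authorization", "Bearer #{authToken}"));

  {
    setUp(scn.injectOpen(constantUsersPerSec(20).during(120)))
        .protocols(http.baseUrl("https://api.example.com"));
  }
}
```

Each virtual user carries its own session, so the saved authToken never leaks between users.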
All the benefits of real code, for your tests
Because Gatling uses real source code for your scenarios, you unlock a lot of what makes modern engineering productive:
- Code review before a test runs
- Static analysis for clean formatting and structure
- Refactors across test scenarios
- Support for integration testing, unit test coverage, and functional validations all in one test suite
It’s not just easier to maintain; it’s also easier to collaborate on.
{{cta}}
Gatling Enterprise: take test as code to scale
Writing great tests is just step one. Running them at scale, sharing results, and managing complexity is where Gatling Enterprise comes in.
Here’s what you unlock:
Distributed test execution
- Run millions of users across cloud regions, on-premises VMs, or hybrid setups
- Combine public locations with private generators behind firewalls
- Simulate global traffic or region-specific loads from a single config
Live dashboards and comparisons
- Monitor test execution in real time—latency, throughput, errors
- Use Run Trends to compare results across builds and releases
- Drill into test explorer metrics to understand what really caused a slowdown
Configuration as code
- Define your test setup in YAML
- Pin environment variables, injection profiles, and thresholds
- Use Git to track everything—just like your production code
Built-in security and team governance
- SSO (OIDC, SAML), RBAC, audit logs, and secrets management
- Quotas and usage dashboards to keep automated testing within budget
- Private packages for sensitive environments (no raw data leaves your network)
How to start: turn tests into code
Here’s a practical way to start shifting left:
- Pick your language: Java, Kotlin, Scala, or JavaScript—all supported and ready to use in your existing tools (including VS Code)
- Write your first test: Use the Gatling Recorder, import from Postman, or start from a template. Keep it small and readable.
- Codify your expectations: Add assertions for latency, throughput, and error rates. This is your performance test coverage baseline.
- Automate: Run tests in CI/CD, export results to JUnit, archive HTML reports, and fail on regression.
- Scale with confidence: Move to Enterprise for distributed load, team collaboration, and deep analytics. Think like DevOps, not QA.
How Sophos made test as code work across teams
Sophos, a global leader in cybersecurity, uses Gatling Enterprise to scale performance testing across decentralized teams. Each team owns and writes its own test code, validating backend APIs against strict SLAs with every deployment.
By embedding performance tests into their CI/CD pipelines, developers catch issues early, reduce cloud resource waste, and ensure services stay fast and efficient. Gatling’s scripting flexibility, real-time dashboards, and repeatable simulations make it easy to reuse tests across services—turning load testing into a shared, team-owned practice.
When tests become code, performance becomes culture
Most teams already write code, review code, and deploy code in a structured, automated way.
Gatling lets you bring test code into that same world.
- Fewer surprises in production
- More confidence in your releases
- Better feedback loops for your team
- Real-world validation that keeps up with fast releases
Because when testing becomes code, it stops being a side project and starts being part of how you build great software.
{{card}}
FAQ
What is test-as-code?
Test-as-code means defining test scenarios using programming languages rather than graphical user interfaces. Instead of dragging and dropping test steps, you write them in code—allowing for version control, code reviews, automation, and reuse. In load testing, this makes it easier to model realistic traffic, simulate dynamic user journeys, and scale testing via CI/CD.
How does Gatling support test-as-code?
Gatling was built from the ground up to support test-as-code. Its DSLs (in Java, Scala, Kotlin, and JavaScript) let you write load test scenarios directly in code. These scripts are just text files, easily maintained in Git, reviewed in pull requests, and triggered from your CI/CD workflows. This aligns perfectly with modern DevOps practices.
How does Gatling compare to GUI-based tools like JMeter?
Compared to GUI-based tools like JMeter, Gatling's code-centric model offers: easier collaboration via version control and code review; precise modeling of user logic with loops, conditions, and data feeders; seamless CI/CD integration and automated performance gating; and cleaner maintenance and easier refactoring. Legacy tools often suffer from brittle test plans, poor diffing in Git, and limited reusability. Gatling eliminates those pain points.
Can non-developers contribute to performance testing with Gatling?
Yes—Gatling provides a Recorder that converts user sessions into code, and Gatling Enterprise includes a no-code scenario builder that exports test definitions as code. This enables QA engineers and testers without deep programming skills to contribute to performance testing—while keeping code as the source of truth.
