If your platform is subject to GLI-19 certification — and if you operate in most regulated iGaming jurisdictions, it is — you've probably been through a lab submission at least once. You likely also have a story about something that failed, or nearly failed, that your QA team hadn't anticipated. This is almost always a documentation or traceability problem, not a software problem.
GLI-19 is Gaming Laboratories International's technical standard for online gaming systems. It covers everything from RNG fairness to responsible gambling controls to cashier integrity. Most of it is testable. Not all of it is automatable. And the parts that aren't automatable are exactly where teams tend to underinvest.
What GLI-19 actually is
GLI-19 is structured as a series of sections covering different aspects of an online gaming platform. The major areas include: game mathematics and RNG, game integrity, responsible gambling, cashier and financial controls, security, system integrity, and player interface requirements. Each section contains specific requirements that must be met and, critically, must be demonstrated through documented testing.
The standard doesn't prescribe how you test — it prescribes what you must demonstrate. That distinction matters. You can use automated tooling as supporting evidence for many requirements. You cannot use automated tooling as the sole evidence for requirements that specifically mandate human observation, judgment, or signed attestation.
GLI-19 uses the phrase "the testing laboratory shall verify" in multiple places. The laboratory is a human entity with certification and accountability. Your CI/CD pipeline is not a testing laboratory.
What QA teams typically miss
In our experience, there are five areas where QA teams consistently underdeliver against GLI-19 requirements — not because the software is broken, but because the evidence trail is incomplete. The first three are below; the remaining two, RNG and responsible gambling, are large enough to warrant sections of their own.
1. Multi-jurisdictional game rule variations. GLI-19 requirements are often applied through a national regulatory overlay. UK, Malta, Gibraltar, and New Jersey all have GLI-19-derived standards with local variations. If your platform serves multiple jurisdictions, each variant needs its own documented verification. Teams often test the default ruleset thoroughly and assume the jurisdictional variants are "close enough." They are not. Each variant needs explicit test execution evidence.
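One way to make "each variant needs explicit test execution evidence" concrete is to run the same checks once per jurisdiction and record a result row for each, rather than a single default-ruleset pass. A minimal sketch: the jurisdiction names are real, but the rule values and function names below are hypothetical illustrations, not the actual regulatory parameters.

```python
# Hypothetical per-jurisdiction rule variants. Real values come from each
# regulator's GLI-19 overlay, not from this sketch.
JURISDICTION_RULES = {
    "UK":         {"autoplay_allowed": False, "min_spin_seconds": 2.5},
    "Malta":      {"autoplay_allowed": True,  "min_spin_seconds": 0.0},
    "New Jersey": {"autoplay_allowed": True,  "min_spin_seconds": 0.0},
}

def verify_game_config(game_config: dict, rules: dict) -> list[str]:
    """Return the rule violations for one jurisdictional variant."""
    failures = []
    if game_config["autoplay"] and not rules["autoplay_allowed"]:
        failures.append("autoplay enabled where prohibited")
    if game_config["spin_seconds"] < rules["min_spin_seconds"]:
        failures.append("spin duration below jurisdictional minimum")
    return failures

def run_variant_suite(game_config: dict) -> dict[str, list[str]]:
    """Execute the same checks once per jurisdiction: one evidence row each."""
    return {name: verify_game_config(game_config, rules)
            for name, rules in JURISDICTION_RULES.items()}

# A config that passes the default ruleset is not "close enough" everywhere.
results = run_variant_suite({"autoplay": True, "spin_seconds": 1.0})
```

Here the Malta and New Jersey variants pass while the UK variant fails twice — exactly the gap a default-only test run would hide.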
2. Game mathematics validation. The theoretical return-to-player (RTP) percentage is a mathematical model. Testing that the software correctly implements that model requires specific statistical sampling — typically hundreds of thousands of game rounds, documented against the certified mathematical model. Most QA teams run this in automation and consider it done. The gap is documentation: can you produce, for the auditor, a signed statement from a named tester that they reviewed the statistical output and it aligns with the certified RTP? Automation produces the data. It doesn't produce the attestation.
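The statistical side of that sampling can be sketched as follows. The paytable here is invented (a toy game with a theoretical RTP of 0.96: 0.5·0 + 0.4·1.9 + 0.1·2.0), and the round counts are illustrative — the point is that the empirical RTP over a large sample should sit within a tight band of the certified figure, and that this output is the data a named tester then reviews and signs off.

```python
import math
import random
import statistics

def simulate_rounds(n: int, seed: int = 42) -> list[float]:
    """Toy game: payout per 1.0 staked per round. Hypothetical paytable with
    a theoretical RTP of 0.96 (0.5 * 0 + 0.4 * 1.9 + 0.1 * 2.0)."""
    rng = random.Random(seed)
    payouts = []
    for _ in range(n):
        r = rng.random()
        if r < 0.5:
            payouts.append(0.0)      # loss
        elif r < 0.9:
            payouts.append(1.9)      # small win
        else:
            payouts.append(2.0)      # larger win
    return payouts

CERTIFIED_RTP = 0.96
payouts = simulate_rounds(500_000)
empirical_rtp = statistics.fmean(payouts)
# Standard error of the mean: the expected statistical wobble at this sample size.
stderr = statistics.pstdev(payouts) / math.sqrt(len(payouts))
deviation = abs(empirical_rtp - CERTIFIED_RTP)
```

The automation ends at producing `empirical_rtp`, `stderr`, and `deviation`; the attestation that these align with the certified model is a human signature, not an assert statement.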
3. Bonus logic edge cases. Bonus mechanics are a common source of compliance failures. Wagering requirements, eligibility rules, and game contribution percentages need to be tested not just for the happy path but for the edge cases: what happens when a player deposits mid-wager? When they change game mid-bonus? When a promotion expires while a bonus is active? These scenarios need documented test cases with expected results derived from the certified rules — not just exploratory testing.
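The deposit-mid-wager case can be pinned down with a tiny state model. This is a sketch under assumed rules (mid-wager deposits credit cash only; wagers draw bonus funds first; all names are hypothetical) — your certified rules, not these, define the expected results.

```python
from dataclasses import dataclass

@dataclass
class BonusWallet:
    """Minimal sketch of a bonus wallet with a wagering requirement.
    The rules encoded here are hypothetical; certified rules govern."""
    cash: float = 0.0
    bonus: float = 0.0
    wagering_remaining: float = 0.0

    def deposit(self, amount: float) -> None:
        # Edge case under test: a deposit while wagering is outstanding
        # credits cash only and must not alter wagering_remaining.
        self.cash += amount

    def wager(self, amount: float, contribution: float = 1.0) -> None:
        # Assumed rule: bonus funds are consumed before cash, and the
        # wagering requirement reduces by stake * game contribution.
        from_bonus = min(amount, self.bonus)
        self.bonus -= from_bonus
        self.cash -= amount - from_bonus
        self.wagering_remaining = max(
            0.0, self.wagering_remaining - amount * contribution)

wallet = BonusWallet(cash=0.0, bonus=100.0, wagering_remaining=3000.0)
wallet.wager(10.0)     # draws bonus funds; reduces the requirement
wallet.deposit(50.0)   # mid-wager deposit: cash up, requirement untouched
```

A documented test case would state the post-conditions (cash 50.0, bonus 90.0, wagering 2990.0) as expected results derived from the certified rules, then record the actual outcome against them.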
RNG: the most misunderstood requirement
Random Number Generators are the heartbeat of any gaming platform, and GLI-19's RNG requirements are among the most technically specific in the standard. What most teams get wrong is treating RNG testing as purely statistical: pass the chi-squared test, pass the NIST suite, done (ISTQB ATTA). That's necessary but not sufficient.
GLI-19 also requires verification of seeding and re-seeding behaviour: how the RNG is initialised, how frequently it re-seeds, and what happens to game outcomes around re-seed events. It requires verification that the RNG is inaccessible to players and operators in ways that could influence outcomes. And it requires that the RNG implementation matches the certified technical specification — not just that it produces statistically acceptable output.
This is where the automated/manual boundary becomes most clear. Statistical tests are ideal automation candidates. Behavioural verification of seeding and access control requires documented human test execution against a specific test plan aligned to the certified specification. A CI/CD test that asserts "RNG output passes chi-squared" is not the same thing as a signed test execution record verifying that the RNG implementation conforms to the certified specification.
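To illustrate what the automatable half looks like — and how little it proves on its own — here is a bare-bones chi-squared uniformity check. The sample size and seed are arbitrary, and this is one test from a family, not the NIST suite.

```python
import random

def chi_squared_uniformity(samples: list[int], k: int) -> float:
    """Chi-squared statistic for k equally likely outcomes 0..k-1."""
    expected = len(samples) / k
    counts = [0] * k
    for s in samples:
        counts[s] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(1234)
samples = [rng.randrange(10) for _ in range(100_000)]
stat = chi_squared_uniformity(samples, 10)
# Critical value for 9 degrees of freedom at alpha = 0.05 is ~16.92.
# Passing this says nothing about seeding, re-seed windows, or whether
# the implementation matches the certified specification.
```

This is exactly the kind of assertion that belongs in CI — as supporting evidence. The behavioural verification of seeding and access control still needs a human-executed, signed test record.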
Responsible gambling flows
Responsible gambling (RG) controls are increasingly scrutinised by regulators, and GLI-19 Section 12 (in current versions) is correspondingly detailed. The common failure modes are subtle.
Self-exclusion: Testing that self-exclusion prevents login is the easy part. GLI-19 requires verification that self-exclusion propagates appropriately across marketing systems, affiliate feeds, and email communications. QA teams often stop at the platform boundary. Regulators don't.
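A propagation check that stops at the platform boundary looks like the first entry below; the point is to enumerate every downstream channel and demand an acknowledgement from each. The channel names and callables here are hypothetical stand-ins for real suppression integrations.

```python
from typing import Callable

def propagate_self_exclusion(
        player_id: str,
        channels: dict[str, Callable[[str], bool]]) -> dict[str, bool]:
    """Fan a self-exclusion out to every downstream system and record
    each acknowledgement. Channel names below are illustrative."""
    return {name: suppress(player_id) for name, suppress in channels.items()}

channels = {
    "platform_login":  lambda pid: True,
    "marketing_email": lambda pid: True,
    "affiliate_feed":  lambda pid: False,   # simulated propagation failure
}
acks = propagate_self_exclusion("player-123", channels)
unpropagated = [name for name, ok in acks.items() if not ok]
# A platform-only test would pass here and miss the affiliate feed failure.
```

The test evidence should list every channel checked, not just assert that login is blocked.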
Deposit limits: Testing that a deposit limit blocks a transaction at the limit threshold is necessary. Testing the edge cases — what happens when a limit change request is submitted, when the cooling-off period applies, when limits are removed — requires structured test cases with documented expected behaviour drawn from the certified rules.
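The cooling-off edge case follows a pattern common across jurisdictions — decreases apply immediately, increases only after a cooling-off period — but the pattern and the 24-hour figure below are assumptions to verify against your certified rules, not a statement of any particular regulation.

```python
from datetime import datetime, timedelta

COOLING_OFF = timedelta(hours=24)   # hypothetical; jurisdiction-specific

def effective_limit(current: float, requested: float,
                    requested_at: datetime, now: datetime) -> float:
    """Assumed rule: limit decreases take effect immediately,
    increases only after the cooling-off period elapses."""
    if requested <= current:
        return requested
    return requested if now - requested_at >= COOLING_OFF else current

t0 = datetime(2024, 1, 1)
lowered = effective_limit(500.0, 200.0, requested_at=t0, now=t0)
pending = effective_limit(200.0, 500.0, requested_at=t0,
                          now=t0 + timedelta(hours=1))
applied = effective_limit(200.0, 500.0, requested_at=t0,
                          now=t0 + timedelta(hours=25))
```

Each of the three calls above corresponds to a documented test case with an expected result; the mid-cooldown case (`pending`) is the one teams most often omit.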
Time and session controls: Session reminders and time limits need to be tested not just for functional correctness but for timing accuracy. A session reminder that fires 7 minutes late rather than exactly at the configured interval may be a compliance failure in some jurisdictions. This requires manual timing verification — automated assertion of "reminder appeared" is not sufficient.
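Timing accuracy can at least be quantified from a recorded execution log before the manual verification step. In this sketch the observed timestamps and the 30-second tolerance are invented for illustration; the actual tolerance is jurisdiction-specific.

```python
INTERVAL_S = 3600.0    # configured reminder interval (1 hour)
TOLERANCE_S = 30.0     # assumed jurisdictional tolerance; confirm per market

start = 0.0
# Hypothetical execution log: first reminder near-on-time,
# second roughly 7 minutes late.
observed = [3604.2, 7621.0]

failures = []
for i, ts in enumerate(observed, start=1):
    drift = (ts - start) - i * INTERVAL_S   # positive means it fired late
    if abs(drift) > TOLERANCE_S:
        failures.append((i, round(drift, 1)))
# failures lists (reminder index, drift in seconds) for out-of-tolerance events.
```

Note this only measures drift against a log; the assertion "reminder appeared" remains insufficient without a human-verified record of when it appeared and against what tolerance.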
Evidence packs: what auditors actually want
The single biggest gap in most teams' GLI-19 testing is evidence quality. An auditor reviewing a lab submission wants to see:
- A test plan explicitly referencing the relevant GLI-19 clauses
- Test cases with documented expected results derived from the certified specification
- Execution records showing who executed each test, on what build, on what date
- Actual results with pass/fail determination and any defect references
- A summary sign-off from a named, accountable individual
Screenshots of a green CI/CD pipeline do not constitute an evidence pack. Test reports generated by Playwright do not constitute an evidence pack. These may be useful supporting material. They are not the primary evidence.
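The execution-record portion of an evidence pack can be made machine-checkable for completeness even though the content itself is human-produced. The field names below are illustrative; the required content mirrors the bullet list above.

```python
from dataclasses import dataclass, asdict

@dataclass
class ExecutionRecord:
    """One row of a GLI-19 evidence pack. Field names are hypothetical;
    the substance tracks what an auditor asks for."""
    gli19_clause: str
    test_case_id: str
    expected_result: str
    actual_result: str
    verdict: str              # "pass" / "fail"
    tester: str               # named, accountable individual
    build: str
    executed_on: str          # ISO date
    defect_ref: str = ""      # optional; populated on failure

def missing_fields(record: ExecutionRecord) -> list[str]:
    """Flag mandatory fields left empty (defect_ref is legitimately optional)."""
    return [k for k, v in asdict(record).items()
            if v == "" and k != "defect_ref"]

record = ExecutionRecord(
    gli19_clause="4.3.2", test_case_id="TC-0142",
    expected_result="Re-seed has no observable effect on in-flight rounds",
    actual_result="As expected", verdict="pass",
    tester="", build="platform-5.2.1", executed_on="2024-03-14")
gaps = missing_fields(record)   # flags the unsigned tester field
```

A completeness check like this does not replace the sign-off — it just stops an unsigned record from reaching the submission unnoticed.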
The ISTQB Advanced Level Test Manager syllabus is instructive here (ISTQB AL): it defines test completion reporting requirements in terms of what a stakeholder needs to make a release decision. In a regulated context, the stakeholder is partially an external auditor, and their requirements are defined by the standard — not by your release process preferences.
Traceability: the part everyone skips
Traceability — the ability to trace a test case back to the specific requirement it verifies — is a foundational concept in structured testing (ISTQB FL). Most QA teams understand traceability in principle and implement it inconsistently in practice. In a GLI-19 context, inconsistent traceability is a compliance risk.
Every test case in your GLI-19 test suite should be explicitly mapped to one or more clauses of the standard. If an auditor asks "where is your test evidence for GLI-19 Section 4.3.2?", you should be able to produce it immediately. If your test suite is organised by feature rather than by standard clause, producing that answer requires manual cross-referencing that introduces delay and error risk.
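One low-tech way to make that lookup immediate is to tag each test case with the clauses it covers and invert the mapping. The test case IDs, titles, and the clause `12.1.1` below are hypothetical; `4.3.2` is the clause used as the example above.

```python
from collections import defaultdict

# Hypothetical clause tags embedded in each test case's metadata.
TEST_CASES = {
    "TC-0142": {"clauses": ["4.3.2", "4.3.5"], "title": "RNG re-seed window"},
    "TC-0143": {"clauses": ["4.3.2"],          "title": "RNG access control"},
    "TC-0290": {"clauses": ["12.1.1"],         "title": "Self-exclusion login block"},
}

def clause_index(cases: dict) -> dict[str, list[str]]:
    """Invert the suite: GLI-19 clause -> IDs of test cases covering it."""
    index = defaultdict(list)
    for tc_id, meta in cases.items():
        for clause in meta["clauses"]:
            index[clause].append(tc_id)
    return dict(index)

index = clause_index(TEST_CASES)
# "Where is your evidence for Section 4.3.2?" becomes a lookup, not a
# manual cross-referencing exercise. Clauses with no entry are coverage gaps.
```

The same index, run in reverse over the full standard, doubles as a coverage report: any clause absent from the keys has no test evidence at all.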
Practical checklist
If you're preparing for a lab submission or reviewing your GLI-19 testing coverage, these are the questions to ask:
- Does every test case in your compliance suite reference the specific GLI-19 clause(s) it covers?
- Do your RNG test execution records include named tester, build version, date, and a signed attestation?
- Are jurisdictional variants tested separately with separate execution evidence?
- Do your responsible gambling test cases cover the edge cases (mid-cooldown changes, limit removal, cross-channel propagation)?
- Can you produce a complete evidence pack — plan, cases, execution, results, sign-off — within 24 hours if asked?
- Is your test documentation version-controlled and linked to the specific software build it was executed against?
If the answer to any of these is "no" or "probably not", the documentation gap is the highest-priority risk to address before your next submission.
References: GLI-19 Technical Standard for Online Gaming Systems v1.1; ISTQB Foundation Level Syllabus v4.0; ISTQB Advanced Level Test Manager Syllabus; ISTQB Advanced Technical Test Analyst Syllabus.