Executive Summary

A number of projects to standardize OpenSCENARIO 1.x and the upcoming OpenSCENARIO 2.0 were initiated in the ASAM Simulation domain starting in 2019 and proceeding in 2020/2021. In parallel, projects to standardize or continue standardization of content (OpenDRIVE, OpenCRG), meta-data (like OpenLABEL), interfaces (OSI), operational design domain (OpenODD), and a common ontology (OpenXOntology) were initiated.

While this means that many pieces of the puzzle of standardized scenario-based testing workflows are being tackled across ASAM, the overall interplay between those pieces, as well as with the classical testing and test automation standards (like OTX and XIL/XIL-MA), is left somewhat undefined.

This study project shall be tasked with examining the relevant use cases for scenario-based testing in more detail, in order to identify all relevant standards, potential workflows and their variants, and the overall interplay between those parts to form a cohesive whole.

This analysis should ultimately lead to a documented set of overall use cases for scenario-based testing and a set of potential workflows implementing them, together with identified roles, standards, and their application.

Additionally, any identified gaps in the workflows shall be characterized, leading to the identification of potentially needed additions to existing standards, or even the need for completely new standards. Requirements for those standards or additions shall be collected and documented.

1. Overview

1.1. Motivation

The introduction of scenario-based testing into the overall development processes of advanced driver assistance systems as well as automated driving has been one of the driving forces for the new standards being developed in the ASAM Simulation domain. This includes the development of the OpenSCENARIO 1.0 and 2.0 scenario specification language standards, as well as related content (OpenDRIVE, OpenCRG), meta-data (OpenLABEL), interface (OSI), operational design domain (OpenODD), and common ontology (OpenXOntology) standards.

Figure 1. An example of a scenario-based testing workflow that leverages the ASAM OpenX standards.

The introduction of scenario-based testing does not, however, occur in a vacuum, but rather into an industry practice which is already heavily invested in standardized testing and test automation. This investment is visible in the prominence of such standards (OTX, XIL/XIL-MA) in the ASAM test automation domain, as well as in lived industry practice.

The introduction of scenario-based testing into these workflows leads to a series of design challenges on the interplay between scenarios, test cases, test platforms and test automation. There is currently no overall solidified industry consensus on the practical aspects of this, which could be relied on to shape the interplay of the parts and their respective standards into a harmonious whole.

This is in part also caused by a greatly increased set of stakeholders touched by scenario-based testing, as contrasted with the set of stakeholders previously touched by the testing and test automation standards: for the development of ADAS and AD functions, new stakeholders like homologation authorities, regulatory bodies, traffic researchers, research consortia, etc. are involved in the formulation, analysis, and promulgation of scenarios, which clearly transcends the set of traditional stakeholders of testing.

For these reasons, it seems necessary and prudent to invest in study at the ASAM level of the interplay between scenario-based testing and the existing testing and test automation landscape, in order to ensure that standardization within and outside of ASAM is shaped in ways that align the different standards as closely as possible while allowing for any necessary variety of industry practice.

1.2. Use Cases

AV use cases are influenced by a variety of factors, such as ODDs, map features, traffic models, weather models, and others. The scenarios assigned to the use cases can be parametrized and typically run in multiple modes of operation. These modes include:

  • On-track tests

  • On-road tests

  • In-simulation tests

  • In-replay (i.e. resimulation) tests

A detailed breakdown of potentially relevant use cases can be found in the OpenSCENARIO 2.0 Concept Document.

The study project shall take into account all relevant use cases for scenario-based testing. The following set of user stories, taken from the OpenSCENARIO 2.0 standardization project proposal, shall serve as one set of starting points for this exploration:

1.3. User Stories from OpenSCENARIO 2.0

1.3.1. SHARE

  1. As an AV/ADAS developer company, I can share with other companies the scenarios I built to test my technology.

  2. As an AV/ADAS developer company, I can search, review and reuse scenarios built by other companies.

  3. As a test engineer working for an AV/ADAS development company, I can build and run tests as similarly as possible to tests other developers at other companies are running.

  4. As a test engineer, I can build and run tests as similarly as possible on different execution platforms.

  5. As a researcher developing new technology, I can reutilize industry and open source scenarios to advance my research.


  1. As an auditor/regulator, I can understand how AV/ADAS developers are testing their products.

  2. As an auditor/regulator, I can compare the outcome of different execution platforms when they have the same OpenSCENARIO input.

  3. As a safety consultant, I can recommend specific scenarios and related conditions (parameters) to my clients to test their products.

  4. As a member of the public, I can learn more details about how AV/ADAS products are tested by AV/ADAS developers.

  5. As a government agency, I can understand what parts of the Operational Domain are verified by an AV/ADAS developer through each scenario.

1.3.3. DEVELOP

  1. As a tool developer, I can reutilize constructs, artifacts and libraries to create tools compatible with other tool vendors in industry.

  2. As a service provider, I can integrate tools from multiple tool vendors to provide an integrated solution to test AV/ADAS scenarios.

  3. As a system engineer working for an AV/ADAS developer company, I can fully trace which hardware and software in the AV/ADAS stack is verified by which tests.

  4. As a software developer, I can process scenario information from different software/hardware releases and produce comparison charts to provide trend and gap analysis.

  5. As an existing tool provider or consumer, I can migrate information from previous versions of OpenSCENARIO into OpenSCENARIO 2.0.

  6. As a system engineer working for an AV/ADAS developer, I can decompose high level use cases in a standardised way.

1.3.4. CREATE

  1. As a content developer, I can use OpenSCENARIO 2.0 to create test scenarios that I can supply to my customers who use an OpenSCENARIO 2.0 compliant toolchain.

  2. As a test engineer, I can transform abstract test descriptions into OpenSCENARIO 2.0 (e.g. NCAP tests, UNECE regulations, any requirement list, …).

  3. As a development project lead, I can write scenarios on an abstract level to discuss the functional behavior with the stakeholders.

  4. As a development project lead, I can create scenarios on an abstract level to document the functional behavior for legal reasons.

  5. As a stakeholder, I can create natural language scenarios without having any knowledge about technical details.


  1. As a SOTIF safety engineer and/or V&V engineer, AV developer, scenario creator, I can use OpenSCENARIO 2.0 to discover scenarios that are going to uncover safety hazards. This refers to SOTIF, i.e. safety hazards that can be present even if the system is functioning as designed, without a malfunction.

  2. As a SOTIF safety engineer and/or V&V engineer, AV developer, scenario creator, I can use OpenSCENARIO 2.0 to create scenarios that are going to produce emergent behavior of the DUT to discover unknown unknowns. OpenSCENARIO 2.0 shall enable demonstrating that minimum residual risk is attained by the DUT. This is because SOTIF focuses on ensuring the absence of unreasonable risk due to hazards resulting from insufficiencies in the intended functionality or from reasonably foreseeable misuse.


  1. As an end-to-end V&V engineer, I can use OpenSCENARIO 2.0 to enable specification of a driving mission through inclusion of multiple maneuvers in a sequence or in parallel for both DUT and any other traffic agents.

  2. As an end-to-end V&V engineer, I can use OpenSCENARIO 2.0 to enable accomplishing a selected driving mission with an indication of whether the mission has been accomplished, what the mission KPIs are and how they are computed, and whether the unambiguous goals of the mission have been attained.


  1. As a traffic model developer, an ADS developer, or end-to-end V&V engineer, I can use OpenSCENARIO 2.0 to enable inclusion of multiple traffic models and AI-based traffic agents in the scenarios and evaluators. Also, OpenSCENARIO 2.0 shall enable inclusion of mechanisms to extract scenarios from the traffic models.

  2. As a test engineer, I can transform high-level scenario descriptions into low-level scenario descriptions and vice versa.

1.3.8. EXECUTE

  1. As a test engineer, I can execute different OpenSCENARIO 2.0 files in an automated way with my OpenSCENARIO 2.0 compliant toolchain; no file modifications are needed.

  2. As a test engineer, I can execute the same OpenSCENARIO 2.0 files on different OpenSCENARIO 2.0 compliant toolchains.

  3. As a test engineer, I can convert abstract scenarios into tests.


  1. A simulation tool can describe randomly executed simulation runs. If the simulation was run stochastically, the user wants a concrete description, in OSC2.0 format, of what has happened.

  2. A traffic observer can describe with OSC2.0 what occurred in the real world.

  3. A test engineer on a test track can describe with OSC2.0 the specific scenario they have observed on the test track. In this way, tolerances and deviations between test description and test execution become obvious.

1.4. Relations to Other Standards or Organizations

1.4.1. References to Other Standards






ISO - 21448 (SOTIF), 34501 (Terms and Terminology for test scenarios), 34502 (Engineering framework for test scenarios), 34503 (ODD taxonomy for test scenarios), 34504 (Test scenario cartograms)

SAE - J3164 is a proposed document to describe maneuvers and behaviors.

UL - UL4600 is a standard published by UL, describing how to write a safety case for AVs.

2. Technical Content

This study project is focused on the interplay between scenario-based testing and the existing testing and test automation landscape, in order to ensure that standardization within and outside of ASAM is shaped in ways that align the different standards as closely as possible while allowing for any necessary variety of industry practice.

This analysis shall be based on the actual use cases of scenario-based testing across the whole extended development process (including pre-processes, like development of homologation and regulatory rules and requirements, effectiveness assessments, etc.).

It shall develop reference workflows that support these use cases.

Based on those reference workflows, a gap analysis shall be undertaken to identify gaps and overlaps in the standardization picture which need to be addressed to enable the most effective implementation of those workflows to be achieved.

As a result a set of recommendations shall be created that address those deficiencies through standard extensions or new standardization projects, as well as through closer coordination between projects within and outside of ASAM.

One source of information for the study project will be the usage and pragmatics results of the OpenSCENARIO 2.0 concept project, which touched upon some of these issues, as well as results from the ongoing OpenSCENARIO 1.x and 2.0 projects. Below you can find relevant excerpts from the OpenSCENARIO 2.0 standardization project proposal (it should be emphasised that these definitions form just one source of input to the study project; a more detailed examination and refined definitions will have to be derived inside the project). Other relevant sources for the project will come from the OTX and XIL standards, as well as from industrial practice and the experience of the study project participants.

2.1. Scenario-based Testing Workflow

OpenSCENARIO 2.0 allows building modular, encapsulated, reusable scenarios that can be ported to different testing platforms. This section will describe the basic flow to construct a test, which contains all the elements and ingredients required by a testing platform, such as road topology, scenery, road surface, driver and traffic models, and more.

This section will describe how scenarios can be mixed and combined to construct a test. Interaction with usage restrictions and guidelines will be presented.

For specific platforms, the user will need to provide context and implementation details to enable execution. For example, in simulation the user may provide for a reusable scenario:

  • A map to be executed on with specific location details

  • Desired ODD restrictions

Various topics related to combining scenarios into meaningful tests will be discussed in this section.
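As an illustration of this split between portable scenario logic and platform-specific context, the following Python sketch binds a reusable scenario to a concrete map and set of ODD restrictions; all class names, field names, and values are hypothetical assumptions for illustration, not part of any OpenSCENARIO format.

```python
from dataclasses import dataclass, field

@dataclass
class ReusableScenario:
    """Portable scenario logic, independent of any execution platform."""
    name: str
    parameters: dict = field(default_factory=dict)

@dataclass
class PlatformContext:
    """Platform-specific ingredients supplied by the user at execution time."""
    map_file: str            # e.g. a map file with specific location details
    odd_restrictions: dict   # desired ODD restrictions for this run

@dataclass
class ExecutableTest:
    """A reusable scenario bound to one concrete platform context."""
    scenario: ReusableScenario
    context: PlatformContext

# Bind a reusable cut-in scenario to a concrete simulation context
# (file name and parameters are made up for the sketch):
scenario = ReusableScenario("cut_in", {"ego_speed_kmh": 80})
context = PlatformContext("highway_map.xodr", {"max_rain_mm_h": 0})
test = ExecutableTest(scenario, context)
print(test.scenario.name)  # cut_in
```

The point of the sketch is only the separation of concerns: the same `ReusableScenario` instance can be paired with different `PlatformContext` objects for different platforms or runs.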

2.2. Test Definition

How should scenarios be tested?

This subsection clarifies the boundaries between test cases and scenarios. Outputs:

  • Workflow guidelines (e.g. when a test is integrated with a scenario vs. defined separately)

In this section we’ll try to detail how a scenario should be tested.

As a starting point, let’s clearly and briefly define what a Test Scenario and a Test Case are: the former answers the question What is to be tested?, while the latter answers How is it to be tested?.

A Test Scenario gives the idea of what needs to be tested and provides some high-level information and a small set of variables/constraints/requirements to understand, but not fully specify, the function under test (FUT); its aim is to ensure that the end-to-end functioning of the software works as intended. As an example, consider the AEB CCRb (Car-to-Car Rear Braking) functionality, where a target car precedes the EGO car on a straight road, driving in the same lane and in the same direction; at a given moment the target car brakes hard, coming to a full stop; the EGO car must be able to detect and react to this condition, slowing down and possibly avoiding contact, without changing lane. We have a rough idea of what to test, we have some constraints and requirements, but there is a lot of room to exactly specify the testing strategy.

A Test Case is the set of positive and negative execution steps and the detailed variables/constraints/requirements specification with which a test engineer can determine whether the FUT is functioning according to the customer’s requirements; several Test Cases can (and should…) be derived from the same Test Scenario in order to ensure that the FUT meets the requirements across a wide range of specializations. Back to our AEB CCRb example: in a specific Test Case we must detail the initial EGO and target speeds, the initial distance between the two cars, the lateral shift, when the target will start to brake, the target deceleration value, whether it is an impact-mitigation or an impact-avoidance case, and so on. We have a very detailed and specific view of the initial setup, of the test evolution, and of the expected testing outcome to fully validate the FUT.
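The derivation of concrete Test Cases from the abstract AEB CCRb Test Scenario can be sketched as a parameter-space expansion. The following Python snippet is illustrative only: the parameter names and value grids are assumptions, not values prescribed by NCAP or any standard.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class CcrbTestCase:
    """One concrete Test Case derived from the abstract AEB CCRb Test Scenario."""
    ego_speed_kmh: float      # initial EGO speed
    gap_m: float              # initial distance between EGO and target
    target_decel_ms2: float   # target deceleration once braking starts
    lateral_shift_m: float    # lateral offset between EGO and target

def derive_ccrb_test_cases():
    """Expand the abstract scenario's parameter space into concrete Test Cases.

    The grids below are made-up example values; a real campaign would take
    them from the applicable requirements and test protocols."""
    ego_speeds = [30.0, 50.0, 80.0]
    gaps = [12.0, 40.0]
    decels = [2.0, 6.0]
    shifts = [0.0, 0.5]
    return [CcrbTestCase(v, g, d, s)
            for v, g, d, s in product(ego_speeds, gaps, decels, shifts)]

cases = derive_ccrb_test_cases()
print(len(cases))  # 3 * 2 * 2 * 2 = 24 concrete Test Cases
```

This mirrors the text above: the Test Scenario fixes the structure (target ahead, hard braking, no lane change), while each generated Test Case pins down the remaining free variables.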

Table 1. Test Scenario vs Test Case

Test Scenario | Test Case
A one-liner, possibly associated with multiple Test Cases | A name, some pre-conditions, test steps, expected results and post-conditions
Guides a user on What to test | Guides a user on How to test
Tests the end-to-end functionality of a software application | Validates a Test Scenario by executing a set of steps
Is derived from a Use Case | Is derived from a Test Scenario
Consists of high-level actions | Consists of low-level actions
Easy to maintain, due to the high-level design | Hard to maintain, due to the heavy specialization
Less time-consuming than Test Cases | More time-consuming than Test Scenarios

2.3. Operational Design Domain

The Operational Design Domain (ODD) of an automated vehicle is a very important consideration in the development of such a vehicle. The ODD defines the boundaries within which the vehicle can operate in a safe manner. This includes environmental elements, such as weather and infrastructure, as well as traffic-related elements, such as traffic participants.

SAE J3016 (2018) gives a more formal definition of the ODD: "Operating conditions under which a given driving automation system or feature thereof is specifically designed to function, including, but not limited to, environmental, geographical, and time-of-day restrictions, and/or the requisite presence or absence of certain traffic or roadway characteristics".

The ODD helps identify the important test cases and test scenarios, as well as define how to evaluate the results of the executed tests.

For this study group it is important to understand what role the ODD can play in the development of an automated vehicle and how the ODD needs to be integrated into the scenario-based testing workflow. The OpenODD Concept project targets a machine-interpretable format for the ODD.
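As a minimal sketch of what a machine-interpretable ODD could look like, the following Python snippet models an ODD as named numeric ranges and checks whether a scenario's conditions fall inside its boundaries; the attribute names and structure are assumptions for illustration, not the OpenODD format.

```python
from dataclasses import dataclass

@dataclass
class Range:
    """A closed numeric interval [lo, hi]."""
    lo: float
    hi: float

    def contains(self, value: float) -> bool:
        return self.lo <= value <= self.hi

@dataclass
class Odd:
    """A toy ODD: one numeric range per named operating condition."""
    ranges: dict  # condition name -> Range

    def covers(self, conditions: dict) -> bool:
        """True if every scenario condition is known and inside the ODD."""
        return all(name in self.ranges and self.ranges[name].contains(value)
                   for name, value in conditions.items())

# Example ODD: up to 130 km/h, light rain only (values are made up):
odd = Odd({"speed_kmh": Range(0.0, 130.0), "rain_mm_h": Range(0.0, 5.0)})
print(odd.covers({"speed_kmh": 120.0, "rain_mm_h": 2.0}))   # True
print(odd.covers({"speed_kmh": 120.0, "rain_mm_h": 12.0}))  # False
```

Such a check is one way the ODD could feed into the workflow: scenarios whose conditions fall outside the ODD can be flagged before, during, or after test execution.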

3. Project Resources

3.1. Work Packages

The project is to be structured into the following set of work packages:

WP 1: SBT-UseCases


Use Cases for Scenario-based Testing

This WP will collect relevant use cases for scenario-based testing from relevant sources (including the OpenSCENARIO 2.0 concept document, the OpenSCENARIO 2.0 Usage and Pragmatics working group, and relevant testing and test automation standards). This set of use cases will be consolidated and refined to cover all steps where scenario-based testing is employed in the overall development process.




Deliverable: Refined Set of Use Cases for Scenario-based Testing


Effort: 30 person-days

WP 2: SBT-ReferenceWorkflows


Reference Workflows for Scenario-based Testing

Based on the collected set of use cases, as well as industry practice and relevant process standards, a set of reference workflows is derived that covers the use cases identified. It should be noted that the reference workflows can contain alternative workflows where relevant, to cover the full reality of current and anticipated industry practice. The reference workflows should be abstract enough to be easily mapped to actual industry practice, while retaining enough detail to enable identification and analysis of the relevant standards needed to support them.




Deliverable: Set of Reference Workflows for Scenario-based Testing


Effort: 40 person-days

WP 3: SBT-GapAnalysis


Gap Analysis and Recommendations

This WP will analyse the current standardization landscape based on the use cases and reference workflows in order to identify gaps (and overlaps) in the available standards to support the given workflows. Based on this analysis, a set of recommendations is developed on how best to address these gaps, including recommendations on necessary standard extensions or new standards; basic requirements for those extensions or developments will be included as part of the recommendations.




Deliverable: Gap Analysis and Recommendations on Further Actions


Effort: 30 person-days

3.2. Company Commitments

Member companies contribute resources for the project as per the following table.

Table 2. Work Effort
Company Location Commitment Participant(s) Participant Email



3.3. Effort Summary

Table 3. Required Effort (person-days)
WP Project Members Service Provider Total














Table 4. Resource Check
Project Members Service Provider Total







3.3.1. Budget

This section details the budget required by the project to e.g. pay service providers and the funds to be provided by ASAM. The limits are determined as a factor of the total required work effort. The corresponding effort is allocated a fixed price of 700 EUR per person-day.

Table 5. Funding limits

Project Type


New, major, minor or revision standard development project


Study project


Concept project


Table 6. Funds required for Service Providers
Task Description Effort
[€700 / day]



17,500 EUR



17,500 EUR

4. Project Plan

4.1. Timeline

The work packages shall be carried out as per the following time schedule:

Figure 2. Project plan timeline.

4.2. Deliverables

At the end of the project, the project group will hand over the following deliverables to ASAM:

  1. Refined Set of Use Cases for Scenario-based Testing
    Based on the use cases identified in the OpenSCENARIO 2.0 Concept project, as well as use cases from the testing and test automation domain, a refined set of use cases covering the area of scenario-based testing will be derived.

  2. Set of Reference Workflows for Scenario-based Testing
    Documentation detailing a set of reference workflows for scenario-based testing, covering the use cases identified; as such, the set will contain alternative workflows and will be pitched at an abstraction level that allows easy mapping to concrete workflows.

  3. Gap Analysis and Recommendations on Further Actions
    Documentation detailing the gap analysis on missing parts in the standardisation landscape to support the reference workflows, and derived recommendations on further standardisation activities deemed necessary. This will include standard extensions or new standardisation projects, as well as the underlying requirements that should be met by those projects.

4.3. Review Process

The following quality assurance measures shall be carried out by the project:

  • Project member review

  • Public review

  • Reference implementation