WP DM-Ontology - Develop ontology to establish relations between entities
* Iain Whiteside (FiveAI) can provide a presentation on practical usage of their ontology
* The UK has developed a standard (or, more precisely, a PAS) called PAS 1883 for a driving ontology (Siddhartha could help here)
* possible input as well from Steven and Florian Bock (Audi)
==== Usage and Pragmatics
===== Test definition
How do we test scenarios?
Clarification of the boundaries between test cases and scenarios
Outputs:
- Workflow guidelines (e.g. when integrated with a scenario vs. defined separately)
In this section we'll try to detail *how a scenario should be tested*.
As a starting point, let's clearly and briefly define what a _Test Scenario_ and a _Test Case_ are: the former answers the question *What is to be tested?*, while the latter answers *How is it to be tested?*.
A _Test Scenario_ gives an idea of what needs to be tested and provides some high-level information and a small set of variables/constraints/requirements to understand, but not fully specify, the FUT; its aim is to ensure that the end-to-end functioning of the software works as intended.
As an example, we can think of the AEB CCRb (Car-to-Car Rear Braking) functionality, where a target car precedes the EGO car on a straight road, driving in the same lane and in the same direction; at a given moment the target car brakes hard, coming to a full stop; the EGO car must be able to detect and react to this condition, slowing down and possibly avoiding contact, without changing lane.
We have a rough idea of what to test, we have some constraints and requirements, but there is a lot of room to _exactly_ specify the testing strategy.
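To make the distinction concrete, a _Test Scenario_ such as AEB CCRb could be captured as a small data structure holding only the high-level constraints. This is a hypothetical sketch in Python; the class and field names are illustrative and not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestScenario:
    """High-level description: captures *what* to test, not *how*."""
    name: str
    description: str
    # Loose constraints that every derived Test Case must respect.
    constraints: dict = field(default_factory=dict)

# AEB CCRb: a preceding target car brakes hard in front of the EGO car.
aeb_ccrb = TestScenario(
    name="AEB_CCRb",
    description="Target car preceding EGO on a straight road brakes to a "
                "full stop; EGO must detect and react without changing lane.",
    constraints={
        "road": "straight",
        "same_lane": True,
        "target_action": "hard_brake_to_stop",
        "ego_reaction": "slow_down_no_lane_change",
    },
)
```

Note that nothing here fixes speeds, distances, or deceleration values: those belong to the _Test Cases_ derived from the scenario.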
A _Test Case_ is the set of positive and negative execution steps, together with a detailed specification of variables/constraints/requirements, with which a test engineer can determine whether the FUT functions according to the customer's requirements; several _Test Cases_ can (and should...) be derived from the same _Test Scenario_ in order to ensure that the FUT meets the requirements across a wide range of specializations.
Back to our AEB CCRb example: in a specific _Test Case_ we must detail the initial EGO and target speeds, the initial distance between the two cars, the lateral shift, when the target starts to brake, the target deceleration value, whether it is an impact-mitigation or an impact-avoidance case, and so on.
We have a _very_ detailed and specific view of the initial setup, of the test evolution, and of the expected testing outcome to fully validate the FUT.
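The one-to-many relation between a scenario and its test cases can be sketched by enumerating concrete parameter combinations. Again a hypothetical Python sketch; the parameter names and values are illustrative, not taken from any test protocol:

```python
from itertools import product

# Illustrative concrete parameter grid for the AEB CCRb scenario.
ego_speeds_kmh = [30, 50]
target_decels_ms2 = [4.0, 6.0]
initial_gaps_m = [12, 40]

test_cases = [
    {
        "scenario": "AEB_CCRb",
        "ego_speed_kmh": v,
        "target_decel_ms2": d,
        "initial_gap_m": g,
        # Illustrative rule: a large initial gap is expected to allow
        # full avoidance, a short one only impact mitigation.
        "expected": "avoidance" if g >= 40 else "mitigation",
    }
    for v, d, g in product(ego_speeds_kmh, target_decels_ms2, initial_gaps_m)
]

print(len(test_cases))  # 2 * 2 * 2 = 8 Test Cases from one Test Scenario
```

Each dictionary fully pins down one executable _Test Case_, while the shared `"scenario"` key ties all of them back to the single _Test Scenario_ they specialize.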
[.text-center]
image::../images/from_test_scenario_to_test_cases.jpg[Test Scenario vs Test Case, 600]
.Test Scenario vs Test Case
|===
|*Test Scenario*|*Test Case*
|A one-liner, possibly associated with multiple _Test Cases_|A name, some pre-conditions, test steps, expected results and post-conditions
|Guides a user on _What to test_|Guides a user on _How to test_
|Tests the end-to-end functionality of a software application|Validates a _Test Scenario_ by executing a set of steps
|Is derived from a _Use Case_|Is derived from a _Test Scenario_
|Consists of high-level actions|Consists of low-level actions
|Easy to maintain, due to the high-level design|Hard to maintain, due to the heavy specialization
|Takes less time to define than a _Test Case_|Takes more time to define than a _Test Scenario_