Project Number

P2020-12

Domain

Simulation

Relevant Standard

none

Project Name

ASAM OpenLABEL BS 1.0.0

Project Type

Standard Development Project

Start Date

01.12.2020

End Date

30.10.2021

ASAM Funds

t.b.d. €

Executive Summary

The OpenLABEL Standard Development Project will develop a standard based on the OpenLABEL concept paper. This standard will cover object labeling topics: labeling methodology, labeling structure and file format. The OpenLABEL standard project will be closely coupled with the ontology project, as the object descriptions will be delivered by the ontology project. The OpenLABEL standard will be developed using the OpenXOntology Core Domain Model and will be accompanied by a user guide.

In addition to object labeling, the OpenLABEL project will also cover scenario labeling; here the coupling will be with the ontology project and the upcoming OpenSCENARIO project.

In the end the standard and the user guide will cover:

  • labeling methodology

    • What are different labeling methods and how shall they be applied?

    • Examples for labeling methodologies are semantic segmentation or 2D Bounding boxes.

  • labeling structure (including relations)

    • How do the different label classes relate to each other?

    • The classes are the classifiers of the labels, e.g. "car" or "pedestrian".

  • file format and structure definitions

    • How are the labels stored, with which information?

  • scenario labeling

    • The labeling of scenarios is special in some respects, because some information might not be extractable from single frames.

    • Due to their importance for ADAS and AD validation and testing, scenario labels are discussed explicitly as their own topic.

    • labels derived from object labels

    • abstract labeling of scenarios

In the OpenLABEL concept paper, the following has been achieved:

  • A semantic description of the most important labeling methods.

    • I.e. how to label what?

  • A suggestion of a specific file format and its characteristics

  • An outline of specific requirements for class relations and their handling for labeling

    • A taxonomy and class relations will be mainly taken from the OpenXOntology project

    • A continuous discussion with OpenXOntology is key

    • Specific classes and attributes that are special for labeling, and thus need to be provided by OpenLABEL to OpenXOntology, include for example "occlusion" (how is it labeled, what levels exist?) or "truncation" (the cut-off of objects at the edge of a frame or a scan)

A special focus of this standard project is to build upon the aforementioned points as follows:

  • Review of the labeling methods and putting them into the context of existing labeling standards or guidelines

  • Complementing the list of labeling methods and their description

  • Refinement of the labeling standard with regard to market needs.

  • Continue and intensify the discussion with OpenXOntology on the specific needs of OpenLABEL

1. Overview

1.1. Motivation

This is a proposal for the development of OpenLABEL, a new standard for labeling raw data generated by vehicles equipped with sensors that enable any SAE level of autonomy >= 2, for the training and validation of machine learning models. In addition to the labeling of datasets for machine learning, the OpenLABEL standard will also provide a scenario labeling methodology.

From working with different customers, a significant fragmentation emerged in the way each individual organization categorizes and describes the objects populating the driving environment. Such categorizations and descriptions are the fundamental building block of any Autonomous Driving System’s (ADS) perception stack, since it is through them that an ADS comes to a basic understanding of its surroundings, including the entities present and some aspects of their behavior. Many vital driving decisions are based on this understanding.

The lack of a common Labeling standard in the industry is the root cause of several different issues:

  • Hampered Vehicle2X interaction: the different descriptions/understandings of the surroundings may cause casualties in complex situations involving two or more different ADSs. OpenLABEL could support filling gaps in existing V2X standards such as ITS G5.

  • Precluded sharing: It is highly difficult, if not impossible, to share data across organizations adopting different labeling taxonomies and specifications.

  • Lowered annotation quality: Each individual labeling task requires ad-hoc training and even the development of custom software features, which translates into a higher probability of errors and thus a threat to safety.

  • Deprecation of old labels: Long-term ADS development implies changes in the quantity and richness of the labels to be produced, considering the evolution of driving scenes, new sensors and scenarios. As a consequence, a flexible descriptive language is required to absorb future extensions/modifications of labels and to guarantee backward compatibility.

In sum, the absence of a labeling standard such as OpenLABEL is ultimately a significant safety threat for all road users surrounding any kind of vehicle operated in autonomous or semi-autonomous (SAE level >= 2) mode. OpenLABEL's objective is to increase overall operational safety by providing a language that allows for the encoding of a common baseline understanding of the driving environment for any ADS.

OpenLABEL project outcomes will include:

  • a list of classes of interest (the labeling taxonomy); this taxonomy will be created using the MVP from the OpenXOntology Project and in alignment with the Domain Model of OpenSCENARIO 2.0,

  • the structure underlying the relations among classes,

  • the definitions for each one of the classes of interest together with examples or images and plausible class attributes,

  • the labeling specifications: a set of instructions detailing the way each class should be labeled with respect to each type of annotation, including explicit directives to treat particular instances of critical labeling situations (occlusions, associations, etc.).

  • OpenLABEL will include the designation of a suitable data format that allows for an effective representation, storage and exchange of the generated labels.

  • Scenario labels for derivable and non-derivable labels.

  • A user guide on how to use and apply OpenLABEL in the development process.

1.2. Use Cases

Use cases in the context of ASAM standards describe the external behavior of the standardized system, i.e. the interaction of the system with a user or with another system. The description of use cases is particularly useful for explaining the motivation for new standards, major version development projects or the addition of new features in minor version development projects.

ASAM subdivides use cases into three tiers, where each lower level is a refinement of its immediate higher level.

  • Business use case: Describes an economic advantage, company need, process, method or element of a larger tool chain that involves many people of a company or multiple companies in a customer supplier relationship. Example: ECU calibration and measurement.

  • End user use case: Describes a need, process, method or element of a tool chain that is handled by one person while they carry out specific tasks within a business use case. Example: Start measuring data on an ECU.

  • Technical use case: Describes a technical necessity, that is required for the operation and interoperability of technical systems, such as tools, test systems or application software, to support the tasks of end user use cases.

This can be expanded on during a project’s development.

1.2.1. Technical use cases

Type

Technical Use-Case

Title

ML model Benchmarking

Description

Ultimately, labeled data is mostly used for the purpose of ML model training and validation. A unified labeling standard makes it possible to train and validate models on a shared description of world entities and to benchmark them against a common baseline.

Actors

- AV perception developers
- regulators
- policy makers
- local and national authorities
- insurances

Notes

n.a.

Type

Technical Use-Case

Title

Machine Learning Model Validation / Ground truth for model validation

Description

To train a machine learning model, a lot of labeled data is necessary. After training, the model must be validated; for this purpose, new data including ground truth is necessary. With standardized labels, external/new data can be used to validate the model or to improve validation and training.

Actors

- ML Engineers
- Validation Engineers
- local and national authorities
- homologation institutions

Notes

n.a.

Type

Technical Use Case

Title

Sensor Data Annotation Review

Description

A standardized format for sensor data and sensor data annotations would help in developing standard tools for visualizing the data and reviewing the annotations. This would also permit outsourcing certain tasks within the training and test dataset creation pipeline for autonomous driving.

Further aspects: annotate data from different sensor types (e.g. lidar, camera, radar, ultrasonic) and consider the time behavior of the different sensor types.

Actors

- Environment Perception Developers
- Data Managers

Notes

contributed by Volker Schomerus @ VW

Type

Business Use Case

Title

Reduce human learning effort for Sensor Data Annotation

Description

With a standardized format and standard sets of classes for sensor data annotation, the ordering process for sensor data annotation services would be easier and the risk of differing understandings of annotation specifications could be reduced. This also reduces the time a labeler needs to learn how the labeling works.

  • enable higher quality labeling

  • common annotation structure

  • everyone follows only one guideline

Actors

- Environment Perception Developers
- Data Managers
- Annotation Service Providers
- Tool Developers
- Labeler

Notes

contributed by Volker Schomerus @ VW

Type

End User Use Case

Title

Reduce tool implementation effort for Sensor Data Annotation

Description

With a standardized format and standard sets of classes for sensor data annotation, the ordering process for sensor data annotation services would be easier and the risk of differing understandings of annotation specifications could be reduced. This also has an impact on tool providers, who can develop annotation tools for one labeling standard instead of having to support many different labeling structures and conventions. This will reduce costs for the whole workflow.

  • enable higher quality labeling

  • common annotation structure

  • everyone follows only one guideline

Actors

- Environment Perception Developers
- Data Managers
- Annotation Service Providers
- Tool Developers
- Labeler

Notes

contributed by Volker Schomerus @ VW

Type

Technical Use-Case

Title

Metadata labelling, Multi-sensor labeling of objects and actions

Description

Objects and actions of a scene/frame need to be defined as entities, with intrinsic properties (e.g. type, name, numerical properties) and, in addition, with projected features that define how they are projected/measured by different sensing devices (e.g. cameras, lasers, GPS, etc.). I want to be able to label data with additional information about the data, e.g. data owner, sensor type & version, labeling algorithm version. Ideally, metadata labeling would be extensible so that I can add whatever labels I need but still be queryable in exactly the same way that I would query non-metadata labels. The labeling data format needs to manage such levels of information (intrinsic, measured) and allocate descriptions of the timestamps and synchronization between sensing devices in a single payload. As a consequence, labels can be produced at different levels, by different teams and in different time periods, for instance aggregating content from newly labeled streams as they are produced.

* add metadata fields in the annotations; the keys and fields can be defined in the project (e.g. define schemas)

Actors

- Environment Perception Developers
- Data Managers
- Annotation Service Providers
- useful for the whole workflow

Notes

merged use case contributed by Mike Freeman @ Warwick and Marcos Nieto (mnieto@vicomtech.org)

Type

Technical Use-Case

Title

Multi-type labeling of objects and semantic properties

Description

Different ADS development use cases require different types of labels to be produced (e.g. pixel-wise for semantic segmentation, 3D polygons for lane sensing, cuboids for obstacle detection, 2D polylines for pedestrian analysis, etc.). The labeling of geometric entities also requires a data format that enables nested properties (e.g. visibility level, IDs, confidence values, tokens). Last but not least, a single label payload of a scene may contain objective data describing objects, but also semantic concepts related to actions carried out by objects, events triggering sub-scenes, or relations between objects. The semantic level of the scene requires the existence of a governing ontology such that labels can point to concepts for further semantic consumption of the annotation files.

Actors

Environment Perception Developers, Data Managers, Annotation Service Providers.

Notes

labels to cover:
- objects
- environmental conditions
- actions
- events
- relations

This has to be aligned with the ontology.
Contributed by Marcos Nieto (mnieto@vicomtech.org)

1.2.2. Business use cases

Type

Business Use Case

Title

Dataset Sharing

Description

Sharing labeled datasets effectively, and in a way that guarantees their utility across different organizations, can only be achieved when such datasets are annotated following a set of standardized labels. Many datasets are already available; a standard will enable industry and others to share and use them more easily. A standardized format (classes, data structures, etc.) for labels on sensor data (2D/3D objects, semantic segmentation, etc.) would also allow extending training and test datasets for environment perception for autonomous driving.

Actors

- Academia
- industry

Notes

merged with use case from Volker Schomerus @ VW

Type

Business Use-Case

Title

Tool development at Label tool-provider

Description

Currently, labeling projects differ greatly in terms of their requirements (e.g. how many label values a single pixel can have assigned for segmentation). Often, tool derivatives or new features need to be implemented. This can cause bugs, increases development time and often contradicts the requirements of other labeling projects. By using the standard,
(1) cost savings are generated for projects following the standard, because the tools can get features optimized for the standard, and
(2) automated quality assurance could be implemented. Current approaches for automated QA are resource-heavy in engineering terms and differ from project to project.

Actors

Label tool-provider

Notes

merged with the reducing effort on the tool provider side (1. annotation, 2. quality assurance, 3. review of the labels, 4.project definition) Contributed by Tim Rädsch @ understand.ai

Type

Business Use-Case

Title

Scenario labels to efficiently search scenario catalogues

Description

Current scenario catalogues contain scenarios at different abstraction levels, with different states of completeness and different target use cases. There is no standard labeling mechanism to annotate such scenarios with unique and unambiguous labels so that users can quickly search scenario catalogues for a specific scenario with specific characteristics. By using the standard,

(1) scenario catalogues from different companies, working groups and people can be searched using the same labels and are therefore reusable/exchangeable, and
(2) new scenarios can be directly annotated with the corresponding labels.

Actors

Scenario creators, scenarios users, function developers.

Notes

merge with the warwick use case Contributed by Florian Bock (florian1.bock@audi.de)

Type

Business Use-Case

Title

Cascading labeling guidelines to the label provider

Description

After finalizing the labeling guidelines together with the customer, the labelers working on the project need to understand and apply the labeling guidelines. Usually, labelers are trained by their team lead and by technical systems, which cascade the information of the labeling guidelines and help with answering questions and feedback. By using the standard, (1) inconsistencies (both missing and contradictory information) in the labeling guidelines (HOW and WHAT to annotate) will be avoided, (2) onboarding times for labelers and team leads can be reduced, since the standard should not change that often, and (3) understanding should be more unified if the standard is used in multiple projects.

Actors

- Label provider
- Team lead of labelers
- Labeler

Notes

merge with reducing effort on the human side, every party in the workflow can follow the standard Contributed by Tim Rädsch @ understand.ai

Type

Business Use-Case

Title

Ordering labeled data as OEM from supplier

Description

A person has a budget for ordering labeled data; the standard is especially useful when this person orders labeled data for the first time. The person assigns the labeling task to a label provider. For the questions of HOW to annotate and WHAT to annotate, the new labeling standard will be used. By using the standard,

(1) Inconsistencies (both missing information and contradictory information) in the labeling guidelines (HOW and WHAT to annotate) will be avoided.
(2) iteration cycles to create the labeling guidelines will be reduced.
(3) iteration cycles to review the quality will be reduced.

The time to delivery of the annotations is shortened because of (1), (2) and (3).

Actors

Person at company department (orders the data). Label provider.

Notes

Contributed by Tim Rädsch @ understand.ai style guide for labeling including rules (do’s and dont’s) + examples

1.2.3. End user use cases

Type

End-User Use-Case

Title

Using labels to select scenarios to test an operational domain

Description

As a test engineer, I want to be able to identify a set of scenarios from a scenario database using a set of labels that I choose, in order to define the operational domain that I want to test for.

I want to select data at different abstraction levels, e.g. all types of roundabouts in the rain, or 3-entrance roundabouts in drizzle, and also by specifying values, e.g. cars less than 1200 mm high.

* Scenario Labeling
* create metadata labels for data
* make scenarios searchable and comparable

Actors

Test Engineer using scenarios to test ADS

Notes

contributed by Mike Freeman @ Warwick

1.3. Requirements

The OpenLABEL Standard Development Project builds on the concepts from the OpenLABEL Concept project, so that the standard can fulfill the following requirements:

Table 1. general requirements

general requirement

OpenLABEL shall describe the methodology for labeling objects and scenarios, based on the defined ontology. This labeling should work on both real and synthetic data.

general requirement

OpenLABEL shall define the required information to identify and label objects and scenarios

general requirement

OpenLABEL shall provide methods how to label objects

general requirement

The data format provided by OpenLABEL to label objects and scenarios should be independent from the data source

general requirement

OpenLABEL shall get the object/terminology/label definitions from the ontology project

general requirement

Data format and specification should account for and enable the definition of objects, events, relations, actions, intentions, subject/predicate/objects triplets (SPO) and other entities or properties allowing for a machine and human readable knowledge representation.

Table 2. technical requirement

technical requirement

OpenLABEL shall have the capability to store metadata and labels for data, independent of the source

technical requirement

OpenLABEL should support annotation of data from different sources (extending and sharing datasets)

technical requirement

The OpenLABEL annotation format must be quick to serialize

technical requirement

OpenLABEL should provide metrics for quality assurance

technical requirement

The OpenLABEL data format needs to make it possible to use different labeling methods and to assign relations (1:n) to labeled objects; the possibility to add actions and intentions as labels to an object will further enhance OpenLABEL

Table 3. End-user requirements

End-user requirement

The OpenLABEL format needs to be human readable and easy to understand

End-user requirement

The OpenLABEL User guide shall support the user in understanding OpenLABEL to reduce learning effort.

End-user requirement

OpenLABEL Documentation should help the user to measure the quality of the labeled dataset

1.4. Relations to Other Standards or Organizations

  • Relation to the ASAM OpenXOntology Project

  • Relation to the ASAM OSI Project

  • Relation to OpenDRIVE

  • Relation to OpenSCENARIO

  • Relation to BSI PAS 1883

  • Relation to OpenODD

2. Technical Content

The goal of the proposed project is to create the first OpenLABEL standard for data labeling, based on the OpenLABEL concept paper, fulfilling the use cases detailed above.

The OpenLABEL Standard Development Project will also be the first standard project to use the OpenXOntology Domain Model and will therefore be closely coupled with the OpenXOntology Project, to further improve and develop the modeling approach and the content of the domain model.

The OpenLABEL Standard Development Project has the following content. The work in the project will be separated into different work packages and sub work packages:

WP

Title

short description

1.

User guide

The User guide will help future users of OpenLABEL to apply the standard for their use cases. The User guide will be accompanied by examples

1.1

Usage and Pragmatics group

The usage and pragmatics group will test the OpenLABEL Spec on defined examples and use cases and provide feedback to the other workpackages

1.2

Writing the User guide

This sub work package is about writing the user guide; this will be supported by a service provider

2.

Specification Harmonization

The main task of this work package is to ensure the alignment of OpenLABEL with other OpenX Standards

2.1

Aligned Terminology with OpenXOntology, OpenSCENARIO 2.0 and OpenODD

This sub working group shall create an aligned terminology for the OpenLABEL Project

2.2

Add Terminology into the OpenXOntology Application Model of OpenLABEL

Apply the Terminology to an Application Model for OpenLABEL provided by the OpenXOntology

3.

Object + Scene Labeling

The Object and Scene Labeling working group will be tasked to create the specification for the labeling of objects identified in a scene (point in time)

3.1

Object Labeling

This sub working group is focused on describing how single objects can be labeled

3.2

Scene Labeling

This sub working group is focused on how the labeled objects are labeled in the context of a scene. This work package has several perspectives: conditional labels, event labels, action labels, relation labels

3.2.1

Condition labeling in a scene

defining the labels to represent conditions in a scene

3.2.2

Event labeling in a scene

define the labels to represent events that happen in a scene, including the timeline that led to the event

3.2.3

Action labeling in a scene

define the labels to represent actions that happen in this scene, including the object that executes this action and/or is affected by this action

3.2.4

Relation labeling in a scene

define how relations between any labels can be expressed, e.g. between two objects (bicycle and cyclist)

4.

Scenario Labeling (adding metadata to scenario files)

The scenario labeling working group will define the scenario labels on a meta level. This will include labels that can be derived from the content of the scenario as well as labels which are not derivable.

5.

Data Format

The working group "Data Format" will create the JSON format based on the input given in the concept paper and provided by the specification working groups (2-4)

6.

Standard Documentation

This work package has close interaction with all other work packages and is responsible for creating the final standard document. This work package will be mainly executed by a service provider

2.1. User guide "how to label"

In the project, the different labeling methodologies will be documented, and the experience of the attending project members will help to create a guide for the application of OpenLABEL.

The OpenLABEL User Guide will cover:

  • labeling methodology

    • What are different labeling methods and how shall they be applied?

    • Examples for labeling methodologies are semantic segmentation or 2D Bounding boxes.

  • labeling structure (including relations)

    • How do the different label classes relate to each other?

    • The classes are the classifiers of the labels, e.g. "car" or "pedestrian".

  • file format and structure definitions

    • How are the labels stored, with which information?

  • scenario labeling

    • The labeling of scenarios is special in some respects, because some information might not be extractable from single frames.

    • Due to their importance for ADAS and AD validation and testing, scenario labels are discussed explicitly as their own topic.

    • labels derived from object labels

    • abstract labeling of scenarios

The work package "User Guide" has two sub working groups:

  1. Usage and Pragmatics: This working group will use the OpenLABEL specification and provide feedback to the other working groups, in terms of improvement suggestions and by pointing out gaps, contradictions or limitations of the specification.

  2. Documentation User Guide: This work package writes the user guide, based on the input from the OpenLABEL concept paper and the feedback from the other working groups, in particular from the "Usage and Pragmatics" group.

2.2. Specification Harmonization

This work package is responsible for the alignment of OpenLABEL with the other standards within the ASAM Simulation Domain. This means in particular a close link to OpenXOntology and to the Domain Model group of the OpenSCENARIO 2.0 project.

This work package has two major tasks:

  1. Create a common terminology for the OpenLABEL project; this terminology needs to be aligned with the mentioned standards. As a basis, the MVP from the OpenXOntology project will be used.

  2. Apply the agreed terminology to the Application Model for OpenLABEL provided by the OpenXOntology.

2.3. Object and Scene Labeling

Often, the frames coming from sensors such as cameras depict complex situations in the surroundings of the vehicle. Objects appearing in a scene are seldom fully visible. Moreover, articulated objects, intersected objects or objects in unusual configurations can pose a threat to annotation quality and can be annotated in many different ways. Labeling specifications are a set of detailed instructions that guide the labeling process by giving precise directions on how to treat the various cases detailed above, for example how to label an articulated truck partially occluded by a car.

  • Develop general instructions for each class and attribute [at a meaningful level of granularity]

  • Map cases, objects and attributes that require special ad-hoc labeling specs and develop them

  • Include visuals, examples and any other asset that can help disambiguate and clarify the instructions

  • Consider the time behavior of different sensor types for the annotation

  • Make the labeled data mergeable to extend existing datasets

  • Enable Quality assessments

2.3.1. Object labeling

In the specification design the group has to consider how to label objects in the OpenLABEL standard. The requirements from this workgroup will be shared with the OpenXOntology project. In addition the project group will align the OpenLABEL Object labels with the OpenSCENARIO 2.0 Domain Model, to ensure compatibility.

2.3.2. Scene labeling

In the specification design, the group has to consider how to label scenes (a point in time) in the OpenLABEL standard. In a scene the labeled objects need to be placed, and there are further entities that can be labeled. These include:

  1. Events: An event can be triggered by an object or affect a labeled object; usually an event is present over several consecutive scenes.

  2. Actions/Intentions: Similar to events, actions can either be triggered or lead to events. Usually an action can be labeled over several consecutive scenes, as it builds up.

  3. Conditions: In a scene it might be necessary to label conditions.

The requirements from this workgroup will be shared with the OpenXOntology project. In addition the project group will align the OpenLABEL scene labels with the OpenSCENARIO 2.0 Domain Model, to ensure compatibility.
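As a rough illustration of scene labeling, the entities above (objects, actions, relations) could be sketched in JSON. All field names in the following sketch are assumptions for illustration only; the actual schema is defined by the "Data Format" work package.

```python
import json

# Hypothetical sketch of scene-level labels covering objects, actions
# and relations (SPO triplets). All field names are illustrative
# assumptions, not the final OpenLABEL schema.
scene = {
    "frame": 42,
    "objects": {
        "1": {"type": "bicycle"},
        "2": {"type": "person", "attributes": {"role": "cyclist"}},
    },
    "actions": {
        # an action spans several consecutive scenes (frames 40-55)
        "10": {"type": "riding", "actor": "2", "frame_interval": [40, 55]},
    },
    "relations": {
        # subject/predicate/object triplet: person "2" rides bicycle "1"
        "20": {"type": "rides", "subject": "2", "object": "1"},
    },
}

payload = json.dumps(scene, indent=2)
print(payload)
```

Keeping objects, actions and relations in separate maps keyed by ID makes the 1:n relations between labels explicit, as required by the technical requirements.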

2.4. Scenario labeling

In the specification design, the group has to consider how to label scenarios in the OpenLABEL standard. The requirements from this workgroup will be shared with the OpenXOntology project. In addition, the project group will align the OpenLABEL scenario labels with the OpenSCENARIO 2.0 Domain Model to ensure compatibility. In the concept project, the scenario labels were based on BSI PAS 1883.
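The distinction between derivable and non-derivable scenario labels could look as follows in a JSON-like sketch; all names are hypothetical and serve only to illustrate the idea:

```python
import json

# Hypothetical scenario-level metadata labels. "derivable" labels can be
# extracted from the scenario content itself; "non_derivable" labels must
# be supplied as extra metadata. All field names are assumptions.
scenario_labels = {
    "scenario": "urban_roundabout_03",
    "derivable": {
        "road_type": "roundabout",
        "entrances": 3,
        "actors": ["car", "cyclist"],
    },
    "non_derivable": {
        "weather": "drizzle",
        "country": "DE",
    },
}

print(json.dumps(scenario_labels, indent=2))
```

Labels of this kind would make scenario catalogues searchable by uniform criteria, as described in the business use case on scenario catalogues.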

2.5. Labeling Data Format

In the OpenLABEL Standard Development Project, the group will create the annotation format based on the format proposed by the OpenLABEL Concept Project. Independent of the format, the concept paper also contains a first version of a possible labeling structure, e.g.:

metadata / header
    frame
    source
label
    type = object/scenario/...
    name (ontology link to description)
    relation
    label method
    geometry (e.g. 3D bounding box)
    dynamic = yes/no
        action / intention
        ...

General requirements for the format and the structure are:

  • make the labeled data mergeable (without converter in between) to extend existing datasets

  • make datasets comparable

  • easy to understand

  • human readable
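The structure outlined above could, for instance, be rendered in JSON. The following sketch mirrors the outline; all key names and values are assumptions, not the final OpenLABEL data format:

```python
import json

# Minimal JSON rendering of the outlined labeling structure. Keys mirror
# the outline (metadata/header, label, geometry, ...); all names are
# illustrative assumptions.
annotation = {
    "metadata": {"frame": 0, "source": "camera_front"},
    "labels": [
        {
            "type": "object",
            "name": "car",  # would be an ontology link to the description
            "relations": [],
            "label_method": "3d_bounding_box",
            "geometry": {
                "center": [12.4, -1.1, 0.8],   # x, y, z
                "size": [4.5, 1.8, 1.5],       # length, width, height
                "rotation": [0.0, 0.0, 0.1],   # roll, pitch, yaw
            },
            "dynamic": True,
            "action": "driving",
        }
    ],
}

serialized = json.dumps(annotation, indent=2)
print(serialized)
```

A structure like this stays human-readable and supports merging: independently produced label lists can be concatenated under a shared metadata header without a converter in between.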

2.6. OpenLABEL Standard documentation (Service Provider)

  • The OpenLABEL Standard development project will provide a standard document based on the concept paper from the OpenLABEL Concept Project

  • In the Standard Development Project the group will also create a user guide ("how to label") / style guide for OpenLABEL

    • incl. Examples for the individual methods and concepts of OpenLABEL

3. Project Resources

3.1. Required

A breakdown of the project into individual work packages and the corresponding effort required to complete them. Effort is given in person-days.

3.1.1. Effort

The effort for sub work packages is accumulated in the main work package (e.g. 1. = 1.1 + 1.2).

WP

Title

short description

est. days

1.

User guide

The User guide will help future users of OpenLABEL to apply the standard for their use cases. The User guide will be accompanied by examples

110

1.1

Usage and Pragmatics group

The usage and pragmatics group will test the OpenLABEL Spec on defined examples and use cases and provide feedback to the other workpackages

50

1.2

Writing the User guide

This sub work package is about writing the user guide; this will be supported by a service provider

60

2.

Specification Harmonization

The main task of this work package is to ensure the alignment of OpenLABEL with other OpenX Standards

60

2.1

Aligned Terminology with OpenXOntology, OpenSCENARIO 2.0 and OpenODD

This sub working group shall create an aligned terminology for the OpenLABEL Project

30

2.2

Add Terminology into the OpenXOntology Application Model of OpenLABEL

Apply the Terminology to an Application Model for OpenLABEL provided by the OpenXOntology

30

3.

Object + Scene Labeling

The Object and Scene Labeling working group will be tasked to create the specification for the labeling of objects identified in a scene (point in time)

400

3.1

Object Labeling

This sub working group is focused on describing how single objects can be labeled

200

3.2

Scene Labeling

this sub working group is focused on how the labeled objects are being labeled in the context of a scene. There this workpackage has several perspectives: conditional labels, event labels, action labels, relation labels

200

3.2.1

Condition labeling in a scene

defining the labels to represent conditions in a scene

50

3.2.2

Event labeling in a scene

define the labels to represent events that happen in this scene, including the timeline that lead to this event

50

3.2.3

Action labeling in a scene

define the labels to represent actions that happen in this scene, including the object that executes this action and/or is affected by this action

50

3.2.4

Relation labeling in a scene

define how relation between any label can be expressed, e.g between two objects (bicycle and cyclist)

50

4.

Scenario Labeling(adding metadata to scenario files)

The scenario labeling working group will define the scenario labels on a meta level. this will include labels that can be derived from the content of the scenario as well as labels which are non derivable.

70

5.

Data Format

the working group "Data Format" will create the json format based on the input given in the concept paper and provided by the specification working group (2 - 4)

60

6.

Standard Documentation

This workpackage has a close interaction with all other work package and is responsible to create the final standard document. This workpackage will be mainly executed by a service provider

100

Table 4. Total effort
WP No. | Project member (man-days) | Service Provider (man-days) | Total (man-days)
---|---|---|---
1. | 50 | 60 | 110
1.1 | 50 | 0 | 50
1.2 | 0 | 60 | 60
2. | 60 | 0 | 60
2.1 | 30 | 0 | 30
2.2 | 30 | 0 | 30
3. | 400 | 0 | 400
3.1 | 200 | 0 | 200
3.2 | 200 | 0 | 200
4. | 70 | 0 | 70
5. | 60 | 0 | 60
6. | 0 | 100 | 100
Total | 640 | 160 | 800

3.1.2. Budget

This section details the budget the project requires, e.g. to pay service providers, and the funds to be provided by ASAM.

Table 5. Funds required for Service Providers
Task | Description | Effort | Cost (€700 / man-day)
---|---|---|---
n.a. | n.a. | n.a. | n.a.

Table 6. Funds Provided by ASAM
Amount (Euros)

n.a.

3.2. Committed

Member companies contribute resources for the project as per the following table.

Table 7. Work Effort
Company (Name, Location) | Committed Work (man-days) | Participant contact details (name, phone, email)
---|---|---
Vicomtech (Spain) | 60 | Marcos Nieto
Five AI (UK) | 25 | John Reford
understand.ai | 50 | Simon Funke, Tim Rädsch
Annotell AB | 27 | Nic Schick
Peak Solution | 15 | Alexander Hassler
TCS | 29 | Arun Prasad, Sanjay Dulepet
LiangDao GmbH | 25 | Xiaotong Liu
Dataloop | 25 | Avi Yashar
Springcloud | 22 | Younggi Song, BH Kwon, Chirakkal Vinjohn, Sehong Kok
Caliber Data Labs | 25 | Yaser Khalighi
Iasys | 25 | Puran Parekh, Shantaram Jadhav, Deepak Patil
University of Warwick | 75 | Xizhe Zhang, Siddartha Khastgir, Mike Freeman
Automotive Data of China | 40 | Qingchun Hao, Bolin Zhou
Advanced Data Controls Corp | 10 | Nakanishi Yasuyuki
Altran Deutschland | 10 | Khalifa Ramzi
ika RWTH Aachen University | 25 | Johannes Hiller
saicmotor | 25 | Yong Luo
AKKA | 25 | Dmitrij Velkin
Uniquesec | 15 | Toktam Bagheri
AI3DSENSORY | 25 | Nurul Chowdhury
Ansys Germany GmbH | 24 | Anupam Ashish, Evren Yortucboylu
Incenda AI GmbH | 20 | Marius Reuther, Felix Friedmann
Deepen AI | 25 | Nicola Croce
German Aerospace Center | 20 | Jörg Peter Schaefer

The following intellectual property will be transferred from member companies to ASAM:

Table 8. Intellectual Property
Company (Name, Location) IP Description Value (Euros)

3.3. Summary

Table 9. Summary: required work effort shall be less than or equal to committed work effort plus service provider contracts

Committed Work Effort (man-days) | Contracted to Service Providers (man-days) | Required Work Effort (man-days)
---|---|---
667 | 160 | 800

4. Project Plan

4.1. Timeline

[Figure: Roadmap OpenLABEL 1.0.0]

4.2. Deliverables

At the end of the project, the project group will hand over the following deliverables to ASAM:

Item No. | Description
---|---
1. | Specification for OpenLABEL
2. | User Guide for OpenLABEL

4.3. Review Process

The following quality assurance measures shall be carried out by the project:

  • Peer Review

  • Editorial Review

  • Project Internal Review

  • Public Review

  • Reference Implementation

  • Implementation Project

  • Validator Project

  • KickOff Workshop for Public Review