[NEW] Benchmark

Even after the end of the contest, you may test the performance of your model and compare it with that of other users. We have also prepared new benchmarking datasets to be released after EMSLIBS 2019.



One of the most frequent applications of LIBS is material identification. Since most of these tasks are carried out against a material library, they can be regarded as classification problems. Hence, the aim of this competition is to find a robust classification algorithm capable of dealing with challenging datasets.

Time schedule

  • Start Date: April 8, 2019
  • End Date: July 31, 2019

The last considered submission is before 23:59:59 on July 31, 2019.

Timezone: UTC+2 (Czech Republic, summer time)

  • Release of results: August 31, 2019
  • Presentation at EMSLIBS 2019: September 8-13, 2019


Supplied with a sufficient amount of data, modern machine learning methods can accurately learn the distribution of LIBS spectra. Therefore, classifying samples (or spectra) drawn from the same probability distribution is generally highly successful and poses no real challenge. In real-life applications, however, only a limited set of samples is available for training, while the classification targets a wider range of samples: mineral samples such as hematites (e.g., several hematite samples may be collected and used to build a classification model that is later used to identify a wider range of hematite samples), metallic materials contaminated with various pollutants, etc. The dataset used in this competition simulates these conditions.
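One practical consequence of this sample-level shift is that validation should hold out whole samples rather than individual spectra; otherwise the estimated accuracy will be optimistic. Below is a minimal, hypothetical sketch of such a sample-wise split using scikit-learn's GroupKFold, with random arrays standing in for real LIBS spectra (the array shapes, sample IDs, and classifier choice are illustrative assumptions, not the competition's data format):

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 60 spectra of 100 channels each,
# drawn from 6 physical samples (10 spectra per sample).
X = rng.normal(size=(60, 100))
sample_id = np.repeat(np.arange(6), 10)  # which physical sample a spectrum came from
y = sample_id % 2                        # illustrative class label per spectrum

# Split by sample, so no sample contributes spectra to both train and test.
accs = []
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=sample_id):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))

print(f"sample-wise CV accuracy: {np.mean(accs):.2f}")
```

Because every spectrum from a given sample lands in the same fold, the cross-validation score reflects performance on samples the model has never seen, which mirrors the train/test setup of this competition.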


The goal of the competition is to correctly classify the test dataset with the highest possible accuracy.
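Accuracy here means the fraction of test spectra assigned the correct class label. A small sketch of the computation, using made-up labels (the twelve competition classes are simply represented by integers for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Hypothetical true and predicted class labels for 8 test spectra.
y_true = np.array([3, 1, 1, 7, 0, 5, 5, 2])
y_pred = np.array([3, 1, 4, 7, 0, 5, 2, 2])

# Accuracy = number of correct predictions / total number of spectra.
acc = accuracy_score(y_true, y_pred)
print(acc)  # 6 of 8 correct -> 0.75
```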


The data for both training and testing the constructed classification models will be provided by us. There are 12 classes in total, represented by 138 samples; the number of samples varies among classes. Every sample was measured under the same conditions. The samples are OREAS certified soil samples cast into gypsum for more convenient handling.

For a detailed description of the dataset and access to it, select the Data option in the menu (accessible after login).


  • The contestants are free to form groups as long as they register as such, i.e., each group should participate using a single account.

  • Collaboration among separate groups is discouraged.

  • The contestants are free to process the spectra as they see fit.

In addition to the behaviors outlined by the official competition rules, "forbidden behavior" encompasses any attempt to gain an edge in accuracy by using information that is outside of the provided dataset, or an attempt to use the provided information in a way that is not intended. Examples of forbidden behavior include (but are not limited to):

  • Attempting to use datasets and references beyond those made available by the competition

  • Attempting to abuse the competition infrastructure to gain an edge

Please note that we reserve the right to disqualify any contributions demonstrating suspicious behavior.

Contest evaluation

The best-performing classification model will be presented at EMSLIBS 2019. After the conference, a paper will be published comparing the approaches and results of the three best participants (groups).

Privacy of participants

During the contest, the names of participating groups are hidden. At the end of the contest, you can choose whether to reveal your group's name or remain anonymous.