Lifelong Robotic Vision IROS 2019 Competition

Lifelong Object Recognition

The Lifelong Object Recognition Challenge is now online; please join the competition on Codalab! Give us feedback on the GitHub page or mail Qi She if you encounter any problems.

  • This challenge explores how to leverage knowledge learned from previous tasks so that it generalizes effectively to new tasks, and how to efficiently memorize previously learned tasks, making the robot behave like a human with knowledge transfer, association, and combination capabilities.

  • To the best of our knowledge, the provided lifelong object recognition dataset is the first that explicitly indicates task difficulty under the incremental setting, which fosters lifelong/continual/incremental learning in a supervised/semi-supervised manner. Different from previous instance/class-incremental tasks, difficulty-incremental learning tests a model's capability for continuous learning when faced with multiple environmental factors, such as illumination, occlusion, camera-object distances/angles, clutter, and context information in both low- and high-dynamic scenes.

[Figure: object recognition demo]

Task-specific Rules

  • The methods should be incremental: the model may only be trained on the current task, and is then tested over all previous, current, and future tasks. In the 1st round we provide 9 batches of data, each with train/validation/test splits. The core of this incremental learning setting is that you first train on the 1st batch, then the 2nd, the 3rd, and so on up to the 9th, and then use the final model to obtain the test accuracy over all encountered tasks (batches); a sketch of this protocol is given after this list. The training/validation data may only be accessed during model optimization, and any participant detected using the test data will be removed from the ranking list (after the 1st round, top-ranked participants must provide reproducible procedures).
  • We hold the competition on the Codalab website; participants submit their prediction results (object labels), which are evaluated online against our ground truth.
  • We have provided 3 models pre-trained on the provided datasets. Participants may use them and consider how to optimize on top of them.

  • Participants who achieve high-ranking results are encouraged to deliver an oral presentation in the IROS 2019 competition session and will receive an official award from IROS on site.
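
A minimal sketch of the required train-then-test protocol, in Python. Every helper here (`build_model`, `load_split`, `train_on_batch`, `predict`, `save_predictions`) is a hypothetical placeholder for your own implementation, not part of any provided code:

```python
NUM_BATCHES = 9

model = build_model()  # hypothetical: e.g., start from a provided pre-trained model

# Sequential training: while batch b is being learned, only batch b's
# train/validation splits may be accessed.
for b in range(1, NUM_BATCHES + 1):
    train_data = load_split(batch=b, split="train")
    val_data = load_split(batch=b, split="validation")
    model = train_on_batch(model, train_data, val_data)

# The final model is then evaluated on the test splits of all 9 batches,
# i.e., every task encountered during sequential training.
for b in range(1, NUM_BATCHES + 1):
    test_data = load_split(batch=b, split="test")
    save_predictions(predict(model, test_data), f"test_batch{b}.csv")
```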

OpenLORIS-Object Dataset

  • Please find the 9 batches of data, each containing train/validation/test splits, here: Baidu Pan. To obtain the password, please contact the maintainer Qi She, providing your institute, email, and the names of the participants.

Evaluation Procedure

  • The results you submit must be named "test_batch1.csv", "test_batch2.csv", …, "test_batch9.csv"; the format of each CSV file is shown below, followed by a sketch of how to produce such a file.
             file      label_predict
    0        0000      xxx
    1        0001      xxx
    ...      ...       ...
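
For concreteness, a minimal sketch of producing one of these files with pandas. The `predictions` list is made-up example data; the sample table above shows a leading index column, so the index is kept here, but check the exact convention against the evaluation server:

```python
import pandas as pd

# Hypothetical predictions for batch 1: (file id, predicted object label).
predictions = [("0000", "xxx"), ("0001", "xxx")]

df = pd.DataFrame(predictions, columns=["file", "label_predict"])
df.to_csv("test_batch1.csv")  # use index=False to drop the leading index column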

Baseline Model

Discussion Channel

FAQ

  • Do we have to train our model sequentially on the 9 provided batches?

    Yes. To obtain consistent and comparable results, you need to design a learning algorithm that trains your model sequentially over the 9 provided batches.

  • Can you give a brief introduction to training a model that meets the competition requirements?

    For example, when you train on the 1st batch, you may only access the train/validation data of the 1st batch, but you must test over the test sets of batches 1-9. Next, when you train the model on the 2nd batch, you may only access the train/validation data of the 2nd batch, although you may also keep some of the validation data from the 1st batch for learning the current model; that is, validation data may be retained across the sequential learning task (a rehearsal-style sketch is given below). Finally, you must provide a model that has been sequentially trained over all 9 batches.
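
    A minimal rehearsal-style sketch of this procedure, reusing the hypothetical `build_model`, `load_split`, and `train_on_batch` helpers from the rules section; the 100-samples-per-batch buffer size is an arbitrary illustration:

```python
import random

model = build_model()   # hypothetical
replay_buffer = []      # validation samples retained from earlier batches

for b in range(1, 10):
    train_data = load_split(batch=b, split="train")
    val_data = load_split(batch=b, split="validation")

    # Train on the current batch plus whatever validation data was kept
    # from earlier batches (only validation data may be carried forward).
    model = train_on_batch(model, train_data + replay_buffer, val_data)

    # Retain a small random subset of this batch's validation data.
    replay_buffer += random.sample(val_data, k=min(100, len(val_data)))
```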

  • What is the motivation for these training-procedure constraints?

    This scenario is common when a system is deployed in a real-world application: the model should be updatable day after day, and system memory is valuable, so old data cannot all be kept on the current system. At best we retain a coreset of the data already used, and it is non-trivial to pick out samples that summarize everything encountered so far. We constrain the learning/training procedure to approximate this real-world scenario; a sketch of one common coreset heuristic follows.
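
    For illustration, one widely used coreset heuristic is herding (popularized by iCaRL-style rehearsal methods): greedily pick the samples whose running mean best tracks the mean of all features. This is an example technique, not a required or provided component, and the feature matrix is assumed to come from your own model:

```python
import numpy as np

def herding_coreset(features: np.ndarray, k: int) -> list:
    """Greedily select k sample indices whose running mean stays closest
    to the mean of all features (the 'herding' heuristic)."""
    mean = features.mean(axis=0)
    selected = []
    running_sum = np.zeros_like(mean)
    for step in range(1, k + 1):
        # Distance between the overall mean and the mean we would get
        # after adding each candidate sample to the current selection.
        scores = np.linalg.norm(mean - (running_sum + features) / step, axis=1)
        if selected:
            scores[np.array(selected)] = np.inf  # never pick a sample twice
        idx = int(np.argmin(scores))
        selected.append(idx)
        running_sum += features[idx]
    return selected

# Example: keep a 50-sample coreset of 1000 random 256-d feature vectors.
coreset = herding_coreset(np.random.randn(1000, 256), k=50)
```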

  • Can we submit results from the provided baseline models?

    Yes, you can submit results from the provided baseline models, but as we have tested, they are not state-of-the-art. The provided models are mainly there to help you become familiar with the procedure more quickly.

  • Is there any state-of-the-art method for learning a continual-learning strategy?

License