- You can download the dataset and submit results starting May 11 (Fri).
- Important information about this challenge will be announced through this website. Do not forget to check the website on a regular basis!
- Dataset download and result submission are now available.
- Participants List has been released.
- 52 teams from 15 countries have registered to participate. Thank you for your participation. Do your best until the deadline of November 10!
Since roads have a great influence on people's lives, they should be maintained and managed periodically and exhaustively. However, due to a lack of financial resources, many local governments cannot conduct sufficient inspections. Some municipalities automate damage assessment using high-performance sensors, but the high cost of such sensors puts them out of reach for many others. There is therefore a need for a method that makes it easy to assess road-surface damage at low cost.
This challenge is to detect damage in road images photographed by a vehicle-mounted smartphone.
If this challenge succeeds, simple road inspections could be carried out using only smartphones, which would be extremely valuable.
The training and test data will consist of 9,053 photographs collected with smartphone cameras, hand-labeled with the presence or absence of 8 road damage categories.
A random subset of 7,240 of the images with labels will be released as the training set along with a list of the 8 categories. The remaining images will be used as the test set. The test data for this competition are not contained in the training data.
A match is defined as a prediction that satisfies both of the following:
- the prediction box has the same class label as the ground truth box, and
- the prediction bounding box has over 50% Intersection over Union (IoU) with the ground truth bounding box (Fig. 1 and Fig. 2).
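As an illustration, the match criterion above can be sketched in Python as follows (a minimal sketch, not the official evaluation code; the corner-coordinate box format and the `label`/`box` field names are assumptions):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_match(pred, gt):
    """True when the class labels agree and the boxes overlap with IoU over 50%."""
    return pred["label"] == gt["label"] and iou(pred["box"], gt["box"]) > 0.5
```

A perfectly overlapping pair of boxes has IoU 1.0, while disjoint boxes have IoU 0.0; only predictions above the 0.5 threshold with a matching label count as true positives.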
The evaluation metric for this competition is the Mean F1-Score, computed against the ground truth. The F1 score, commonly used in information retrieval, measures accuracy using the statistics precision p and recall r. Precision is the ratio of true positives (tp) to all predicted positives (tp + fp). Recall is the ratio of true positives to all actual positives (tp + fn). The F1 score is given by:

F1 = 2 * p * r / (p + r)
The F1 metric weights recall and precision equally, and a good retrieval algorithm will maximize both precision and recall simultaneously. Thus, moderately good performance on both will be favored over extremely good performance on one and poor performance on the other.
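The definitions above can be sketched as a small Python helper (an illustrative sketch, not the official scoring script; the counts are hypothetical inputs):

```python
def f1_score(tp, fp, fn):
    """F1 score from true positive, false positive, and false negative counts."""
    if tp == 0:
        # With no true positives, precision and recall are both zero.
        return 0.0
    p = tp / (tp + fp)  # precision: fraction of predictions that are correct
    r = tp / (tp + fn)  # recall: fraction of ground truth boxes that are found
    return 2 * p * r / (p + r)  # harmonic mean of precision and recall
```

Because F1 is the harmonic mean, it rewards balanced precision and recall: for example, p = r = 0.8 gives F1 = 0.8, whereas p = 0.5 with r = 1.0 gives only about 0.67.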
The team with the best evaluation result on the final day is the winner!
The following prizes are awarded to the winning teams.
- Gold Prize: $1,000
- Silver Prize: $500
- Bronze Prize: $100
This competition will continue to be hosted on this site.
- Deadline for teams to mail a letter of intent to the organizers (email@example.com): June 10, 2018
- Deadline for teams to submit the final report and solutions: November 10, 2018
- Announcement of winning teams: November 20, 2018
Late registration for teams that were delayed in submitting the letter of intent closed on July 15.
If you are interested, please sign up here.
After signing up, you can log in to the system with your account.
You can download the dataset here after you have accepted the rules.
Registration is open from May 1 to June 10, 2018.
This competition builds on the previous work listed below.