Inferior Alveolar Nerve (IAN) canal detection has been the focus of multiple recent works in several branches of maxillofacial imaging. Deep-learning-based techniques have achieved promising results in this field as well, but the small size of available 3D maxillofacial datasets has strongly limited their performance. Researchers have therefore been forced to build their own private datasets, a practice that prevents results from being reproduced and proposals from being fairly compared. We created a novel, large, publicly available maxillofacial CBCT (Cone Beam Computed Tomography) dataset, with 2D and 3D manual annotations provided by expert clinicians. Leveraging this dataset and employing deep learning techniques, we improve the state of the art in 3D mandibular canal segmentation. On this page you can download the data and the annotation tool, and access the source code that allows all the reported experiments to be reproduced exactly.
In the table below, we report some basic information about our 3D volumes.
For a complete technical description of how we handled the volumes please refer to our manuscript.
The project is open source; feel free to contribute new features.
| Property | Value |
|---|---|
| Primary dataset (dense) | 91 volumes |
| Secondary dataset (sparse) | 256 volumes |
| Max volume shape | 178, 423, 463 |
| Min volume shape | 148, 265, 312 |
| Avg volume shape | 169, 342, 370 |
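As a quick sanity check after downloading, the shape statistics in the table can be recomputed with a few lines of NumPy. The helper below is illustrative, not part of our codebase; it assumes each volume has already been loaded as a 3D array.

```python
import numpy as np

def shape_stats(volumes):
    """Return the per-axis min, max, and rounded mean shape over a list of 3D volumes."""
    shapes = np.array([v.shape for v in volumes])
    return shapes.min(axis=0), shapes.max(axis=0), shapes.mean(axis=0).round().astype(int)

# Example with two dummy volumes (with real data: one array per patient)
vols = [np.zeros((148, 265, 312)), np.zeros((178, 423, 463))]
mn, mx, avg = shape_stats(vols)
print(mn, mx, avg)
```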
An overview of the folder tree is pictured below. The split.json file specifies which patients we used in our training, validation, and test sets. Each patient folder in the SPARSE directory contains a gt_sparse.npy NumPy archive with the labels generated from the 2D panoramic view. Using the code provided in our repository, researchers can easily generate the circle-expansion annotation from this file. Code to regenerate the gt_alpha.npy dense annotation from the masks.json and planes.npy files is also included in the project repository.
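A minimal sketch of how the split file and the sparse labels can be loaded is shown below. The split.json and gt_sparse.npy files are the ones described above; the exact key names inside split.json (here assumed to be "train", "val", "test") and the helper itself are illustrative, not taken from our repository.

```python
import json
from pathlib import Path

import numpy as np

def load_sparse_labels(root, split_name="train"):
    """Load the sparse 2D-panoramic labels for every patient in a split."""
    root = Path(root)
    with open(root / "split.json") as f:
        split = json.load(f)  # assumed layout: {"train": [...], "val": [...], "test": [...]}
    labels = {}
    for patient in split[split_name]:
        # each patient folder in SPARSE holds a gt_sparse.npy archive
        labels[patient] = np.load(root / "SPARSE" / patient / "gt_sparse.npy")
    return labels
```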
The dataset is now available
You need an account to download the dataset. Please sign up! A repository with all the pre-processing code and our experiments is also available.
Two different repositories are available for this project.
If you use our dataset, please cite our works in your manuscript.