The following code repositories contain such example submissions for the Open Development Phase. Each repo includes an ensemble of baseline AI models that was trained using fully supervised learning with 1295 human expert-annotated cases, or using semi-supervised learning with 1500 human expert or AI-annotated cases (Bosma et al., 2022). All training cases belong to the Public Training and Development Dataset, and all models were trained via 5-fold cross-validation using the exact same splits. You can use any of these repos as a template for your submission to the Open Development Phase:
Before implementing your own algorithm using any of these templates, we recommend that you first upload a GC algorithm based on an unaltered template. In the following sections, we provide a walkthrough of how to do this.
Building, Testing and Exporting Algorithm Container on Local System
If you're using Linux/macOS, please use the shell scripts (.sh files) referenced in the commands below. If you're using Windows, please use the corresponding batch scripts (.bat files).
Clone one of the template repos (see above) to your local system, with Git LFS initialized (e.g., this can be done directly using GitHub Desktop). Build and test your inference Docker container by running build.bat/build.sh, followed by test.bat/test.sh. Please note that testing also runs a build, so running the build command separately is not necessary if you are certain that everything is set up correctly. Testing will build the container and run it on the images provided in the ./test/ folder, and then check the output files (csPCa detection map + patient-level likelihood score) produced by your algorithm against those pre-computed and stored in ./test/. If all tests complete successfully, you're good to go. Export your container using export.bat/export.sh. Once complete, your container image should be ready as a single .tar.gz file.
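For reference, here is a minimal sketch of this workflow on Linux/macOS (use the .bat equivalents on Windows). The repository URL and directory name are placeholders for whichever template repo you chose above:

```sh
git lfs install                 # make sure Git LFS is set up before cloning
git clone <template-repo-url>   # placeholder: URL of the template repo you chose
cd <template-repo-directory>    # placeholder: the cloned folder

./build.sh                      # build the inference Docker container
./test.sh                       # (re)builds, runs the container on ./test/, and checks the outputs
./export.sh                     # packages the container image as a single .tar.gz file
```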
Once you have completed the algorithm creation form on grand-challenge.org, click the "Save" button. At this point, your GC algorithm has been created and you are on its homepage. Now, click the "Containers" tab on the left panel, and then the "Upload a Container" button. Select your container image (.tar.gz file) for upload. After the upload is complete, click "Save". It typically takes 20-60 minutes until your container becomes active (depending on its size). Once its status is "Active", test out your container with a sample, unprocessed training case from the Public Training and Development Dataset. You can also use the sample case provided in the ./test/ directory of the baseline algorithm template (see above).
Following these same steps, you can easily encapsulate your own AI algorithm in an inference container by altering one of our provided baseline algorithm templates. You can implement your own solution by editing the functions in ./process.py. Any additional imported packages should be added to ./requirements.txt, and any additional files and folders should be explicitly copied through commands in ./Dockerfile. To update your algorithm, simply test and export your new Docker container, and then replace the container image of the GC algorithm that you created on grand-challenge.org with the new one. Please note that your container will not have access to the internet when it is executed on grand-challenge.org, so all necessary model weights and resources must be encapsulated in your container image beforehand. You can verify this locally using the --network=none option of docker run, as sketched below.
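As a rough illustration, the command below runs an exported image with networking disabled. The image tag "picai_baseline" is a placeholder, and the /input and /output mounts follow the grand-challenge.org convention of reading inputs from /input and writing outputs to /output:

```sh
# "picai_baseline" is a placeholder image tag; replace it with the tag your build script uses.
# The mounted host paths are placeholders for your local test input/output folders.
docker run --rm --network=none \
    -v /path/to/test/input:/input:ro \
    -v /path/to/test/output:/output \
    picai_baseline
```

If this run fails because your code tries to download model weights or packages at runtime, those resources still need to be baked into the image.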
If something doesn't work for you, please feel free to post about it in the forum.
Submission to Open Development Phase - Validation and Tuning Leaderboard
Once you have your trained AI model uploaded as a fully functional GC algorithm, you're ready to make submissions to the Open Development Phase of the challenge! Navigate to the "Open Development Phase - Tuning Submission" page, select your algorithm, and click "Save" to submit. If there are no errors and evaluation completes successfully, your score will appear on the leaderboard (typically within 24 hours).
⚠️ Please double-check all rules to make sure that your submission is compliant. Invalid submissions will be removed, and teams that repeatedly violate the rules will be disqualified. Also, please try out your uploaded GC algorithm with a sample case before making a full submission to the leaderboard.
Submission to Open Development Phase - Testing Leaderboard
Once you have gone through your full AI model development and tuning cycle, and are certain that your final algorithm is trained and ready for independent testing, please complete the following steps:
Make one final submission to the Open Development Phase - Tuning Leaderboard following the steps stated above (if you haven't already). This ensures that your final model is bug-free and performs as expected.
Optionally, you can prepare a document (in PDF format; minimum 1 page; no restrictions on template or maximum length) briefly covering the model design and training strategy of your AI algorithm. Please include the names, designations, and contact details of your team members in this document. You can upload this document using the "Supplementary File" field in the submission form, or share a URL to it using the "Publication" field. If you're submitting an AI algorithm that has already been presented in one of your team's prior publications, you can directly share a URL to that publication. Please note that this step is mandatory if you want to qualify as one of the top 5 teams of the challenge and move on to the Closed Testing Phase. If your algorithm is selected among the top 5 and you have not provided this document with your submission, we will contact you for it later.
Navigate to the "Open Development Phase - Testing Submission" page, select your final AI algorithm, optionally upload your supplementary file and/or enter your publication URL, and then click "Save" to submit. Please note that each team (not each participant within a team) can submit only a single AI algorithm to the Open Development Phase - Testing Leaderboard. If multiple submissions are made, only that team's last submission before the submission deadline will be used to infer performance on the Hidden Testing Cohort.
Please send us an e-mail at anindya.shaha@radboudumc.nl and joeran.bosma@radboudumc.nl, with your team name, a list of team members, and a URL to your submitted algorithm (here is an example). We will use this e-mail thread to share updates regarding your submission and to address any issues. Additionally, if you make it to the top 5, your team will receive the prizes listed here.
Results of all submissions (that are error-free and compliant with all participation rules) will be pre-computed in a hidden setting and presented during our oral presentation at RSNA 2022. After this presentation, the results will also be made publicly available on the leaderboard.
We look forward to your submissions to the PI-CAI challenge. All the best!