Phases and Rules


The PI-CAI: AI Study (grand challenge) takes place in two phases:

  • Closed Testing Phase (Duration: 4 months): Teams with the top 5 AI algorithms of PI-CAI will be invited to participate in this phase of the challenge. Participants must prepare Docker containers of their AI algorithms that support training and, subsequently, inference using the trained weights (similar to the STOIC2021 and NODE21 challenges). Organizers will retrain these models with large-scale data (public + sequestered training datasets), using their institutional compute resources. Once training is complete, performance will be re-evaluated on the Hidden Testing Cohort (with rigorous statistical analyses), and the top 3 winners of the PI-CAI challenge will be announced.
Rules:
  • All participants must form teams (even if the team is composed of a single participant), and each participant can only be a member of a single team.
  • Any individual participating with multiple or duplicate Grand Challenge profiles will be disqualified.
  • Anonymous participation is not allowed. To qualify for ranking on the validation/testing leaderboards, real names and affiliations [university, institute or company (if any), country] must be displayed accurately on the verified Grand Challenge profiles of all participants.
  • Members of sponsoring or organizing centers (i.e. Radboud University Medical Center, Ziekenhuis Groep Twente, University Medical Center Groningen, Norwegian University of Science and Technology) may participate in the challenge, but are not eligible for prizes or the final ranking in the Closed Testing Phase.
  • This challenge only supports the submission of fully automated methods in Docker containers. It is not possible to submit semi-automated or interactive methods.
  • All Docker containers submitted to the challenge will be run in an offline setting (i.e. they will not have access to the internet, and cannot download/upload any resources). All necessary resources (e.g. pre-trained weights) must be encapsulated in the submitted containers a priori.
  • Participants competing for prizes can use pre-trained AI models based on computer vision and/or medical imaging datasets (e.g. ImageNet, Medical Segmentation Decathlon). They can also use external datasets to train their AI algorithms. However, such data and/or models must be published under a permissive license (within 3 months of the Open Development Phase deadline) to give all other participants a fair chance at competing on equal footing. They must also clearly state the use of external data in their submission, using the algorithm name [e.g. "Prostate AI Model (trained w/ private data)"], algorithm page and/or a supporting publication/URL. For a quick overview of publicly available prostate MRI datasets, you can check out the following article: M. R. S. Sunoqrot, A. Saha, M. Hosseinzadeh, M. Elschot, H. Huisman, "Artificial Intelligence for Prostate MRI: Open Datasets, Available Applications, and Grand Challenges", European Radiology Experimental. DOI: 10.1186/s41747-022-00288-8
  • Researchers and companies, who are interested in benchmarking their institutional AI models or products, but not in competing for prizes, can freely use private or unpublished external datasets to train their AI algorithms. They must clearly state the use of external data in their submission, using the algorithm name [e.g. "Prostate AI Model (trained w/ private data)"], algorithm page and/or a supporting publication/URL. They are not obligated to publish their AI models and/or datasets, either before or at any time after submission.
  • To participate in the Closed Testing Phase as one of the top 5 teams, participants must submit a short arXiv paper on their methodology (2–3 pages) and a public/private URL to their source code on GitHub (hosted with a permissive license). We take these measures to ensure the credibility and reproducibility of all proposed solutions, and to promote open-source AI development.
  • To participate in the Closed Testing Phase as one of the top 5 teams, participants and their AI algorithms must adhere to the compute limits and allotted budget set per team.
  • Top 5 winning algorithms of the PI-CAI challenge, as trained on the Public Training and Development Dataset + Private/Sequestered Training Dataset and evaluated on the Hidden Testing Cohort in the Closed Testing Phase, will be made publicly available as Grand Challenge Algorithms, once the challenge has officially concluded.
  • Participants of the PI-CAI challenge, as well as all non-participating researchers using the PI-CAI public training dataset, can publish their own results at any time, separately. They do not have to adhere to any embargo period. While doing so, they are requested to cite this document (BIAS preregistration form for the PI-CAI challenge). Once a challenge paper has been published, they are requested to refer to that publication instead.
  • Organizers of the PI-CAI challenge reserve the right to disqualify any participant or participating team, at any point in time, on grounds of unfair or dishonest practices.
  • All participants reserve the right to drop out of the PI-CAI challenge and forego any further participation. However, they will not be able to retract their prior submissions or any results published up to that point in time.
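
The offline-execution rule above can be illustrated with a minimal, hypothetical Dockerfile sketch. The file names, paths, and dependencies here are placeholders, not the official Grand Challenge submission template: the point is that everything the algorithm needs, including pre-trained weights, is copied into the image at build time, so nothing has to be downloaded when the container runs.

```dockerfile
# Hypothetical sketch only; not the official submission template.
FROM python:3.10-slim

# Install dependencies while building the image; the running
# container will have no internet access.
RUN pip install --no-cache-dir torch SimpleITK

# Bake pre-trained weights and inference code into the image
# (paths and file names are placeholders).
COPY weights/model.pth /opt/algorithm/weights/model.pth
COPY process.py /opt/algorithm/process.py

ENTRYPOINT ["python", "/opt/algorithm/process.py"]
```

Locally, the no-internet setting can be approximated with `docker run --rm --network none <image>`, which disables all networking for the container.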