Challenge Organization and Guidelines¶

This challenge focuses on unsupervised denoising of calcium imaging videos. Participants are provided with a training dataset of in-silico (i.e., synthetic) neurons in a calcium imaging setting, affected by two different noise levels. The goal of the submitted methods is to recover the clean calcium signal while preserving both the spatial and the temporal dynamics.

The challenge is divided into two tasks, each comprising two phases. The tasks evaluate different aspects of the submitted models, while the two-phase structure is intended to ease the submission process on the platform.

Participants are asked to train their models on their own machines and submit a Docker container including their inference code. The container will be executed on the platform infrastructure with the hidden test data as input, and the algorithm's output will then be evaluated against a hidden ground truth.
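As a rough illustration, a container entrypoint could follow the sketch below. The `/input` and `/output` mount points, the file naming, and the `denoise` helper are illustrative assumptions, not the platform's actual interface; the dedicated submission page describes the real requirements.

```python
# Minimal entrypoint sketch (paths, file naming and the denoise() helper are
# illustrative assumptions, not the platform's actual interface).
from pathlib import Path

import numpy as np
import tifffile


def denoise(stack: np.ndarray) -> np.ndarray:
    """Placeholder for your trained model's inference; replace with real code."""
    return stack.astype(np.float32)


def main() -> None:
    in_dir = Path("/input")    # assumed mount point for the hidden test video
    out_dir = Path("/output")  # assumed mount point for the denoised result
    out_dir.mkdir(parents=True, exist_ok=True)

    for tiff_path in sorted(in_dir.glob("*.tif*")):
        noisy = tifffile.imread(tiff_path)                  # shape (F, H, W)
        tifffile.imwrite(out_dir / tiff_path.name, denoise(noisy))


if __name__ == "__main__":
    main()
```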


Tasks¶


Each task is designed to assess submitted methods from different perspectives.

TASK 1: Content Generalization¶

This task evaluates the model's ability to generalize to unseen content—that is, to new video samples with similar characteristics to those in the training set. Each method will be tested on two video stacks with noise levels similar to those in the training set.

TASK 2: Noise Level Generalization¶

This task evaluates each method's robustness to an out-of-distribution noise level. In addition to being tested on unseen content, the algorithms will be evaluated on a video stack with a noise level not present in the training data. This tests the method's ability to generalize to different noise intensities.


Training your method¶


Training and development must be performed entirely on the participants' local machines. A single training dataset will be released and must be used for all model development and training purposes.

The training dataset is composed of four TIFF stacks of shape [1500, 490, 490] (F×H×W) and is available at . To promote the data diversity required to solve both tasks, the training dataset includes four different samples, corrupted with two different noise levels.
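For orientation, the stacks can be loaded frame-wise with a standard TIFF reader; the snippet below is a minimal sketch assuming the `tifffile` package and a placeholder filename.

```python
# Minimal sketch for inspecting one training stack; the filename is a
# placeholder, and tifffile is an assumed (not mandated) dependency.
import tifffile

stack = tifffile.imread("training_sample_01.tif")  # placeholder filename
print(stack.shape, stack.dtype)                    # expected: (1500, 490, 490)

first_frame = stack[0]     # single frame, shape (490, 490)
preview = stack[:100]      # first 100 frames for a quick sanity check
```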

A separate validation dataset, containing both noisy inputs and their corresponding clean signals, is provided for the sole purpose of visual inspection and model selection (e.g., early stopping, hyperparameter tuning). Participants must not use this data for training (e.g., for updating model weights).
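As an example of validation-only model selection, a simple full-reference metric such as PSNR can be computed between the clean validation signal and the model output to pick a checkpoint; this is only a sketch, and the challenge's official evaluation metric may differ.

```python
# PSNR as a validation-only model-selection metric (a sketch; the official
# challenge metric may differ). The validation data never updates weights.
import numpy as np


def psnr(clean: np.ndarray, pred: np.ndarray) -> float:
    """Peak signal-to-noise ratio between the clean reference and a prediction."""
    clean = clean.astype(np.float32)
    pred = pred.astype(np.float32)
    mse = float(np.mean((clean - pred) ** 2))
    data_range = float(clean.max() - clean.min())
    return 20.0 * np.log10(data_range) - 10.0 * np.log10(mse)


# Toy usage: keep whichever candidate output scores best on the validation pair.
rng = np.random.default_rng(0)
clean = rng.random((10, 64, 64))
candidate_a = clean + 0.10 * rng.standard_normal(clean.shape)
candidate_b = clean + 0.05 * rng.standard_normal(clean.shape)
best = max((candidate_a, candidate_b), key=lambda pred: psnr(clean, pred))
```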

No test data will be shared. The evaluation will be performed on the challenge platform on the hidden test set after submission.

Participants are welcome to use any augmentation technique, data processing step, unsupervised learning technique, or architecture, but they must not use any data that gives access to a clean ground truth (including the validation set) to train their algorithm.
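For instance, simple geometric augmentations of the noisy stacks are permitted; the sketch below shows random flips and 90-degree rotations and is purely illustrative, not a prescribed pipeline.

```python
# Illustrative geometric augmentations for a (F, H, W) video stack; purely an
# example of what participants may do, not a prescribed pipeline.
import numpy as np


def random_flip_rotate(stack: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply random horizontal/vertical flips and a random 90-degree rotation."""
    if rng.random() < 0.5:
        stack = stack[:, :, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        stack = stack[:, ::-1, :]   # vertical flip
    k = int(rng.integers(0, 4))     # 0, 90, 180 or 270 degrees
    return np.rot90(stack, k=k, axes=(1, 2)).copy()
```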

To ensure a fair challenge, a submission will be deemed valid only if, at the challenge deadline:

  • The code is publicly available in a reproducible format, e.g., via a link added on the submission page.
  • The submission contains an explicit statement of the exact data and any pre-trained weights used to develop the algorithm.

The organizers reserve the right to exclude from the challenge any algorithm that fails to meet these requirements.


Algorithm Submission¶


Submission requires a verified account. Please keep in mind that account verification is a manual process and may take some time, so it is best to complete it as soon as possible.

Your algorithm will be evaluated on one video at a time. The maximum total runtime for each container is 60 minutes. Consider implementing batching or tiled prediction if your algorithm would otherwise exceed the time or memory constraints.
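As one way to stay within these limits, inference can be run on overlapping temporal chunks and stitched back together; the chunk size, overlap, and `denoise_chunk` callable below are assumptions to adapt to your own model.

```python
# Sketch of temporally chunked inference; chunk/overlap sizes and the
# denoise_chunk callable are assumptions to adapt to your own model.
from typing import Callable

import numpy as np


def denoise_in_chunks(
    stack: np.ndarray,
    denoise_chunk: Callable[[np.ndarray], np.ndarray],
    chunk: int = 100,
    overlap: int = 10,
) -> np.ndarray:
    """Denoise overlapping temporal chunks and keep only their central frames."""
    frames = stack.shape[0]
    out = np.empty_like(stack, dtype=np.float32)
    start = 0
    while start < frames:
        lo = max(0, start - overlap)
        hi = min(frames, start + chunk + overlap)
        pred = denoise_chunk(stack[lo:hi])
        out[start:start + chunk] = pred[start - lo:start - lo + chunk]
        start += chunk
    return out
```

For example, with `chunk=200` a [1500, 490, 490] stack would be processed in eight overlapping passes, keeping peak memory roughly proportional to the chunk size.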

For a technical description on how to submit your algorithm, including specific system requirements, please visit the dedicated page.


Phases¶


Each task has two distinct phases:

  1. Preliminary Phase: This phase is meant to be used to test your submission container on the challenge platform. As soon as your local container works correctly, you should submit it to the preliminary phase. In this phase, your algorithm will be evaluated on a small subset of data. If the submission succeeds, you can proceed with a final submission. In this phase, each user can make at most 5 submissions.
  2. Final Submission Phase: This phase is the actual evaluation that will be included in the challenge leaderboard. Please note that the data in this phase has the same shape as the training data, so the submission will likely take longer to run than in the preliminary phase. Each user can make at most 2 submissions to this phase; these are meant to be used to compare two different algorithms or two different configurations of the same algorithm.

Guidelines Checklist¶


  • Submission to the platform requires a verified account. Please plan your submissions in advance, since account verification is done manually and it might take some time.
  • Train your algorithm on the provided training dataset.
  • Clearly declare the data used for training and validation, either in the code readme or in the submission comment on the platform.
  • Ensure your Docker container is reproducible by first running it locally on the validation dataset, and then in the preliminary phase on the platform for each task, before making a final submission.
  • While it is not forbidden to submit different methods for different tasks, we kindly ask participants to submit their algorithms to both tasks to allow a clearer evaluation of the strengths and weaknesses of each model. Algorithms appearing in only one of the two task leaderboards won't appear in the joint leaderboard.