Frequently Asked Questions
--- Challenge ---
1. What is the Critical View of Safety (CVS)?
The Critical View of Safety is a standardized method of identifying the
cystic duct and cystic artery during laparoscopic cholecystectomy.
Achieving it before clipping and division helps operators avoid bile
duct injury and proceed safely with the procedure.
2. Who can participate in the challenge?
Anyone can sign up to compete in the challenge (except members of the
organizing labs) by following the sign-up instructions provided here.
3. Will the challenge submission still be open after the submission
deadline?
No, only submissions made before the deadline will be considered
eligible.
--- Registration ---
1. Why is my registration not approved yet?
For your registration to be approved, you must (1) click the Join button
in the top right corner on grandchallenge.org and (2) sign the
participation agreement. Check the Instructions page for more details.
If it has been more than 3 business days, please contact the organizers
at info@cvschallenge.org.
2. Must every member of my team submit a signed participation
agreement?
Yes, every team member must submit a signed participation agreement to
receive access to the challenge dataset. You can find the agreement
here. Upon submission, we will match the names on the signed agreements
against those listed in the Challenge Team Form (make sure you are
listed there!). Any team member who has not submitted a signed agreement
before the submission deadline may forfeit eligibility and will not be
recognized as part of the submitting team.
--- CVS Challenge dataset ---
1. What validation split should I use?
Participants are free to subsample a portion of the provided training data to form an internal validation set for parameter tuning before submission.
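One common way to form such a split is to hold out whole videos rather than individual frames, so that frames from the same video never appear in both sets. Below is a minimal sketch of this idea; the function name, the 20% fraction, and the video identifiers are illustrative assumptions, not prescribed by the challenge:

```python
import random

def split_by_video(video_ids, val_fraction=0.2, seed=0):
    """Hold out whole videos for validation so frames from the same
    video never leak across the train/validation boundary."""
    ids = sorted(set(video_ids))
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(ids)
    n_val = max(1, int(len(ids) * val_fraction))
    val_ids = set(ids[:n_val])
    train_ids = [v for v in ids if v not in val_ids]
    return train_ids, sorted(val_ids)

# hypothetical video identifiers
train_vids, val_vids = split_by_video([f"vid_{i:03d}" for i in range(50)])
```

Splitting at the video level (rather than the frame level) matters because neighboring frames from one video are highly correlated, which would otherwise inflate validation scores.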
2. Can I use private datasets to train my model?
No, the use of private datasets is strictly prohibited.
3. Can I use other public datasets to train my model?
Yes, participants are free to use any publicly available data to train
and validate their submissions.
--- My Challenge Methods ---
1. Will the inference pipeline preserve the temporal information?
During testing, we will use an input setup that preserves the temporal frame order per video. Note that the frame rate during inference will be 1 fps, much lower than the frame rate of the original videos.
2. What is the frame rate for the test set?
1 frame per second.
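For illustration, here is a minimal sketch of how a clip recorded at a higher native frame rate reduces to 1 fps while preserving temporal order. The function name and the 25 fps figure are our assumptions, not part of the official pipeline:

```python
def one_fps_indices(n_frames: int, native_fps: int) -> list:
    """Indices of the frames kept when a clip recorded at `native_fps`
    is subsampled to 1 frame per second, preserving temporal order."""
    # keep frame 0, then one frame every `native_fps` frames
    return list(range(0, n_frames, native_fps))

# hypothetical example: a 10-second clip at 25 fps keeps 10 frames
kept = one_fps_indices(250, 25)
```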
3. Do I have to train my model using data sampled at 1 frame per
second?
No, not at all. Participants are free to use any and all of the data
provided within the scope of the challenge, as well as any other public
data.
4. Is the testing going to be an online prediction?
Yes; the prediction for each frame may use only past and current frames,
never future ones.
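In code, an online (causal) prediction loop might look like the sketch below, where the hypothetical `model` sees only the frames observed so far. The function name and the window size are our assumptions, not the official evaluation code:

```python
from collections import deque

def run_online(frames, model, window=16):
    """Causal inference loop: the prediction for frame t may use only
    frames up to and including t (here, a sliding window of recent
    frames)."""
    history = deque(maxlen=window)  # never holds future frames
    outputs = []
    for frame in frames:
        history.append(frame)                 # frame t becomes available
        outputs.append(model(list(history)))  # predict from past + current
    return outputs
```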
5. Can I tune my model at test time?
No, test-time tuning is not permitted.
6. The different challenge objectives seem to be conflicting, how do I
prioritize what my method focuses on?
That's part of the challenge! Surgical quality assessments like CVS
assessment inherently present real-world difficulties such as subjective
interpretation and distribution shift. As we move toward real-world
implementations of AI in surgery, we must balance different objectives:
being performant (subchallenge A), well-calibrated (subchallenge B), and
robust (subchallenge C). You can choose to prioritize one or strike a
balance between the tasks.
7. Will the inference pipeline allow access to the metadata that was provided during training (e.g. use of ioc, icg, source location, etc.)?
No, the only inputs available at test time will be image frames extracted from the testing videos. While you may use these additional signals to train your model, your inference pipeline must map a sequence of image inputs to a sequence of outputs without relying on anything else.
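In other words, the expected test-time contract can be summarized as a function from frames to per-frame outputs, with no metadata argument. A minimal illustrative sketch (the names are ours, not the official interface):

```python
from typing import Callable, List, Sequence

def inference_pipeline(frames: Sequence, model: Callable) -> List:
    """Illustrative test-time contract: image frames are the ONLY
    inputs. No ioc/icg flags or source-location metadata are passed
    at inference time."""
    return [model(frame) for frame in frames]
```

Any metadata-dependent logic therefore has to be baked into the trained model itself, since nothing beyond the frames crosses this boundary at test time.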
--- Related Work ---
1. Are there public works on CVS assessment that you could point me to?
Of course, here are some published methods for CVS assessment [1], [2], [3], [4], [5].
2. Are there relevant datasets I should be looking at?
Here's one dataset for anatomy localization and CVS assessment, as well as another public dataset for CVS assessment. Note that while the target task may be the same (CVS assessment), the annotation protocols may vary, and the use of these datasets needs to be considered carefully. Aside from these, several public benchmarks on various other related tasks for surgical video understanding do exist (here's a nice compilation). We encourage participants to incorporate these existing works into their submissions.
--- Submission ---
1. How do I submit my method?
Methods are to be submitted as a Docker container. We will provide a
Docker template and submission guidelines by late August 2024.
2. Can I submit to only a specific subchallenge?
No, submitted methods will be evaluated across each of the 3
subchallenges.
3. Can I submit different methods to different subchallenges?
No, a single method will be evaluated against different criteria in each
of the subchallenges.
--- Publication ---
1. Will my challenge submission be published?
We plan a joint publication for the CVS Challenge that will include the
submitted challenge models and results. More information will be
provided in due course.
2. When can a participant publish independent research on this
dataset?
Participants may publish their results separately only after the joint
challenge paper has been published.
3. When will the joint results be published?
This is tentatively planned to happen before the end of 2025.