3rd Place Winning Solution at the Help a hematologist out Challenge


We placed 3rd in the Help a hematologist out Challenge at the Helmholtz Incubator Summer Academy – From Zero to Hero, 2022.

I attended the Helmholtz Incubator Summer Academy – From Zero to Hero, 2022. I specifically took part in the Help a hematologist out Challenge and joined the BLAMAD team. The theme of the challenge was to find creative domain adaptation solutions for blood-cell classification, which is important for diagnosing diseases such as anemia or leukemia.

We were given two annotated datasets (Mat_19 and Ace_20) of white blood cell images, and the goal was to classify the cell type on a third, unseen dataset (WBC1). We used Cycle-GAN for domain adaptation, i.e., unpaired image translation between images in the annotated datasets and images in the unseen dataset. Afterwards, the trained generator was used to transform images in the annotated datasets into the style of the unseen dataset. An example image-to-image translation between source and target datasets is shown below:
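The key ingredient that makes this unpaired translation possible is Cycle-GAN's cycle-consistency loss: a generator G maps source images toward the target domain, a second generator F maps back, and both are penalized when a round trip fails to reproduce the input. The toy sketch below illustrates only that objective, using invertible linear maps in place of the real convolutional generators (the linear maps, array shapes, and sample counts are illustrative assumptions, not the challenge setup):

```python
import numpy as np

# Toy "generators": G maps source -> target, F maps target -> source.
# In the real Cycle-GAN these are deep CNNs trained adversarially;
# here we use an invertible linear map purely to show the loss.
A = np.array([[2.0, 0.0], [0.0, 0.5]])

def G(x):  # source -> target
    return x @ A

def F(y):  # target -> source (exact inverse of G in this toy example)
    return y @ np.linalg.inv(A)

def cycle_consistency_loss(x_src, y_tgt):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, averaged over samples."""
    forward = np.abs(F(G(x_src)) - x_src).mean()
    backward = np.abs(G(F(y_tgt)) - y_tgt).mean()
    return forward + backward

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 2))  # stand-in for annotated (Mat_19/Ace_20) samples
y = rng.normal(size=(8, 2))  # stand-in for unseen (WBC1) samples
print(cycle_consistency_loss(x, y))  # ~0, since F exactly inverts G here
```

In the actual Cycle-GAN, this cycle term is added to the adversarial losses of two discriminators, which is what pushes G's outputs to look like genuine target-domain images rather than merely invertible transforms.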

Example source <---> target translation using Cycle-GAN

A ResNet-18 classifier was trained on the newly transformed annotated dataset and applied to the unseen dev set. We placed 5th in the dev phase of the challenge, as shown below:

Dev Phase Leaderboard

Luckily, our model generalized well, so we placed 3rd in the test phase, and each of us won a power bank at the award ceremony :)

Test Phase Leaderboard

The code and detailed information on the whole challenge are available here.

I would like to thank the organizers of the challenge and my BLAMAD teammates: Bashir (Me), Lea Gabele, Ankita Negi, Martin Brenzke, Arnab Majumdar, and Dawit Hailu.


References

@inproceedings{zhu2017unpaired,
  title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
  author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
  booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
  year={2017}
}

@inproceedings{isola2017image,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  booktitle={Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on},
  year={2017}
}

@misc{zhu2017cyclegan,
  author={Jun-Yan Zhu},
  title={CycleGAN and pix2pix in PyTorch},
  year={2017},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix}}
}

@inproceedings{he2016deep,
  title={Deep residual learning for image recognition},
  author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  year={2016}
}