Embracing Annotation Efficient Learning (AEL) for Digital Pathology and Natural Images
Jitendra Malik once said, "Supervision is the opium of the AI researcher." Most deep learning techniques rely heavily on enormous amounts of human-annotated data to work effectively. In today's world, the rate of data creation greatly outpaces the rate of data annotation, so full reliance on human annotations is only a temporary means of solving current closed problems in AI; in reality, only a tiny fraction of data is ever annotated. Annotation Efficient Learning (AEL) is the study of algorithms that train models effectively with fewer annotations. To thrive in AEL settings, we need deep learning techniques that rely less on manual annotations (e.g., image-level, bounding-box, and per-pixel labels) and instead learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
E. W. Teh and G. W. Taylor, "Learning with Less Labels in Digital Pathology via Scribble Supervision from Natural Images," in 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), 2022, pp. 1-5, doi: 10.1109/ISBI52829.2022.9761615.
E. W. Teh, T. DeVries, and G. W. Taylor, "ProxyNCA++: Revisiting and Revitalizing Proxy Neighborhood Component Analysis," in Computer Vision – ECCV 2020, Springer International Publishing, 2020, pp. 448-464, doi: 10.1007/978-3-030-58586-0_27.
E. W. Teh, T. DeVries, B. Duke, R. Jiang, P. Aarabi, and G. W. Taylor, "The GIST and RIST of Iterative Self-Training for Semi-Supervised Segmentation," 2022, doi: 10.48550/arXiv.2103.17105.