One-shot learning with triplet loss for vegetation classification tasks

Authors: Uzhinskiy Alexander Vladimirovich, Ososkov Gennady Alexeevich, Goncharov Pavel Vladimirovich, Nechaevskiy Andrey Vasilevich, Smetanin Artem Alekseevich

Journal: Компьютерная оптика (Computer Optics)

Section: Image processing, pattern recognition

Issue: Vol. 45, No. 4, 2021.

Free access

The triplet loss function is one of the options that can significantly improve the accuracy of one-shot learning tasks. Since 2015, many projects have used Siamese networks with this kind of loss for face recognition and object classification. In our research, we focused on two vegetation-related tasks. The first is plant disease detection on 25 classes of five crops (grape, cotton, wheat, cucumber, and corn). This task is motivated by the fact that harvest losses due to diseases are a serious problem for both large farming structures and rural families. The second task is the identification of moss species (5 classes). Mosses are natural bioaccumulators of pollutants; therefore, they are used in environmental monitoring programs. The identification of moss species is an important step in sample preprocessing. In both tasks, we used self-collected image databases. We tried several deep learning architectures and approaches. Our Siamese network architecture with a triplet loss function and MobileNetV2 as a base network showed the most impressive results in both tasks: the average accuracy exceeded 97.8 % for plant disease detection and 97.6 % for moss species classification.
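For illustration only, the following is a minimal sketch (not the authors' published code) of the kind of approach the abstract describes: a Siamese-style embedding network with a MobileNetV2 backbone trained with the triplet loss L = max(||f(a) − f(p)||² − ||f(a) − f(n)||² + α, 0). The input size, embedding dimension, margin α, and the TensorFlow/Keras framework are assumptions made for the example.

```python
# Minimal sketch of a triplet-loss embedding model with a MobileNetV2 backbone.
# Input size, embedding dimension and margin are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2

IMG_SHAPE = (224, 224, 3)   # assumed input resolution
EMB_DIM = 128               # assumed embedding size
MARGIN = 0.2                # assumed triplet margin

def build_embedding_net():
    """MobileNetV2 backbone followed by an L2-normalised embedding layer."""
    base = MobileNetV2(input_shape=IMG_SHAPE, include_top=False,
                       weights="imagenet", pooling="avg")
    x = layers.Dense(EMB_DIM)(base.output)
    x = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1))(x)
    return Model(base.input, x, name="embedding")

def triplet_loss(anchor, positive, negative, margin=MARGIN):
    """L = max(||f(a) - f(p)||^2 - ||f(a) - f(n)||^2 + margin, 0)."""
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))

embedding_net = build_embedding_net()

@tf.function
def train_step(anchor_img, pos_img, neg_img, optimizer):
    # The same (shared-weight) embedding net processes anchor, positive
    # and negative images, which is what makes the setup "Siamese".
    with tf.GradientTape() as tape:
        loss = triplet_loss(embedding_net(anchor_img, training=True),
                            embedding_net(pos_img, training=True),
                            embedding_net(neg_img, training=True))
    grads = tape.gradient(loss, embedding_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, embedding_net.trainable_variables))
    return loss
```

In the one-shot setting, a query image would then be classified by comparing its embedding with reference embeddings of the known classes (e.g., nearest neighbour in the embedding space); the details of triplet mining and inference in the paper may differ from this sketch.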


Keywords: deep neural networks, Siamese networks, triplet loss, plant disease detection, moss species classification

Short URL: https://sciup.org/140290256

IDR: 140290256   |   DOI: 10.18287/2412-6179-CO-856

Research article