Articles | Volume 22, issue 21
https://doi.org/10.5194/bg-22-6545-2025
© Author(s) 2025. This work is distributed under the Creative Commons Attribution 4.0 License.
Automated mask generation in citizen science smartphone photos and their value for mapping plant species in drone imagery
Download
- Final revised paper (published on 06 Nov 2025)
- Preprint (discussion started on 24 Feb 2025)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2025-662', Anonymous Referee #1, 18 Mar 2025
- AC1: 'Response to Reviewer 1 Comments', Salim Soltani, 09 May 2025
- RC2: 'Comment on egusphere-2025-662', Anonymous Referee #2, 19 Apr 2025
- AC2: 'Response to Reviewer 2 Comments', Salim Soltani, 09 May 2025
Peer review completion
AR: Author's response | RR: Referee report | ED: Editor decision | EF: Editorial file upload
ED: Reconsider after major revisions (18 May 2025) by Andrew Feldman
AR by Salim Soltani on behalf of the Authors (22 Jun 2025)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (29 Jun 2025) by Andrew Feldman
RR by Anonymous Referee #1 (18 Aug 2025)
RR by Anonymous Referee #2 (28 Aug 2025)
ED: Publish subject to minor revisions (review by editor) (02 Sep 2025) by Andrew Feldman
AR by Salim Soltani on behalf of the Authors (05 Sep 2025)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (18 Sep 2025) by Andrew Feldman
AR by Salim Soltani on behalf of the Authors (25 Sep 2025)
In this study, the authors develop an end-to-end workflow that transforms the simple labels of crowd-sourced plant photos from iNaturalist and Pl@ntNet into segmentation masks. This mask dataset serves as labelled data for training deep learning species classification models. The authors also successfully used the dataset to train a CNN model that classifies UAV ortho-imagery and segments plant species at large scale. By reducing the time and labor required for field surveys to collect reference data for remote sensing image classification, this labelled dataset may offer practical benefits. Overall, the study demonstrates both intellectual merit and practical relevance, and the manuscript is well-structured and well-written. However, using these citizen science datasets as labelled data for segmenting UAV images yields low accuracy for several species, which hinders practical application of the datasets and the method. The UAV image segmentation model's performance should be improved before further evaluation.
Other comments
Martins et al. (2020). Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution. https://doi.org/10.1016/j.isprsjprs.2020.08.004