Articles | Volume 23, issue 9
https://doi.org/10.5194/bg-23-2959-2026
© Author(s) 2026. This work is distributed under the Creative Commons Attribution 4.0 License.
PeatDepth-ML: a global map of peat depth predicted using machine learning
- Final revised paper (published on 04 May 2026)
- Preprint (discussion started on 18 Nov 2025)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2025-5363', Anonymous Referee #1, 17 Dec 2025
  - AC1: 'Reply on RC1', Joe Melton, 21 Feb 2026
- RC2: 'Comment on egusphere-2025-5363', Anonymous Referee #2, 08 Feb 2026
  - AC2: 'Reply on RC2', Joe Melton, 21 Feb 2026
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
ED: Reconsider after major revisions (06 Mar 2026) by Benjamin Stocker
AR by Joe Melton on behalf of the Authors (27 Mar 2026)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (02 Apr 2026) by Benjamin Stocker
AR by Joe Melton on behalf of the Authors (03 Apr 2026)
The authors present PeatDepth-ML, a machine-learning framework for predicting global peat depth using a large compilation of peat depth measurements and environmental covariates. They extend existing peatland mapping approaches by incorporating additional predictors, revised spatial cross-validation, a custom metric targeting deep peat, and a bootstrapping strategy to assess sensitivity to sampling bias. Model performance is evaluated with blocked leave-one-out validation, and the resulting global peat depth map is used to estimate global peat carbon stocks, which are found to be consistent with previous studies.
I think the work is relevant to the journal and generally well executed, though some revisions are needed prior to publication. I give a detailed list of comments below. Thank you for your work.
Detailed comments:
Lines 49 and 66: "machine learning" --> use the abbreviation "ML".
Line 92, Figure A1: I think Figure A1 is quite important, as it presents the peat data distributions. Why not include it in the main text instead of the appendix?
Line 97: "However, grid cells with zero peat depth consistently dominate..." --> explicitly state the percentage of grid cells with zero peat depth, as they make up the substantial majority of the data. This is worth stating because the data are, though naturally so, quite imbalanced.
Line 185: "machine learning" --> "ML"
Line 189: Which hyperparameters were optimized? I did not see them listed.
Line 192: "cross validation" --> "cross-validation"
Line 205: "don't" --> "do not"
Line 209: Add a reference for LightGBM, and consider spelling out the name (Light Gradient Boosting Machine) on first use. Let's not assume the reader knows all the abbreviations by default.
Line 247: Did you mention anywhere how many predictors were available in total for the ML runs? I would be curious to know.
Figure 8 and A1: I am not used to histograms or distributions being presented horizontally. Was there a particular reason for this? If not, why not use the standard orientation (vertical bars), which, in my experience, is more common.
Figure A1 caption: extra whitespace before ".", "...desert data ."
Line 357: Spell out the abbreviations RMSE, MBE, and NME, even though they are well known. They are defined more specifically in the appendix, but it is good to clarify abbreviations where they are first introduced.
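For clarity on what I am asking to be spelled out, these are the standard definitions as I understand them (note that NME conventions vary in the literature, so the one used in the manuscript should be stated explicitly; the normalization below is one common choice, not necessarily the authors'):

```python
import numpy as np

# Toy observed and predicted peat depths, for illustration only.
obs = np.array([0.5, 1.0, 2.0, 4.0])
pred = np.array([0.4, 1.2, 1.8, 4.5])

rmse = np.sqrt(np.mean((pred - obs) ** 2))   # root-mean-square error
mbe = np.mean(pred - obs)                    # mean bias error
# One common normalised mean error: absolute error normalised by
# the mean absolute deviation of the observations.
nme = np.sum(np.abs(pred - obs)) / np.sum(np.abs(obs - obs.mean()))
```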
Line 362: Could you please elaborate on the null models a bit? Do you mean baseline models? Also, on the same line, note the extra period ". ."
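If "null model" here does mean a simple baseline, stating its exact form would help. For instance, a predict-the-mean null model (a sketch of one possible definition, not necessarily the authors') is trivial to specify:

```python
import numpy as np
from sklearn.dummy import DummyRegressor

# Synthetic stand-in data, for illustration only.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 3.0 * X[:, 0] + rng.normal(size=100)

# Null/baseline model: always predicts the training-set mean,
# ignoring all predictors.
null_model = DummyRegressor(strategy="mean").fit(X, y)
pred = null_model.predict(X)
assert np.allclose(pred, y.mean())
```

Reporting skill relative to such a baseline makes the model's added value much easier to judge.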
Line 370: "BLOOCV": did you define this abbreviation anywhere? It is clear to me, but it should still be defined earlier in the text, where cross-validation is first mentioned.
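As I read it, blocked leave-one-out cross-validation holds out one spatial block at a time; in scikit-learn terms this corresponds to LeaveOneGroupOut with block labels as groups. A brief sketch of that reading (synthetic block labels, not the authors' actual setup) is:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Synthetic data: 12 samples assigned to 4 spatial blocks.
rng = np.random.default_rng(2)
X = rng.normal(size=(12, 2))
y = rng.normal(size=12)
blocks = np.repeat([0, 1, 2, 3], 3)

logo = LeaveOneGroupOut()
n_folds = 0
for train_idx, test_idx in logo.split(X, y, groups=blocks):
    # Each fold holds out exactly one whole block (3 samples here),
    # so test points are never spatially adjacent to training points
    # from the same block.
    assert len(set(blocks[test_idx])) == 1
    n_folds += 1
print(n_folds)  # → 4, one fold per block
```

If this matches the authors' procedure, a one-sentence definition like this at first mention of cross-validation would suffice.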
Figure 9: The legend is a little unclear to me. What are the "bootstrap results" — which results, exactly? Please rephrase more clearly, if possible.