Gap geometry, seasonality and associated losses of biomass – combining UAV imagery and field data from a Central Amazon forest
Adriana Simonetti
Raquel Fernandes Araujo
Carlos Henrique Souza Celes
Flávia Ranara da Silva e Silva
Joaquim dos Santos
Niro Higuchi
Susan Trumbore
Daniel Magnabosco Marra
Abstract. Understanding the mechanisms of tree mortality and the geometric patterns of canopy gaps is relevant for robust estimates of carbon stocks and balance in tropical forests, and for assessing how these forests respond to climate change. We combined monthly RGB images acquired from an unmanned aerial vehicle with field surveys to identify gaps in an 18-ha permanent plot in an old-growth Central Amazon forest over a period of 28 months. In addition to detecting gaps, we measured their size and shape, and analyzed their temporal variation and correlation with rainfall. We further described the associated modes of tree mortality or branch fall and quantified the associated losses of biomass. Overall, the sensitivity of gap detection differed between field surveys and imagery data. In total, we detected 32 gaps in the imagery or in the field, ranging in area from 9 m2 to 835 m2. Relatively small gaps (< 39 m2) associated with branch fall were the most frequent (11 gaps). Out of 18 gaps for which both field and imagery data were available, three could not be detected remotely. This result shows that a considerable fraction of tree-mortality and branch-fall events (~ 17 %) affect only the lower canopy and the understory of the forest and are thus likely missed by assessments of the top of the canopy. Regardless of the detection method, the size distribution of gaps in our study region was better captured by a Weibull function. As confirmed by our detailed field surveys, we believe that this pattern was not biased by gaps possibly undetected in the image data. Although not related to differences in gap size, the main modes of tree mortality partially explained the associated losses of biomass. The rate of gap-area formation, expressed as percent per month, was positively correlated with the frequency of extreme rainfall events, which may be related to a higher frequency of storms propagating destructive wind gusts. Our results demonstrate the importance of combining field observations with remote sensing methods for monitoring gap dynamics in dense forests. The correlation of modes of tree mortality and gap geometry with associated losses of biomass provides evidence of the importance of small-scale events of tree mortality and branch fall as processes that contribute to landscape patterns of carbon balance and species diversity in Amazon forests. Regional assessments of the dynamics and geometry of canopy gaps, from those formed by branch fall and individual tree mortality (e.g., from a few to hundreds of m2) up to catastrophic blowdowns associated with extreme rain and wind (e.g., from hundreds of m2 to thousands of ha), can reduce the uncertainty of landscape assessments of carbon balance, especially as the frequency and intensity of the storms causing these events are likely to change with the future Amazon climate.
Status: final response (author comments only)
- RC1: 'Comment on bg-2022-251', Anonymous Referee #1, 30 Jan 2023
The article “Gap geometry, seasonality and associated losses of biomass – combining UAV imagery and field data from a Central Amazon forest” studies gap formation on an 18ha field plot in the Amazon, using both remote sensing (photogrammetry/Structure from Motion) and field data. It provides an interesting look into canopy dynamics at one particular tropical forest site and a comparison (or validation) between field-based methods and remote sensing, which is crucial in linking traditional approaches with modern technology. Due to its substantial field sampling effort, the study can relate gap formation to different tree mortality modes and to associated biomass losses, thus linking ecological processes to the carbon cycle, which should be of great interest to readers of Biogeosciences. I also found the paper generally very well written, with well thought-through methods and clear and concise descriptions.
There are, however, a few changes/issues that I would recommend the authors to consider before publication. I will highlight a few larger aspects first, and then provide line-by-line comments in a classic review style.
1/ Definition of gap: My impression is that the definition of gaps is not entirely consistent in the study. On the one hand, Brokaw’s definition of gaps as extending down to 2m in canopy height seems to be used (l.161), but on the other hand, the authors argue several times that there are undetected “understory gaps”, or gaps that are not visible in the upper canopy. Specifically, they attribute the differences between UAV imagery and field data to the UAV imagery not being able to detect such subtle changes below the canopy. But if we use the Brokaw definition, that should not be the case, as any gap would, by necessity, be a hole in the upper canopy and extend down to the ground, no? Could it be that the authors implicitly use treefall events or other canopy characteristics as part of their gap definition in their field-based studies? Could this also explain why gaps created by standing dead trees were the main difference? The definition aspect also affects what should be considered the “truth” for the validation – field-based assessments certainly offer more information to interpret gap formation (is it a branch fall or a tree fall? etc.), but to automatically consider them the truth (l.209) is not evident to me. Could one not argue that the 3D canopy height models derived from photogrammetry (or even better, lidar) can more accurately quantify height changes than visual/manual assessments?
2/ Study area size and gap size frequency distributions (GSFD): Having such a detailed comparison between field based and remotely sensed gap structure is an important feat, based on substantial field work, so it makes sense that the authors focussed on a plot size of “only” 18ha. However, this limits the analysis somewhat when it comes to assessing GSFD and the “landscape scale” patterns the authors are interested in. As expected for 18ha, sample sizes are very small (32 gaps in total, but only 14 gaps that co-occur in both field and remote sensing data). I am sceptical that such sample sizes yield much information on which distribution actually fits better, and I would expect the fitted Weibull, exponential and power law distributions to be so uncertain in their parameters (the power law exponent has an uncertainty of 2.137 +- 0.913, which is huge) that there is not much sense in comparing the fit of different GSFDs (one single data point might already shift the goodness of fit). If the authors would like to keep this analysis, I suggest they explicitly use confidence intervals / simulations of data generation to assess how reliably these distributions can actually be differentiated with so few gaps, or maybe focus less on which distribution fits better and more on the field-remote sensing comparison. They should also provide a careful discussion that does not place too much emphasis on the different AIC values (which have generally low delta, anyways). More generally, if this type of analysis is carried out, I would also highly recommend the additional fitting of a lognormal distribution, which comes about through similar generative processes as power law models and is often an equally good fit.
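To make this point concrete, here is a minimal sketch in Python (scipy) of the kind of comparison I have in mind, including the lognormal. The gap areas below are synthetic placeholders, and fixing the power-law lower bound at the smallest gap is an assumption; a fit explicitly truncated at the detection threshold would be more rigorous than this simple version.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
xmin = 9.0                                   # smallest reported gap (m^2)
areas = xmin * (1 + rng.pareto(1.5, 32))     # synthetic stand-in for the 32 gaps

def aic(loglik, n_par):
    return 2 * n_par - 2 * loglik

results = {}

# Power law: Pareto type I with the scale fixed at xmin (one free parameter).
b, loc, scale = stats.pareto.fit(areas, floc=0, fscale=xmin)
results["power law"] = aic(stats.pareto.logpdf(areas, b, loc, scale).sum(), 1)

# Exponential, Weibull and lognormal fitted to the raw areas (loc fixed at 0).
loc_e, scale_e = stats.expon.fit(areas, floc=0)
results["exponential"] = aic(stats.expon.logpdf(areas, loc_e, scale_e).sum(), 1)

c, loc_w, scale_w = stats.weibull_min.fit(areas, floc=0)
results["Weibull"] = aic(stats.weibull_min.logpdf(areas, c, loc_w, scale_w).sum(), 2)

s, loc_l, scale_l = stats.lognorm.fit(areas, floc=0)
results["lognormal"] = aic(stats.lognorm.logpdf(areas, s, loc_l, scale_l).sum(), 2)

best = min(results.values())
for name, value in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} AIC = {value:7.1f}  dAIC = {value - best:4.1f}")
```

With only a few dozen gaps, I would expect the dAIC values produced by such a comparison to be small and unstable, which is exactly why confidence intervals or simulations are needed before interpreting the "best" distribution.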
3/ Precipitation and gap formation: This part of the paper, while relevant, is not really motivated in the introduction, and more effort should be spent on explaining why it is relevant to suppose a link between precipitation and gap formation, and why presumably more direct drivers of gap formation (wind or even lightning) were not used. It is understandable that such data may not be available, but nothing in the introduction/methods section explains why precipitation is interesting. I would also remove the analysis of extreme rainfall events, because this seems like a filtering of the data that could be done with many thresholds (90th / 95th percentile, etc.), and with only 3 years and 8 data points for extreme rainfall (Figure 8), I doubt that the correlation the authors found tells us much about the system.
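To illustrate the threshold sensitivity I am worried about, here is a minimal sketch using synthetic data and hypothetical variable names (daily_rain, gap_rate); with the actual rainfall and gap-formation series, this loop would show whether the correlation survives other percentile choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
daily_rain = rng.gamma(0.6, 12.0, size=(28, 30))   # 28 months x 30 days of rainfall (mm), synthetic
gap_rate = rng.lognormal(0.0, 0.5, size=28)        # monthly gap-area formation, synthetic

# Recompute the correlation for several candidate "extreme rainfall" thresholds.
for pct in (90, 95, 99):
    threshold = np.percentile(daily_rain, pct)
    extreme_days = (daily_rain > threshold).sum(axis=1)   # count of extreme days per month
    rho, p = stats.spearmanr(extreme_days, gap_rate)
    print(f"{pct}th percentile: rho = {rho:+.2f}, p = {p:.2f}")
```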
4/ Remote sensing vs. field data in assessing mechanisms of gap formation and biomass loss: My impression was that section 3.3 would be one of the most interesting sections for readers of Biogeosciences, and that the authors could extend their analysis here a little bit without too much effort. For example, I would relate released biomass to overall plot biomass. There could also be an interesting comparison of released biomass visible from gaps, to overall biomass released from tree mortality, also counting understory mortality (if these data exist). Finally, since they have such a comprehensive data set, the authors could also compare other aspects of gaps between the different mortality modes (branch fall, snapped, etc.). I would suggest a look at the metrics the authors already calculated (gap geometry), but also previous and surrounding canopy height, and maybe also gap closure rates, with a focus on the values from remote sensing. Maybe, the authors could also use the RGB signature of the orthophotos as an additional metric to compare between mortality modes. Such an analysis would provide some hints on whether remote sensing/photogrammetry could distinguish different modes of mortality/gap formation/biomass losses, or at least separate one specific mode (standing dead). These are only suggestions and would, of course, only be indicative due to the small sample sizes, but I think they might be very interesting for future studies/Biogeosciences readers and be in line with the authors’ objectives to assess how much we can learn from remote sensing compared to field-based assessments.
Line-by-line comment:
3: Is the title actually accurate? Gap geometry and seasonality do not seem to be such important results/aspects of this study, so maybe rethink/rephrase it?
37: What is a multi-temporal process? Maybe rephrase?
41: Even though this may not be fully relevant to the paper, maybe droughts could be mentioned as another major extreme event?
50: This may be a definition question and not crucial, but in the context of tropical forests, gaps that are thousands of hectares in size (or tens of square kilometers) seem unlikely, or probably not what tropical ecologists would commonly classify as gaps (e.g. one or several large canopy trees falling and leaving a gap in the canopy). Such a definition seems more common in fire-dominated boreal ecosystems. Maybe you could add one sentence specifically on tropical gap sizes. Also, this would be more in line with the extent of your sample plot.
62-74: My impression is that this part of the paper jumps quite a lot between points, i.e. from the advantages of remote sensing, citing lidar remote sensing studies such as Dalagnol et al. 2021, to different definitions of gaps, to the problems of optical remote sensing. My question would be: Is the discussion of Landsat needed here, as UAV operates on a very different scale? A more interesting point might be how UAV photogrammetry differs from ALS/UAV lidar (e.g. no within-canopy structure, no ground model, but likely cheaper, more flexible [although limited by meteorological conditions]).
90: what are “traceable” modes of tree mortality? Or what would be “untraceable” ones?
90: The last question with regard to rainfall is very ad hoc and not really set up in the introduction. I would provide justification in the introduction on why precipitation should be relevant for gap formation. Would wind be a more important variable?
113-136: I am no expert in SfM/photogrammetry, but this seems well-described and a good workflow. I have one question: How did you deal with different meteorological conditions during planned flights (fog/rain)? Did you, for example, postpone scans during rainy days? Could this affect your results? How consistent was the timing of the acquisitions on average? I don’t think this would be a major problem, but it would be good to mention this somewhere here.
155-172: This seems like a substantial effort and great, important work! Just out of interest: since you seem to have access to EBA project’s overlapping lidar data, is there a reason why you did not predelineate initial gap distributions from the lidar derived canopy height models?
197-209: This also makes a lot of sense. However, I would move the information from the last sentence (i.e. field value is considered true value) to the beginning to make it clearer to the readers what is considered the validation. I was wondering, however, whether in this case field data can actually be considered the true data? One could make a point that remote sensing (but maybe less so photogrammetry) actually provides a more accurate quantification of the 3D canopy than visual/field-based assessments can. How would you justify your decision?
215-223: While it is common to fit these distributions and the approach is methodologically sound, does this make sense here? 18ha is a very small area when it comes to gap delineation, so even without looking at the results, one would assume that your sample size is going to be so low that the inferred distributions are not telling us a lot (and the results bear this out, with 32 or 14 gaps in total). At the very least, I would expect simulations to construct confidence/credibility intervals that show how much variability there is and how uncertain the differences between the different distribution types are. My guess is that it would be very hard to come to any clear conclusion across 18ha. Also, would it not make sense to also test a lognormal distribution? The lognormal distribution is usually the one closest to the power law and comes about through very similar generative processes, so if you fit distributions, I would include the lognormal as well.
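A minimal sketch of the kind of simulation I would expect, assuming a fitted power-law exponent and n = 32 gaps (both placeholders): draw repeated samples from the fitted power law, refit, and record both the spread of the refitted exponent and how often a Weibull wins on AIC purely by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
xmin, b_hat, n = 9.0, 1.5, 32        # assumed lower bound, fitted exponent and sample size

wins, exponents = 0, []
for _ in range(1000):
    # simulate a gap-size sample of the same size from the fitted power law
    sim = stats.pareto.rvs(b_hat, scale=xmin, size=n, random_state=rng)
    b, _, _ = stats.pareto.fit(sim, floc=0, fscale=xmin)
    c, _, sc = stats.weibull_min.fit(sim, floc=0)
    aic_pl = 2 * 1 - 2 * stats.pareto.logpdf(sim, b, 0, xmin).sum()
    aic_wb = 2 * 2 - 2 * stats.weibull_min.logpdf(sim, c, 0, sc).sum()
    exponents.append(b)
    wins += aic_wb < aic_pl

lo, hi = np.percentile(exponents, [2.5, 97.5])
print(f"95% interval for the refitted exponent: {lo:.2f}-{hi:.2f}")
print(f"Weibull preferred in {wins / 10:.1f}% of power-law simulations")
```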
225-229: Very interesting! I find the idea of quantifying released biomass very appealing.
232: This process of calculating gap area formation rates sounds very complicated. Could you not just take the number/area of gaps that formed between each image acquisition and then divide the number/area by the time between each image? Assuming that images are taken at roughly the same intervals, that should give you a very sound estimate, no? Or am I missing something?
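For clarity, a minimal sketch of the simpler calculation I mean, with hypothetical column names rather than the authors' data: the new gap area detected between two consecutive acquisitions, divided by the elapsed time in months.

```python
import pandas as pd

# hypothetical table: one row per acquisition, with the total area (m^2) of
# gaps that appeared since the previous acquisition
gaps = pd.DataFrame({
    "date": pd.to_datetime(["2020-01-15", "2020-02-14", "2020-03-16"]),
    "new_gap_area_m2": [0.0, 120.0, 45.0],
})

months = gaps["date"].diff().dt.days / 30.44            # interval length in months (first row is NaN)
gaps["rate_m2_per_month"] = gaps["new_gap_area_m2"] / months
# expressed as a percentage of an 18-ha plot per month, as in the abstract
gaps["rate_pct_per_month"] = gaps["rate_m2_per_month"] / (18 * 10_000) * 100
print(gaps)
```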
253: “which indicates that there was no traceable change in the upper canopy of the forest”. This is probably more a discussion sentence anyways, but I find this problematic. According to the definition (Brokaw) you use, a gap is “an opening in the forest canopy extending from the upper stratum to an average height of two meters above ground.” So by definition something in the upper canopy has to change – either you don’t pick it up in the photogrammetry data (maybe one of the processing algorithms is smoothing the canopy too much), or, alternatively, your field-based assessment wrongly found a change in the upper canopy. This could also be an interesting question about gap definitions: should a standing dead tree already be classified as a gap, because light is reaching down almost without obstruction to 2m? How do you interpret this?
260: I’m not sure the p-value is the best way to assess this here. Looking at Figure 3, one would guess that, at the large end of the gap spectrum, UAV seems to find larger gaps than field-based assessments (a difference of ca. 830 m2 to 580 m2 for the largest gap seems substantial and larger than I would have expected). How did you derive the p-value? Did you log-transform the data beforehand (if you assume power-law/lognormal scaling, for example, that would be necessary, I assume)?
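For concreteness, one possible paired comparison on a log scale, sketched with placeholder values (this is not necessarily the test the authors used):

```python
import numpy as np
from scipy import stats

# hypothetical areas of the same gaps measured by both methods (m^2)
uav_area = np.array([12.0, 35.0, 60.0, 110.0, 830.0])
field_area = np.array([9.0, 40.0, 55.0, 130.0, 580.0])

# compare on a log scale with a paired, non-parametric test
diff_log = np.log(uav_area) - np.log(field_area)
stat, p = stats.wilcoxon(diff_log)
print(f"median log-ratio = {np.median(diff_log):.2f}, p = {p:.3f}")
```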
285: My takeaway from Table 2 would actually be that all distributions perform similarly (the dAIC is typically very low), and my guess would be that, if you account for the uncertainty of the small sample size, you cannot really differentiate between any of them here. I would highly recommend testing this! One interesting question is whether the field data have a slightly different exponent/shape, with a steeper decline at the largest gap areas (in line with the visual assessment). But, of course, sample sizes are very low.
315: I like this idea of calculating the biomass loss, and that at least one branch fall exceeded some of the uprooted/snapped tree losses. Could you put this into context of how much total biomass is stocked in the plot? I.e. what percentage is lost by gaps?
334: This is not my favourite figure (and analysis). There are very few data points, and while I understand the general reasoning, it seems a bit like one could also pick a different percentile of extreme rainfall events, and the pattern might disappear. I suggest you remove this Figure and analysis.
351-353: As noted above (and sorry for the repetition), there seems an inconsistency in the gap definition in the paper. If gaps are defined as openings in the upper canopy that clearly reach down to 2m (Brokaw), it does not make much sense to me to say that there are “no clear signs of opening in the upper canopy”. Could it be that your field-based gap definition is slightly wider than the one you apply with the remote sensing/photogrammetry data, and is implicitly based around whether a tree has fallen? I am not saying that this is necessarily wrong, but that could explain divergences between both methods, because unless your photogrammetry approach overly smoothes the canopy, there is no a priori reason why it should not detect openings in the Brokaw sense, no? In this respect, I would also expect 2-3 sentences here on the problem of which of the two data sets (remote sensing or field) is the actual truth!
358: Unless I have missed it, I am not sure that the study shows how much gaps contribute to landscape patterns of biomass. It would help to put the losses into context of the whole-plot biomass stocks (cf. above), but I would still be wary of calling this “landscape” patterns. 18ha is probably not on the scale where landscape effects can be assessed, particularly, because power law-type distributions imply that you will have very few, very large gaps, and your plot may just accidentally miss out on extreme events / the long tails of the GSFD distribution (blowdowns/multiple emergent/canopy trees falling).
362: again, what is an “understory gap”?
379: That the area of the gaps did not vary between methods is not entirely correct (cf. my comments above on this particular p-value), and even if we were to solely rely on the p-value, I would rephrase to say there was no evidence for strong variation in the area between the two methods (although in my opinion, there is some, limited evidence for divergences between the two methods in terms of gap area).
389: Cf. my comments before. I don’t think, we can conclude that power-law distribution is the best distribution here, cf. also the large confidence interval of 2.137 +- 0.913! That is huge uncertainty!
401: It seems to me that in many cases (not just your study), Weibull laws actually fit gap size frequency distributions better than power laws. I would discuss here what that would mean: it is more difficult to interpret (more parameters, not just one nice exponent), and it probably means that there is a change in generative mechanisms in gap formation across scales, which could make a lot of sense, because we probably shift from tree to branch level below a certain size threshold. You could also discuss this in the context of the typical tree size in your plot!
430-437: This motivation for rainfall patterns – correlation with extreme winds or lighting – should come much earlier in the paper (ideally in the introduction), so that the reader understands why these patterns are studied.
448-449: I fully agree with your statement that forest inventories are fundamental, but I am not sure you showed conclusively that “mechanisms of formation could only be distinguished using field data”. A very strong addition in my opinion (in Section 3.3.) would be to compare various attributes of the different gap types (branch/snapped dead/uprooted/standing dead), i.e. area/perimeter ratios, average height of lost canopy, average height within gap, canopy closure rate after gap formation (or even canopy changes before gap formation), and to really test whether there are indicators of how to differentiate different gap formation processes from remote sensing. My guess is, as you state, that you cannot reliably distinguish, for example, between uprooting and snapping, but it could still be that you would find a specific signature for standing dead trees (I imagine you could also use the RGB values of the orthophotos to identify them), or maybe you could find a difference between branch falls and tree falls? This would be very interesting ecologically!
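A minimal sketch of the mode-by-mode comparison I am suggesting, with hypothetical values and column names; with so few gaps per mode this is only illustrative, but it shows the type of summary and test I have in mind.

```python
import pandas as pd
from scipy import stats

# hypothetical table: one row per gap, with the field-assigned mortality mode
# and remotely sensed geometry/height metrics
gaps = pd.DataFrame({
    "mode": ["branch fall", "branch fall", "snapped", "snapped",
             "uprooted", "uprooted", "standing dead", "standing dead"],
    "area_m2": [15, 28, 120, 210, 310, 95, 40, 25],
    "perimeter_m": [18, 26, 55, 75, 90, 48, 30, 22],
    "pre_gap_height_m": [28, 31, 30, 33, 29, 27, 32, 30],
})

gaps["perimeter_area_ratio"] = gaps["perimeter_m"] / gaps["area_m2"]
print(gaps.groupby("mode")[["area_m2", "perimeter_area_ratio",
                            "pre_gap_height_m"]].median())

# Kruskal-Wallis test on one metric across modes (purely illustrative here)
groups = [g["perimeter_area_ratio"].to_numpy() for _, g in gaps.groupby("mode")]
print(stats.kruskal(*groups))
```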
450-457: My impression is that the concluding paragraph is describing too many aspects at the same time (Weibull, mechanisms of gap formation, regional variation). Maybe focus on one priority, which seems to me the mechanisms/drivers of gap formation, and centre the paragraph around this notion?
Citation: https://doi.org/10.5194/bg-2022-251-RC1
- AC1: 'Reply on RC1', Adriana Simonetti Lopes Peixoto, 26 Apr 2023
- RC2: 'Comment on bg-2022-251', Anonymous Referee #2, 09 Mar 2023
The paper is very well written, cites the proper literature and follows the methodology of previous papers. The authors performed an extensive field campaign to collect the gap data, which is very laborious, and I congratulate them. The analysis relies on 18 gaps detected over 2 years on a 600 by 300 m area, combining field and UAV data; of those, 14 were detected by both approaches. My main concern is that the scale of the study and the sample size of gaps analyzed (n = 18?) are too small to address any of the four objectives proposed in the study. Thus, we cannot be sure the results are indeed valid. See the two comments below:
- I don’t fully agree with the first section title “Imagery and field data have different sensitivity for detecting small gaps”. If a method finds 14 out of 18 gaps, that is 77.8 % agreement, which is excellent and not exactly what the sub-section title suggests. However, the sample size is so small that detecting or missing a single additional gap changes the accuracy by about 5 %.
- The scale analyzed does not allow stating that “UAV photogrammetry is a robust method for monitoring gap dynamics in Amazon forests”. It may be a great step towards validating the detection of gaps in the field, which is a really challenging activity. However, the limitations of scale were not stressed in the Discussion. The method should be tested at larger scales before calling it robust. One suggestion would be to use airborne LiDAR as a reference for the detection of gaps and then fly the UAV on top of it, thereby gaining a lot of scale. Ducke forest near Manaus could be a great candidate for a future experiment: it already has a few airborne lidar flight lines from past campaigns, and if more are collected together with UAV data, this would greatly help in coupling the two data sets.
Citation: https://doi.org/10.5194/bg-2022-251-RC2
- AC2: 'Reply on RC2', Adriana Simonetti Lopes Peixoto, 26 Apr 2023