Uncrewed aircraft system spherical photography for the vertical characterization of canopy structural traits

Summary The plant area index (PAI) is a structural trait that succinctly parametrizes the foliage distribution of a canopy and is usually estimated using indirect optical techniques such as digital hemispherical photography. Critically, on‐the‐ground photographic measurements forgo the vertical variation of canopy structure, which regulates the local light environment. Hence, new approaches are sought for the vertical sampling of traits. We present an uncrewed aircraft system (UAS) spherical photographic method to obtain structural traits throughout the depth of tree canopies. Our method explained 89% of the variation in PAI when compared with ground‐based hemispherical photography. When comparing UAS vertical trait profiles with airborne laser scanning data, we found the highest agreement in an open birch (Betula pendula/pubescens) canopy. Minor disagreement was found in dense spruce (Picea abies) stands, especially in the lower canopy. Our new method enables easy estimation of the vertical dimension of canopy structural traits in previously inaccessible spaces. The method is affordable and safe and therefore readily usable by plant scientists.


Fig. S1
Calibration functions for both sensors.

Notes S3 Supporting Information references.
Notes S1 Panoramic reprojection to fisheye imagery. Figure 1 in the main article text shows the image transformation, which is detailed mathematically in this note. Because there is no 1:1 mapping between pixels in the spherical panorama and the reprojected fisheye image, interpolation is required. Interpolation was performed with the interpolate.griddata function from the "SciPy" Python package (Virtanen et al., 2020), using 2D linear interpolation. The input panorama was 8,000 x 4,000 pixels; after postprocessing, the final hemispherical image was 4,000 x 4,000 pixels. The orientation of the final hemispherical photos was obtained from the yaw angle stored in the metadata of the first photo taken; the hemispherical image also retained this metadata (Li & Ratti, 2019).
To transform the equirectangular spherical 2D images into fisheye imagery, we first extract the top half of the panorama. Next, the interpolate.griddata function is combined with the expressions below to map input pixel locations to the corresponding output fisheye locations. The columns of the panorama correspond to the azimuth or horizontal angle, which varies from 0 to 2π radians. The azimuth angle of a pixel is (Wang, 2019):

$$\theta = \frac{2\pi X}{W}$$

where $\theta$ is the azimuth or horizontal angle in radians, $W$ is the width of the whole equirectangular image and $X$ is the location of the given pixel in the X-dimension (ranging in this case from 0 to 8,000 pixels). Next, we apply the following expressions:

$$x_h = \frac{H}{2} + Y\cos\theta, \qquad y_h = \frac{H}{2} - Y\sin\theta$$

in which $(x_h, y_h)$ are the coordinates in the new hemispherical (polar-projected) image, $H$ is the height of the whole equirectangular image and $Y$ is the position of the pixel in the Y-dimension (i.e., the row number, which acts as the radius and ranges in this case from 0 to 2,000 pixels). Note that these are not the usual polar equations because the image starts at the top: in standard polar coordinates the origin of the new hemispherical photo would lie in the bottom-left corner, whereas in our system it lay in the top-left corner. This is the reason for the minus sign in the expression.
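For concreteness, the full mapping can be sketched in a few lines of Python. This is a minimal illustration under the assumptions stated above (an equiangular projection and an 8,000 x 4,000 input); the function name panorama_to_fisheye and the subsampling step are our own illustrative choices, not part of the published pipeline:

```python
import numpy as np
from scipy import interpolate

def panorama_to_fisheye(pano, step=8):
    """Reproject the upper half of an equirectangular panorama into an
    upward-looking hemispherical (equiangular fisheye) image.

    pano : (H, W, C) array, e.g. 4000 x 8000 x 3.
    step : subsampling of the input grid; griddata's Delaunay
           triangulation is slow on the full-resolution grid.
    """
    H, W = pano.shape[:2]
    out = H                                  # output is H x H pixels
    top = pano[: H // 2 : step, ::step]      # keep zenith angles 0..pi/2

    # Input pixel grid: columns give the azimuth, rows give the radius.
    xs, ys = np.meshgrid(np.arange(0, W, step), np.arange(0, H // 2, step))
    theta = 2 * np.pi * xs / W               # azimuth angle, 0..2*pi
    r = ys                                   # row number doubles as radius

    # Polar -> image coordinates; the minus sign flips the Y axis
    # because image rows grow downwards from the top-left origin.
    xh = out / 2 + r * np.cos(theta)
    yh = out / 2 - r * np.sin(theta)

    # Interpolate the scattered samples onto a regular output grid.
    pts = np.column_stack([xh.ravel(), yh.ravel()])
    gx, gy = np.meshgrid(np.arange(out), np.arange(out))
    fisheye = np.zeros((out, out, pano.shape[2]), dtype=float)
    for c in range(pano.shape[2]):
        fisheye[..., c] = interpolate.griddata(
            pts, top[..., c].ravel().astype(float), (gx, gy),
            method="linear", fill_value=0.0)
    return fisheye
```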
Notes S2 Calibration of the imaging sensors.
To avoid projection errors, we calibrated both of our sensors. First, we conducted the calibration following the Hemisfer calibration protocol. For the DHP, this calibration was performed indoors by taking an upward-looking hemispherical photo of 9 markers placed on the wall and ceiling of a room at 10-degree steps, ranging from a 0-degree to a 90-degree zenith angle. The relationship between zenith angle and image radius was then used to derive the radial distortion (Fig. S1a, in blue). A small amount of radial distortion was found, and a custom lens function was defined in Hemisfer for our subsequent analyses.
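The fit itself amounts to a few lines of NumPy. In the sketch below, the marker radii are hypothetical placeholders, not our measured values:

```python
import numpy as np

# Zenith angles of the wall/ceiling markers (10-degree steps) and their
# measured image radii in pixels. These radii are hypothetical
# placeholders, not the values measured in our calibration.
zenith_deg = np.arange(0, 100, 10)
radius_px = np.array([0., 225., 448., 672., 893.,
                      1116., 1339., 1560., 1781., 2000.])

# A perfectly equiangular lens maps zenith angle linearly to radius;
# the departure from that straight line is the radial distortion.
ideal_px = radius_px[-1] * zenith_deg / 90.0
distortion_px = radius_px - ideal_px

# A low-order polynomial fit of radius against zenith angle gives a
# custom lens function of the kind that can be entered into Hemisfer.
lens_fn = np.polynomial.Polynomial.fit(zenith_deg, radius_px, deg=3)
print(lens_fn(45.0))   # predicted radius at a 45-degree zenith angle
```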
After attempting the same indoor procedure with the UAS, it became clear that a spherical image could not be taken indoors: strong drift caused by indoor air flows, turbulence and positioning failure resulted in very low stitching quality. The procedure was therefore repeated outdoors, under a bridge. The results showed no radial distortion for the UAS-based hemispherical image after converting the captured spherical image into a digital hemispherical photo (Fig. S1a, in orange).
Additionally, we performed a second calibration procedure to verify the previous results. In this case, the analysis was carried out computationally with the "OpenCV" Python package (Bradski, 2000): the fisheye.calibrate function was used for the DHP sensor and the calibrateCamera function for the UAS sensor. This procedure yielded the camera parameters and distortion coefficients of both sensors, which we then used to derive the radial distortion of the DHP hemispherical photo (Fig. S1b, in blue) and the distortion values of the individual images taken by the UAS (not shown).
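For reference, a condensed sketch of this computational check, assuming a checkerboard calibration target (the 9 x 6 corner pattern and the image folder are illustrative assumptions, not part of our protocol; in practice each sensor is calibrated with its own image set):

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry: hypothetical 9 x 6 inner corners; the image
# folder name is likewise illustrative.
pattern = (9, 6)
objp = np.zeros((1, pattern[0] * pattern[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calibration/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
size = gray.shape[::-1]

# Standard pinhole model, as used for the UAS sensor's individual images:
# yields the camera matrix and radial/tangential distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [p.reshape(-1, 3) for p in obj_pts], img_pts, size, None, None)

# Fisheye model, as used for the DHP sensor.
Kf, Df = np.zeros((3, 3)), np.zeros((4, 1))
rms_f, Kf, Df, _, _ = cv2.fisheye.calibrate(
    obj_pts, img_pts, size, Kf, Df,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW)
```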