
Services Portfolio

Services

Services are built by joining Processing Chains, each of which is assembled from a number of individual Building Blocks corresponding to the elementary pieces of the chain.

These Services will be deployed in the different geographic areas in accordance with the needs expressed by the partners and stakeholders.

 

Forest site characterisation

The Forest Characterisation service provides information on the status and condition of predefined forest properties: forest extent, stand delineation, forest infrastructures, main forest types, stand variables (dominant height, stand age, stand density), forest disturbances (clear cuts, fire scars) and topography (DEM, slope, aspect).

Products


A forest mask classifies land cover into forest and non-forest. The forest mask product is the basis for other products such as forest type classification or vegetation stress monitoring.

 

Forest Mask (LIDAR) – FORA (Product using LIDAR data)

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements for the required outputs. Spatial data (LIDAR, maps and vectors) should be uploaded into a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check LIDAR data classification presence/absence.

  • If classification is present, check completeness and compatibility with ASPRS standards.

  • Perform a numerical analysis of the point density and the maximum and minimum coordinates.

  • Display the LIDAR data to check their overlap with the AOI and the overall data quality.

If the classification QC is positive, this step is omitted. If it is negative, the LIDAR classification algorithm is run; this algorithm is a “must” to distinguish ground-level LIDAR points from vegetation points.

The estimation of the forest mask is driven by the LIDAR Forest Canopy Cover (LFCC). This process has the following steps and default parameters:


  • Height break value to calculate the LFCC: 2 m.

  • Output pixel resolution: 5×5 m to 10×10 m (depending on point density).

  • Definition of the minimum height to be included in the height percentile statistics: 2 m (this threshold depends on the forest vertical structure).

  • Development of grid LIDAR statistics.

  • Extraction of the LFCC from the LIDAR statistics as raster (e.g. GeoTIFF) layers.

This phase consists of two steps (see the sketch after this list):


  • Selection of the LFCC threshold that defines ‘forest/non-forest’ (usually 10%).

  • Delivery of a standard GIS file (shapefile, GeoTIFF, ESRI raster) with the forest/non-forest mask.
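
As an illustration of these two phases, the following Python sketch grids the LFCC from a height-normalised point cloud and applies the 10% threshold. The array inputs and the function name are assumptions of the example, not part of the production chain.

```python
import numpy as np

def lidar_forest_mask(x, y, z_agl, cell=10.0, height_break=2.0, lfcc_threshold=0.10):
    """Sketch: forest/non-forest mask from the LIDAR Forest Canopy Cover (LFCC).

    x, y  : point coordinates (m); z_agl : heights above ground level (m).
    cell  : output pixel size (5-10 m, depending on point density).
    Returns (mask, lfcc): boolean grid (True = forest) and the LFCC grid.
    """
    col = ((x - x.min()) // cell).astype(int)
    row = ((y - y.min()) // cell).astype(int)
    shape = (row.max() + 1, col.max() + 1)
    total = np.zeros(shape)                       # all returns per cell
    canopy = np.zeros(shape)                      # returns above the height break
    np.add.at(total, (row, col), 1)
    np.add.at(canopy, (row, col), z_agl > height_break)
    lfcc = np.full(shape, np.nan)
    valid = total > 0
    lfcc[valid] = canopy[valid] / total[valid]    # canopy cover fraction
    mask = np.zeros(shape, dtype=bool)
    mask[valid] = lfcc[valid] >= lfcc_threshold
    return mask, lfcc
```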

Forest masks from other sources can be used to validate the LIDAR output. An overall accuracy of 85% is the accepted standard.

 

Forest Mask (optical) – GMV (Product using optical satellite images)

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements for the required outputs. Spatial data (images, maps and vectors) should be uploaded into a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check the processing level: Check for the presence/absence of orthorectification and/or atmospheric correction.

  • Check cloud cover: Cloud areas inside the AOI must be masked.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

If the spectral correction QC is negative, this BB is run. Atmospheric correction is mandatory for the production of spectral and texture indexes; without it, index values are distorted by differential atmospheric absorption and diffusion.

The separation between vegetated/non-vegetated and forest/non-forest areas is performed through vegetation index values.

Texture indexes are used to improve the results of the classification algorithms.

Images are classified as follows: (1) the most suitable vegetation indexes for the site are selected, taking local factors into consideration; (2) a vegetation/non-vegetation threshold is established for the chosen index; (3) the algorithm is run to classify the image into two classes: vegetation/non-vegetation.
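
A minimal sketch of steps (2)–(3), assuming NDVI as the chosen index; the threshold value is illustrative and must be tuned to the site.

```python
import numpy as np

def ndvi_vegetation_mask(red, nir, threshold=0.4):
    """Sketch: two-class vegetation/non-vegetation mask from an NDVI threshold.

    red, nir  : atmospherically corrected reflectance bands (float arrays).
    threshold : illustrative site-specific cut-off, not a fixed default.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero
    return ndvi >= threshold                  # True = vegetation
```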

Vegetation/non-vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification into a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated into a final forest/non-forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.
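
A minimal SciPy-based sketch of a 3×3 majority filter over an in-memory class raster (the window size is an assumption of the example; sieve filtering by patch size follows the same pattern).

```python
import numpy as np
from scipy import ndimage

def majority_filter(classes, size=3):
    """Sketch: majority filter removing 'salt and pepper' pixels from an
    integer class raster; patches below the MMU can then be sieved out."""
    def majority(window):
        vals, counts = np.unique(window.astype(int), return_counts=True)
        return vals[counts.argmax()]
    return ndimage.generic_filter(classes, majority, size=size, mode="nearest")
```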

Forest maps from other sources can be used to validate the output data if they are available. An overall accuracy of 85% is considered by the remote sensing community as the standard of minimum acceptable quality for remote sensing products.


Stand delineation defines homogeneous forest management units according to given criteria (dominant species, age and/or tree density). Stand delineation also highlights property boundaries and management units (stands). This product is the baseline for forest type classifications and other forestry products.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements for the required outputs. Spatial data (images, LIDAR, maps and vectors) should be uploaded into a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check the processing level of the optical images: Detect whether the images need orthorectification.

  • Check the pixel size of the optical images: Stand delineation can only be produced with VHR1 or VHR2 optical images (pixel size under 5 meters).

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

  • Check LIDAR data classification presence/absence.

  • If classification is present, check completeness and compatibility with ASPRS standards.

  • Perform a numerical analysis of the point density and the maximum and minimum coordinates.

  • Display the LIDAR data to check their overlap with the AOI and the overall data quality.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are distorted by differential atmospheric absorption and diffusion.

Texture indexes are used to improve the results of the segmentation algorithms.

If the classification QC is positive, this step is omitted. If it is negative, the LIDAR classification algorithm is run; this algorithm is a “must” to distinguish ground-level LIDAR points from vegetation points. After the execution of the LIDAR classification, two LIDAR statistics are calculated:


  • LIDAR Forest Canopy Cover (LFCC).

  • LIDAR 95th height percentile.

Segmentation algorithms split the image into homogeneous pixel clusters based on texture and spectral values, to distinguish forest stands from other land uses and covers.
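
As a sketch of this step, the graph-based Felzenszwalb segmentation from scikit-image can produce candidate stand segments. The parameters are illustrative and must be tuned per site and sensor; a recent scikit-image with channel_axis support is assumed.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def delineate_candidate_stands(image, scale=200, sigma=0.8, min_size=50):
    """Sketch: segment a multiband image (rows, cols, bands) into a label
    raster of spectrally homogeneous clusters for later manual refinement."""
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)   # normalise to [0, 1]
    return felzenszwalb(img, scale=scale, sigma=sigma,
                        min_size=min_size, channel_axis=-1)
```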

The results of the automatic segmentation are visually checked by an operator. Segments can be aggregated, redrawn or split using expert criteria. In this phase additional information can be introduced to improve the results.

Stand delineation outputs are checked against other ground truth sources. The US National Map Accuracy Standards (NMAS) apply to the spatial accuracy of the stand delineation outputs. Accuracy limits depend on the scale of the product, e.g. the maximum allowed horizontal error (CE90) is 2.54 meters for a map scale of 1:5,000.


The forest infrastructures product geographically describes forest cartographic features. These can be point features (e.g. logging machinery/equipment, road infrastructures), linear features (e.g. forest trails, forest roads, streams, contour lines, forest boundaries, road infrastructures) and polygon features (e.g. fire scars, logging infrastructures, plot location, stand location, ownership and rights, wetlands, riparian zones, rivers).

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification.

  • Check the pixel size of the optical images: This product can only be produced over HR or VHR optical images (pixel size under 1 meter).

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

Image interpreters extract forest infrastructures manually, by visual interpretation, in vector format (point, line, polygon). Supporting information includes property maps, forest stand delineations and named locations. After completion, QC procedures apply. Features include:


  • Point: Logging machinery / equipment, road infrastructures.

  • Line: Forest trails, forest roads, streams, contour levels, forest boundaries, road infrastructures.

  • Polygon: Fire scars, logging infrastructures, plot location, stand location, ownership and rights, wetlands, riparian zones, rivers.

Once the feature extraction is validated, the work of all feature extractors is integrated into a single geodatabase. In this phase, the source data information and metadata are incorporated into the vector data.

The extracted features are checked against other ground truth sources. The US National Map Accuracy Standards (NMAS) apply to the spatial accuracy of the outputs. Accuracy limits depend on the scale of the product, e.g. the maximum allowed horizontal error (CE90) is 2.54 meters for a map scale of 1:5,000.


The Main Forest Types product provides a supervised image classification of the main forest types found in the AOI.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of spectral and texture indexes; without it, index values are distorted by differential atmospheric absorption and diffusion.

Forest type classification is performed through spectral indexes.

Texture indexes are used to improve the results of the classification algorithms.

Training areas provide the texture and spectral characteristics of the different classes (forest types). Side information (e.g. non-EO data such as LULC maps, terrestrial habitats and/or forest inventories) is also used to determine the characteristics and number of classes.

Training areas are used thereafter to classify the image through algorithms such as random forest, parallelepiped and maximum likelihood.
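
A minimal scikit-learn sketch of the random-forest option; the band stack, training mask and labels are assumed in-memory inputs of the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_forest_types(bands, training_mask, training_labels):
    """Sketch: supervised forest-type classification with a random forest.

    bands           : (rows, cols, n_bands) spectral/texture layer stack.
    training_mask   : boolean raster marking training-area pixels.
    training_labels : class codes for the pixels selected by training_mask.
    """
    rows, cols, n_bands = bands.shape
    X = bands.reshape(-1, n_bands)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[training_mask.ravel()], training_labels)
    return clf.predict(X).reshape(rows, cols)
```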

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.

Forest maps and surveys from other sources are used to validate the output data, if available. An overall accuracy of 85% is the minimum acceptable standard.


The Stand height product provides the dominant tree heights for each forest management sector.

Stand Height (LIDAR) – FORA (Product using LIDAR data)

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements for the required outputs. Spatial data (LIDAR, maps and vectors) should be uploaded into a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check LIDAR data classification presence/absence.

  • If classification is present, check completeness and compatibility with ASPRS standards.

  • Perform a numerical analysis of the point density and the maximum and minimum coordinates.

  • Display the LIDAR data to check their overlap with the AOI and the overall data quality.

If the classification QC is positive, this step is omitted. If it is negative, the LIDAR classification algorithm is run; this algorithm is a “must” to distinguish ground-level LIDAR points from other points.

Ground inventory data must be geographically correlated with the LIDAR data. This phase consists of two steps:


  • Analysis of the accuracy of the coordinates taken in the field.

  • “Clipping” of the point cloud to the inventory area shape.

LIDAR statistics are processed for the whole study area (see the sketch after this list):


  • Development of grid LIDAR statistics through GridMetrics.

  • Definition of the grid pixel (e.g. 20–25 m) where the statistics are calculated.
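
A GridMetrics-style sketch of the two bullets above, computing a per-cell height percentile from a height-normalised point cloud; the array inputs are assumptions of the example.

```python
import numpy as np
from collections import defaultdict

def grid_height_percentile(x, y, z_agl, cell=20.0, q=95, min_height=2.0):
    """Sketch: per-cell height percentile (e.g. the LIDAR 95th percentile)."""
    keep = z_agl >= min_height                        # drop ground/low returns
    col = ((x - x.min()) // cell).astype(int)
    row = ((y - y.min()) // cell).astype(int)
    cells = defaultdict(list)
    for r, c, z in zip(row[keep], col[keep], z_agl[keep]):
        cells[(r, c)].append(z)
    grid = np.full((row.max() + 1, col.max() + 1), np.nan)
    for (r, c), values in cells.items():
        grid[r, c] = np.percentile(values, q)
    return grid
```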

The stand height model is then developed and applied (see the sketch after this list):


  • Building of the database structure.

  • Adding data from the field and the ClipData results.

  • Modelling procedures using R: variable selection, analysis of different model formulations, regression techniques, statistical fitting accuracy assessment, and model diagnosis (numerical and graphical) and validation.

  • Application of the model to the grid developed in BB-LM-6.

  • Delivery of a standard GIS file (shapefile, GeoTIFF, ESRI raster) with the stand height information.
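
The document prescribes R for this modelling; as a language-neutral illustration only, the following Python/statsmodels sketch fits a plot-level regression and applies it wall-to-wall to the metrics grid. All inputs are assumed arrays of the example.

```python
import numpy as np
import statsmodels.api as sm

def fit_and_map_stand_height(plot_metrics, plot_heights, grid_metrics):
    """Sketch: regress field-measured dominant height on LIDAR metrics,
    then predict over the full grid.

    plot_metrics : (n_plots, n_vars) predictors (e.g. height percentiles).
    plot_heights : (n_plots,) field-measured dominant heights.
    grid_metrics : (rows, cols, n_vars) wall-to-wall metrics grid.
    """
    model = sm.OLS(plot_heights, sm.add_constant(plot_metrics)).fit()
    print(model.summary())                        # diagnosis: fit, residuals...
    rows, cols, n_vars = grid_metrics.shape
    Xg = sm.add_constant(grid_metrics.reshape(-1, n_vars))
    return model.predict(Xg).reshape(rows, cols)
```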

The most robust quality check is carried out through the model diagnosis and validation. Nevertheless, Stand Height from other sources can be used to validate the output product.

Stand Height (optical) – GMV (Product using optical satellite images)

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the processing level of the optical images: Only non-orthorectified images can be used to produce stand height from optical data. Images should include orbit and RPC parameters to improve the georeferencing.

  • Check the pixel size of the optical images: The pixel size ratio between the input images and the output DEM must be 3/1 or higher.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

  • Check the overlap between input images: The stand height product can only be developed in the overlap area between the input images. AOI areas outside the image overlap will not be processed.

  • Check the difference in acquisition angle between input images: A greater difference in acquisition angle increases the accuracy of the final stand height. The minimum recommended difference between acquisition angles is 10º.

  • Check the difference in acquisition dates between input images: The input images should be acquired at dates as close as possible. The maximum recommended difference in acquisition dates is 60 days.

  • Check the input image sensor: Only images from the same sensor are allowed for this product.

  • Check the extraction of elevation and planimetric measurements from the auxiliary data: Reference planimetric and elevation data are necessary for the correct georeferencing and development of this product.

GCPs are equivalent points between the optical image and the reference data that are used to georeference the images. The coordinates and elevations of the GCPs provide the necessary information to execute the DSM generation.

DSM generation is an iterative process; it is run several times to choose the best result. DSM generation consists of two successive sub-processes:


  • The generation of epipolar images.

  • DEM automatic extraction.

There are two options to obtain the DEM from the DSM (see the sketch after this list):


  • Automatic algorithms that detect the DSM pixels in vegetation or buildings with shape and slope thresholds. Generally automatic algorithms do not provide good results for medium and low resolution DSMs (pixel size larger than 5 meters).

  • Masking of vegetation areas with the forest mask and the interpolation of the ground level values.
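
A SciPy sketch of the second option, masking vegetated pixels and interpolating the ground level underneath them; the aligned rasters are assumed inputs of the example.

```python
import numpy as np
from scipy.interpolate import griddata

def dsm_to_dem(dsm, forest_mask):
    """Sketch: DEM from a DSM by masking vegetation and interpolating
    the ground level underneath the masked pixels."""
    rows, cols = np.indices(dsm.shape)
    ground = ~forest_mask                            # pixels assumed to be ground
    return griddata(
        (rows[ground], cols[ground]), dsm[ground],   # known ground samples
        (rows, cols), method="linear")               # interpolate everywhere
```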

This phase consists of two successive sub-processes: 1) generation of the canopy height model (CHM); 2) calculation of the average tree height by forest stand. The CHM is a raster containing the dominant tree height above the ground, obtained by subtracting the DEM from the DSM. The tree height by forest stand can then be computed with zonal statistics, which compute the statistical variables of the CHM raster contained in each polygon of the stand delineation.
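
A compact sketch of both sub-processes, with the stand delineation assumed to be rasterised into an integer label grid.

```python
import numpy as np

def stand_mean_heights(dsm, dem, stand_ids):
    """Sketch: CHM = DSM - DEM, then zonal mean height per stand.

    stand_ids : integer raster of stand labels (0 = outside any stand).
    Returns {stand_id: mean CHM height}.
    """
    chm = dsm - dem                                  # canopy height model
    return {int(sid): float(np.nanmean(chm[stand_ids == sid]))
            for sid in np.unique(stand_ids) if sid != 0}
```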

Stand height data from external sources can be used to validate the output data if they are available. Elevation accuracy is measured as RMSE. The expected maximum allowable error is established at 4 meters, following Canadian DEM standards.

Stand Height (SAR) – GMV (Product using SAR data)

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the SAR image product: Only interferometric products can be used to produce stand height with SAR data. Other SAR products cannot be used.

  • Check the SAR image pixel size: The pixel size ratio between the input images and the output DEM must be 3/1 or higher.

  • Check the overlap between input images: The stand height product can only be developed over the overlap area between the input images. AOI areas outside the image overlap will not be processed.

  • Check the difference in acquisition dates between input images: The image acquisition dates should be as close as possible to improve the final results. The maximum recommended difference in acquisition dates is 60 days.

  • Check whether the reference data provide planimetric and elevation measurements: Reference planimetric and elevation data are necessary to extract GCPs and develop this product.

Coregistration is an automatic process that locates equivalent points in both SAR images.

Interferometric SAR (InSAR) allows accurate measurement of the radiation travel path. Measuring travel-path variations as a function of satellite position and acquisition time allows the generation of digital elevation models and the measurement of centimetric surface deformations of the terrain.

The Digital Surface Model captures the surface height, not the ground elevation. A DSM is an elevation model that includes the tops of buildings, trees, power lines and any other objects. It is commonly seen as a canopy model that only ‘sees’ the ground where nothing covers it.

A SAR DSM contains only relative values and is not georeferenced. Georeferencing these datasets requires a local DEM (from a local provider) or a global DEM (e.g. SRTM or ASTER). The DEM is then obtained from the DSM through one of two options:


  • Automatic algorithms that detect the DSM pixels in vegetation or buildings with shape and slope thresholds. Generally automatic algorithms do not provide good results for medium and low resolution DSMs (pixel size larger than 5 meters).

  • Masking of vegetation areas with the forest mask and the interpolation of the ground level values.

This phase consists of two successive sub-processes: 1) generation of the canopy height model (CHM); 2) calculation of the average tree height by forest stand. The CHM is a raster containing the dominant tree height above the ground, obtained by subtracting the DEM from the DSM. The tree height by forest stand can then be computed with zonal statistics, which compute the statistical variables of the CHM raster contained in each polygon of the stand delineation.

Stand height data from external sources can be used to validate the output data if they are available. Elevation accuracy is measured as RMSE. The expected maximum allowable error is established at 4 meters, following Canadian DEM standards.


The forest age product is only necessary when the forest management plan does not specify the stands’ dominant ages. Forest age is calculated by management stand using a multi-temporal analysis of historical satellite data.

Data processing steps:

Input data characteristics are checked to verify that they fulfil the minimum requirements to produce the output data. Input spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. The input data of this product consist of a series of historical Landsat images acquired at 5-year intervals (proposed reference dates: 1983, 1988, 1993, 1998, 2003, 2008, 2013, 2018). Quality control must include the following checks:


  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

  • Check the processing level of the spatial correction of the optical images: If the images are not properly orthorectified, they should be geometrically corrected to ensure their spatial accuracy.

  • Check the processing level of the radiometric correction of the optical images: If the input images do not have reflectance values, they should be atmospherically corrected to transform radiances into reflectances.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture indexes; without it, index values are distorted by differential atmospheric absorption and diffusion.

The differentiation between vegetated/non-vegetated and forest/non-forest areas is performed through thresholds on spectral indexes.

Texture indexes are used to improve the results of the classification algorithms.

The methodology used to generate the forest mask product (S1 P1) is used to identify the forest coverage at each reference date; it is implemented in this phase and in phases 7 and 8. Images are classified as follows: (1) the most suitable vegetation indexes for the site are selected, taking local factors into consideration; (2) a vegetation/non-vegetation threshold is established for the chosen index; (3) the algorithm is run to classify the image into two classes: vegetation/non-vegetation.

Vegetation/non-vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification into a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated into a final forest/non-forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.

This phase has two sub-phases:

1) Calculation of the date when the forest began to grow in each image pixel. This is done by checking the presence of forest at each reference date for every pixel. The date when the forest started to grow is taken as the most recent date at which the pixel was classified as non-forest. If forest is present at all reference dates, the forest is at least as old as the oldest Landsat record (1983).

2) Calculation of the forest age of each forest management stand from the raster of dates and the stand delineation.
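
A sketch of sub-phase 1, assuming a stack of forest/non-forest masks ordered from the oldest to the most recent reference date.

```python
import numpy as np

def forest_start_dates(masks, years):
    """Sketch: per-pixel date at which the forest began to grow.

    masks : (n_dates, rows, cols) boolean stack, True = forest.
    years : reference years, oldest first (e.g. [1983, 1988, ..., 2018]).
    """
    # Forest at every date: at least as old as the first record (1983).
    start = np.full(masks[0].shape, years[0])
    for mask, year in zip(masks, years):
        start[~mask] = year        # most recent non-forest date wins
    return start
```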

Forest age data from external sources can be used to validate the output data if they are available. An overall accuracy of 85% is considered by the remote sensing community as the standard for minimum acceptable quality of remote sensing products.


Burnt scar detection is a multi-temporal product: it starts with the production of a baseline (forest mask), which is reviewed in each iteration to detect changes in the forest due to wildfires. Burnt scar detection can be executed as a stand-alone product (the identification of the burnt areas at a given date) or as a surveillance product updated at fixed time intervals (e.g. six months).

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of spectral and texture indexes; without it, index values are distorted by differential atmospheric absorption and diffusion.

The separation between vegetated/non-vegetated and forest/non-forest areas is performed through spectral indexes.

Texture indexes are used to improve the results of the classification algorithms.

The Burn Area Index (BAI) determines whether bare soil was burnt in a wildfire. This index is calculated on the most recent image, because the aim of this product is to detect the wildfires that happened between the most recent image and the reference image. The index highlights burned land in the red to near-infrared spectrum by emphasising the charcoal signal in post-fire images. The BAI is computed from the spectral distance of each pixel to a reference spectral point where recently burned areas converge; brighter pixels indicate burned areas.
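
A sketch of the BAI computation; the convergence point (red = 0.1, NIR = 0.06) follows the common BAI formulation and should be verified against the project reference [RD.54].

```python
import numpy as np

def burn_area_index(red, nir):
    """Sketch: Burn Area Index from red/NIR reflectances; higher (brighter)
    values indicate recently burned areas."""
    return 1.0 / ((0.1 - red) ** 2 + (0.06 - nir) ** 2 + 1e-9)
```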

The methodology used to generate the forest mask product (S1 P1) is used to identify the forest coverage at each reference date; it is implemented in this phase and in phases 8 and 9. Images are classified as follows:

1) the most suitable vegetation indexes for the site are selected, taking into consideration local factors.

2) A vegetation/non-vegetation threshold is established for the specific index chosen.

3) The algorithm is run to classify the image in two classes: vegetation/non-vegetation.

Vegetation/non-vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification into a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated into a final forest/non-forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.

This algorithm determines forest losses between the reference image and the most recent image.

The goal of this product is to distinguish forest losses due to fires from forest losses due to other causes. The correlation algorithm uses the BAI with a threshold value of 2 [RD.54] to identify losses due to fires; all other losses are attributed to causes other than fire.

Forest surveys from other sources can be used to validate the output product, if available. An overall accuracy of 85% is the lowest acceptable standard.


The clear cuts product informs of the forest surface from which every tree has been cut down and removed. It is a multitemporal product in so far as it requires a baseline forest mask for a given initial time (T0 mask) which is reanalysed in subsequent iterations, to detect forest changes due to logging.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of spectral and texture indexes; without it, index values are distorted by differential atmospheric absorption and diffusion.

The separation between vegetated/non-vegetated and forest/non-forest areas is performed through spectral indexes.

Texture indexes are used to improve the results of the classification algorithms.

The Burn Area Index (BAI) determines whether bare soil was burnt in a wildfire. This index is calculated on the most recent image, because the aim of this product is to detect the wildfires that happened between the most recent image and the reference image. The index highlights burned land in the red to near-infrared spectrum by emphasising the charcoal signal in post-fire images. The BAI is computed from the spectral distance of each pixel to a reference spectral point where recently burned areas converge; brighter pixels indicate burned areas.

The methodology used to generate the forest mask product (S1 P1) is used to identify the forest coverage at each reference date; it is implemented in this phase and in phases 8 and 9. Images are classified as follows:

1) the most suitable vegetation indexes for the site are selected, taking into consideration local factors.

2) A vegetation/non-vegetation threshold is established for the specific index chosen.

3) The algorithm is run to classify the image in two classes: vegetation/non-vegetation.

Vegetation/non-vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification into a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated into a final forest/non-forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.

This algorithm determines forest losses between the reference image and the most recent image.

The goal of this product is to distinguish forest losses due to fires from forest losses due to other causes, most likely clear cuts. The correlation algorithm uses the BAI with a threshold value of 2 to identify losses due to fires; all other losses are assumed to be potential clear cuts.

The potential clear cuts are visually checked by an operator, who tries to locate features indicating that the forest loss is due to logging. Such features include regular borders of the forest loss following management plots, or trunk piles.

Forest surveys from other sources can be used to validate the output product, if available. An overall accuracy of 85% is the lowest acceptable standard. Expert knowledge is also required to QC this product.


DEM-Elevation product consists of a sampled array of elevations for a number of ground positions at regularly spaced intervals.

DEM-Elevation (LIDAR) – FORA (Product using LIDAR data)

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements for the required outputs. Spatial data (LIDAR, maps and vectors) should be uploaded into a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check LIDAR data classification presence/absence.

  • If classification is present, check completeness and compatibility with ASPRS standards.

  • Perform a numerical analysis of the point density and the maximum and minimum coordinates.

  • Display the LIDAR data to check their overlap with the AOI and the overall data quality.

If the classification QC is positive, this step is omitted. If it is negative, the LIDAR classification algorithm is run; this algorithm is a “must” to distinguish ground-level LIDAR points from other points.

Ground inventory data must be geographically correlated with the LIDAR data. This phase consists of two steps:


  • Analysis of the accuracy of the coordinates taken in the field.

  • “Clipping” of the point cloud to the inventory area shape.

The development of the DEM based on LIDAR information follows these steps:


  • Grid surface creation.

  • Class definition: defining the points classified as ground (depending on the classification of each LIDAR dataset).

  • Pixel definition: depending on the point density and the orography: 1×1, 2×2, 5×5…

Elevation data from external sources can be used to validate the output data if they are available. Elevation accuracy is measured as RMSE. The expected maximum allowable error is established at 4 meters, following Canadian DEM standards.

DEM-Elevation (optical) – GMV (Product using optical satellite images)

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the processing level of the optical images: Only non-orthorectified images can be used to produce this product with optical data. Images should include orbit and RPC parameters to improve the georeferencing.

  • Check the pixel size of the optical images: The pixel size ratio between the input images and the output DEM must be 3/1 or higher.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

  • Check the overlap between input images: The product can only be developed in the overlap area between the input images. AOI areas outside the image overlap will not be processed.

  • Check the difference in acquisition angle between input images: A greater difference in acquisition angle increases the accuracy of the final DEM. The minimum recommended difference between acquisition angles is 10º.

  • Check the difference in acquisition dates between input images: The input images should be acquired at dates as close as possible. The maximum recommended difference in acquisition dates is 60 days.

  • Check the input image sensor: Only images from the same sensor are allowed for this product.

  • Check the extraction of elevation and planimetric measurements from the auxiliary data: Reference planimetric and elevation data are necessary for the correct georeferencing and development of this product.

GCPs are equivalent points between the optical image and the reference data that are used to georeference the images. The coordinates and elevations of the GCPs provide the necessary information to execute the DSM generation.

DSM generation is an iterative process; it is run several times to choose the best result. DSM generation consists of two successive sub-processes:


  • The generation of epipolar images.

  • DEM automatic extraction.

There are two options to obtain the DEM from the DSM.


  • Automatic algorithms that detect the DSM pixels in vegetation or buildings with shape and slope thresholds. Generally automatic algorithms do not provide good results for medium and low resolution DSMs (pixel size larger than 5 meters).

  • Masking of vegetation areas with the forest mask and the interpolation of the ground level values.

After completing the DSM to DEM conversion the final product is refined to remove artefacts and invalid values. The refinement algorithms can include:


  • Statistical filters of the central values: mean, median, mode.

  • Noise removal and smoothing filters.

  • Bump and pit filters.

Elevation data from external sources can be used to validate the output data if they are available. Elevation accuracy is measured as RMSE. The expected maximum allowable error is established at 4 meters, following Canadian DEM standards.

DEM-Elevation (SAR) – GMV (Product using SAR data)

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the SAR image product: Only interferometric products can be used to produce this product with SAR data. Other SAR products cannot be used.

  • Check the SAR image pixel size: The pixel size ratio between the input images and the output DEM must be 3/1 or higher.

  • Check the overlap between input images: The product can only be developed over the overlap area between the input images. AOI areas outside the image overlap will not be processed.

  • Check the difference in acquisition dates between input images: The image acquisition dates should be as close as possible to improve the final results. The maximum recommended difference in acquisition dates is 60 days.

  • Check whether the reference data provide planimetric and elevation measurements: Reference planimetric and elevation data are necessary to extract GCPs and develop this product.

Coregistration is an automatic process that locates equivalent points in both SAR images.

Interferometric SAR (InSAR) allows accurate measurement of the radiation travel path. Measuring travel-path variations as a function of satellite position and acquisition time allows the generation of digital elevation models and the measurement of centimetric surface deformations of the terrain.

The Digital Surface Model captures the surface height, not the ground elevation. A DSM is an elevation model that includes the tops of buildings, trees, power lines and any other objects. It is commonly seen as a canopy model that only ‘sees’ the ground where nothing covers it.

A SAR DSM contains only relative values and is not georeferenced. Georeferencing these datasets requires a local DEM (from a local provider) or a global DEM (e.g. SRTM or ASTER). The DEM is then obtained from the DSM through one of two options:


  • Automatic algorithms that detect the DSM pixels in vegetation or buildings with shape and slope thresholds. Generally automatic algorithms do not provide good results for medium and low resolution DSMs (pixel size larger than 5 meters).

  • Masking of vegetation areas with the forest mask and the interpolation of the ground level values.

After completing the DSM to DEM conversion the final product is refined to remove artefacts and invalid values. The refinement algorithms can include:


  • Statistical filters of the central values: mean, median, mode.

  • Noise removal and smoothing filters.

  • Bump and pit filters.

Elevation data from external sources can be used to validate the output data if they are available. Elevation accuracy is measured as RMSE. The expected maximum allowable error is established at 4 meters, following Canadian DEM standards.


The DEM-Slope product can be generated from DEM data from any source: LIDAR, optical images, SAR or a third-party DEM. The slope product provides the incline of the terrain in each pixel, expressed in degrees.

The production of DEM-Slope is simple: a slope calculation algorithm (e.g. the Burrough and McDonnell neighbourhood method) is run over DEM data from any source (e.g. S1 P9 or a third-party DEM).


The DEM-Aspect product can be generated from DEM data from any source: LIDAR, optical images, SAR or a third-party DEM. The aspect product provides the orientation of the terrain in each pixel, expressed in degrees from geographic north.

The production of DEM-Aspect is likewise simple: an aspect calculation algorithm (e.g. the Burrough and McDonnell neighbourhood method) is run over DEM data from any source (e.g. S1 P9 or a third-party DEM), as sketched below.
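
A simplified sketch of both derivatives using central differences; the full Burrough and McDonnell method weights the complete 3×3 neighbourhood, and the aspect sign convention should be verified against the GIS package in use.

```python
import numpy as np

def slope_aspect(dem, cell=25.0):
    """Sketch: slope and aspect in degrees from a DEM raster.
    Border pixels wrap around via np.roll and should be discarded."""
    dz_dx = (np.roll(dem, -1, 1) - np.roll(dem, 1, 1)) / (2 * cell)  # eastward gradient
    dz_dy = (np.roll(dem, 1, 0) - np.roll(dem, -1, 0)) / (2 * cell)  # northward gradient
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360  # downslope, clockwise from north
    return slope, aspect
```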


The site index is an indicator of the forest biological productivity; it is a baseline for many forest management activities such as forest inventory. The site index can only be obtained if two consecutive LIDAR flights are available, together with site index curve information.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements for the required outputs. Spatial data (LIDAR, maps and vectors) should be uploaded into a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check LIDAR data classification presence/absence.

  • If classification is present, check completeness and compatibility with ASPRS standards.

  • Perform a numerical analysis of the point density and the maximum and minimum coordinates.

  • Display the LIDAR data to check their overlap with the AOI and the overall data quality.

If the classification QC is positive, this step is omitted. If it is negative, the LIDAR classification algorithm is run; this algorithm is a “must” to distinguish ground-level LIDAR points from other points.

LIDAR statistics are processed for the whole study area:


  • Development of grid LIDAR statistics through gridmetrics.

  • Definition of the grid pixel (e.g. 20–25 m).

1) Building of the database structure.

2) Adding existing height-diameter equations.

3) Adding data from the field and the ClipData results.

4) Modelling using R: variable selection, analysis of different model formulations, regression techniques, statistical fitting accuracy assessment, and model diagnosis (numerical and graphical) and validation.

The site index is then derived in two steps:

1) Height information from LIDAR datasets of different dates is introduced into the existing site index curves.

2) Site index information is applied to every pixel with height information.

The most robust quality check is carried out through the model diagnosis and validation. Nevertheless, stand height from SAR can be used to validate the output data. In addition, a data quality check of the stand height is performed to ensure that the data range is logical. The site index quality check derives from the height quality check; in addition, a logical and biological analysis of the data behaviour on the site index curves is performed.


Wood characterisation

The Wood Characterisation service consists of the modelling and mapping of wood fibre attributes linked to wood product potential and performance (e.g. pulp yield, density, strength and stiffness of lumber). The data handled by the wood characterisation models are remote sensing (LIDAR) data, environmental parameters and field measurements of timber attributes. Thereafter, wood characteristics are extrapolated to larger forest areas.

Products


The goal of this product is to obtain the wood density at stand level, using satellite data, LIDAR and climatic data, with density measured by extracting cores from the trees as ground truth. A predicted value for wood density at the tree level will be calculated according to a mathematical model. Wood density is a key wood quality attribute, most relevant for the pulp industry: when density increases, raw wood demands are lower and yields are higher. Wood density is also related to drought resistance, and the wood characterisation product will try to find a predictive parameter for drought susceptibility.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are distorted by differential atmospheric absorption and diffusion.

Spectral indexes are computed to find possible wood quality predictors to feed the wood quality models.

SAR data are a good predictor of wood density; however, the raw SAR signal is too coarse. This product refines the SAR signal through decomposition coefficients, providing a larger variety of parameters, some of which are closer to wood quality properties. SAR pixel intensity values are often converted to a physical quantity called the backscattering coefficient, or normalised radar cross-section, measured in decibels (dB). It is expected that some of these parameters will be the most adequate to feed the wood density model.
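
A minimal sketch of the dB conversion, assuming a radiometrically calibrated backscatter band in linear power units.

```python
import numpy as np

def backscatter_to_db(sigma0_linear):
    """Sketch: convert calibrated SAR backscatter (linear power) to dB."""
    return 10.0 * np.log10(np.maximum(sigma0_linear, 1e-10))  # guard against log(0)
```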

LIDAR statistics are processed for the whole study area:


  • Development of grid LIDAR statistics through gridmetrics.

  • Definition of the grid pixel (e.g. 20–25 m).

The calculation is based on the relationship between the increment core density (at breast height) and the global density of the tree (internal model). This phase consists of the following steps:

1) Increment cores will be processed in the laboratory to obtain basic wood density by gravimetric methods. Then, the data are summarized by plot (stand).

2) Data for tree characterisation (basal area, diameter at breast height, height, etc.) are also measured in an ordinary field inventory.

3) Application of the model to estimate the global tree density from the increment core density at breast height.

EO data (LIDAR, satellite) and non-EO data (climate, BB-NEOI-5 and BB-NEOI-6) are correlated with the wood density values (ground truth). This correlation is done with stepwise regression, a statistical procedure in which the best-fitting parameters are found using the Akaike Information Criterion (AIC); lower AIC values indicate a better fit. This phase comprises two procedures (see the sketch after this list):

A) Generation of models: The models are generated in three successive steps:

1) The basic model contains only the dependent variable and the variable with the lowest AIC.

2) More variables are added successively. The AIC of the model is checked after each addition: if the added variable yields the smallest AIC it remains in the model, and previously added variables can be excluded because of their high AIC values.

3) The process of adding variables stops when the coefficient of determination of the dependent variable no longer increases.

B) Selection of the best model: The model generation can be run several times, because the inclusion or exclusion of different variables produces different resulting models. These models must be compared to select the best one. This selection is done in two steps:

1) Analysis of multicollinearity: A good fit can be due to a strong relation between the independent variables. This situation is not acceptable, but it can be detected with the Variance Inflation Factor (VIF). Models with high VIF are discarded.

2) Evaluation of the model performance.
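
The document prescribes R for this modelling; as an illustration only, the following Python/statsmodels sketch performs a forward stepwise selection by AIC followed by a VIF screening, a simplification of the bidirectional procedure described above.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def stepwise_aic(y, X):
    """Sketch: forward stepwise OLS selection by AIC, then VIF screening.

    y : pandas Series of the ground truth (e.g. plot wood density).
    X : pandas DataFrame of candidate predictors (LIDAR, SAR, climate...).
    """
    selected = []
    current_aic = sm.OLS(y, np.ones(len(y))).fit().aic   # intercept-only model
    improved = True
    while improved:
        improved = False
        for var in X.columns.difference(selected):
            aic = sm.OLS(y, sm.add_constant(X[selected + [var]])).fit().aic
            if aic < current_aic:                        # keep the best addition so far
                current_aic, best, improved = aic, var, True
        if improved:
            selected.append(best)
    Xs = sm.add_constant(X[selected])
    vif = {v: variance_inflation_factor(Xs.values, i + 1)
           for i, v in enumerate(selected)}              # high VIF -> discard the model
    return sm.OLS(y, Xs).fit(), vif
```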

This phase is based on the model prediction error methodology. It consists of the definition of different wood density ranges.

Wood density surveys from external sources can be used to validate the output data. In addition, wood density values obtained in some stands by the pulp industry will be used to check the model error.

The final phase of wood density ranking comprises two sub-processes:

1) Application of the wood density model.

2) Wood density range assignation.

The wood density ranking can be delivered by forest stand. This can be done with zonal statistics of the wood density ranking over the stand delineation. Other intrinsic restrictions of the model can be added in this phase; e.g. the model could be calculated only for forest stands of a specific age, height or site quality. In this case, the stand height, forest age and site index products are needed to filter the final output.


The goal of this product is to obtain a reliable estimate of wood stiffness at stand level. Stiffness is a good indicator of wood quality for structural uses and is determined by the wood Modulus of Elasticity (MOE). A mathematical model predicting MOE will be built using satellite, LIDAR and climatic data.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded into GIS software and their characteristics directly checked or measured. Quality control should include the following checks:


  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

  • Check the processing level of the spatial correction of the optical images: If the images are not properly orthorectified, they should be geometrically corrected to ensure their spatial accuracy.

  • Check the processing level of the radiometric correction of the optical images: If the input images do not have reflectance values, they should be atmospherically corrected to transform radiances into reflectances.

If the geometric correction QC is negative, this BB is run. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are distorted by differential atmospheric absorption and diffusion.

Spectral indexes are computed to find possible wood quality predictors to feed the wood quality models.

SAR data are a good predictor of wood density; however, the raw SAR signal is too coarse. This product refines the SAR signal through decomposition coefficients, providing a larger variety of parameters, some of which are closer to wood quality properties. SAR pixel intensity values are often converted to a physical quantity called the backscattering coefficient, or normalised radar cross-section, measured in decibels (dB). It is expected that some of these parameters will be the most adequate to feed the wood density model.

LIDAR statistics are processed for the whole study area:


  • Development of grid LIDAR statistics through gridmetrics.

  • Definition of the grid pixel (e.g. 20–25 m).

Field values of wood stiffness are interpolated to the stand level. This phase comprises the following steps:


  • Wood quality survey: carried out by extracting a core from the trees (wood density) and using acoustic methods on standing trees and logs (stress wave velocity).

  • Field inventory: data for stand characterisation (basal area, diameter at breast height, height, dominant height, site index, etc.).

  • Application of the existing MOE prediction model (for example, for Pinus pinaster) to the standing trees and logs (internal model) of all plots.

EO data (LIDAR, satellite) and non-EO data (climate, BB-NEOI-5 and BB-NEOI-6) are correlated with the wood stiffness values (ground truth). This correlation is done with stepwise regression, a statistical procedure in which the best-fitting parameters are found using the Akaike Information Criterion (AIC); lower AIC values indicate a better fit. This phase comprises two procedures:

A) Generation of models: The models are generated in three successive steps:

1) The basic model contains only the dependent variable and the variable with the lowest AIC.

2) More variables are added successively. The AIC of the model is checked after each addition: if the added variable yields the smallest AIC it remains in the model, and previously added variables can be excluded because of their high AIC values.

3) The process of adding variables stops when the coefficient of determination of the dependent variable no longer increases.

B) Selection of the best model: The model generation can be run several times, because the inclusion or exclusion of different variables produces different resulting models. These models must be compared to select the best one. This selection is done in two steps:

1) Analysis of multicollinearity: A good fit can be due to a strong relation between the independent variables. This situation is not acceptable, but it can be detected with the Variance Inflation Factor (VIF). Models with high VIF are discarded.

2) Evaluation of the model performance.

Sawing industry will provide direct measures of wood stiffness in some study stands to check the model error.


Each strength class is defined by a minimum value for each factor, and the limiting factor is the factor with the lowest resistance class. For conifers, the limiting factor is always the wood stiffness, so the strength class will be based on the MOE results.

Data processing steps:

Wood stiffness is assigned to strength classes based on the Spanish/European structural timber standard (UNE-EN 338:2016), as sketched below.
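
An illustrative sketch of the assignment; the thresholds below are an indicative subset of minimum mean MOE values (kN/mm²) for softwoods and must be checked against UNE-EN 338:2016 before any operational use.

```python
# Indicative subset of softwood strength classes and minimum mean MOE values
# (kN/mm2); verify against UNE-EN 338:2016 before use.
STRENGTH_CLASSES = [("C30", 12.0), ("C24", 11.0), ("C18", 9.0), ("C14", 7.0)]

def strength_class(moe_kn_mm2):
    """Sketch: assign the highest strength class whose MOE minimum is met."""
    for name, min_moe in STRENGTH_CLASSES:
        if moe_kn_mm2 >= min_moe:
            return name
    return "rejected"   # below the lowest class considered in this sketch
```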

Sawing industry will provide direct measures of the strength class in some study stands to check the model error.


The stand density product provides the tree density for each forest management stand, modelled from LIDAR data and field inventory.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements for the required outputs. Spatial data (LIDAR, maps and vectors) should be uploaded into a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check LIDAR data classification presence/absence.

  • If classification is present, check completeness and compatibility with ASPRS standards.

  • Perform a numerical analysis of the point density and the maximum and minimum coordinates.

  • Display the LIDAR data to check their overlap with the AOI and the overall data quality.

If the classification QC is positive, this step is omitted. If it is negative, the LIDAR classification algorithm is run; this algorithm is a “must” to distinguish ground-level LIDAR points from other points.

Ground inventory data must be geographically correlated with the LIDAR data. This phase consists of two steps:


  • Analysis of the accuracy of the coordinates taken in the field.

  • Point cloud clipping to the inventory area shape.

LIDAR statistics are then processed for the whole study area:


  • Development of grid LIDAR statistics through gridmetrics.

  • Definition of the grid pixel size (e.g. 20-25 m).

Model procedures are applied to develop the stand density equation:


  • Building of database structure.

  • Adding field data and clipdata results.

  • Model procedures using R: variable selection, analysis and comparison of different model formulations, regression techniques, statistical fitting accuracy assessment, and model diagnosis (numerical and graphical) and validation.

  • Application of the model to the grid developed in BB-LM-5.

  • Delivery of a standard GIS file (shapefile, GeoTIFF, ESRI raster) with the stand density information.

The most robust quality check is carried out through the model diagnosis and validation.
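A minimal sketch of such a validation step in R: k-fold cross-validation of a stand density model fitted on LIDAR grid metrics (the column names 'density', 'p95_height' and 'canopy_cover' are hypothetical placeholders):

    # k-fold cross-validation of a stand density model (sketch).
    # 'plots' holds field-measured density and LIDAR grid metrics per plot.
    cv_rmse <- function(plots, k = 5) {
      folds <- sample(rep(seq_len(k), length.out = nrow(plots)))
      errs <- sapply(seq_len(k), function(i) {
        fit  <- lm(density ~ p95_height + canopy_cover,
                   data = plots[folds != i, ])
        pred <- predict(fit, newdata = plots[folds == i, ])
        sqrt(mean((plots$density[folds == i] - pred)^2))  # fold RMSE
      })
      mean(errs)  # average out-of-fold RMSE
    }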


Biomass and CO2 stocking

The Biomass and CO2 stocking service provides estimations of the living volume of trees in a forest and its CO2 stock. These products are key for the forest biomass industry and carbon accounting.

Products


Above ground biomass provides a measurement of the forest biomass per surface unit. It can be a survey product or a multi-temporal product. The survey product consists of a single-reference-date calculation of the above ground biomass. The multi-temporal product consists of the calculation of the above ground biomass increase or decrease between two or more reference dates. The above ground biomass product can be produced from three types of EO input data: optical satellite images, SAR satellite images and LIDAR.

Above Ground Biomass(LIDAR) – FORA (Product using LIDAR data)

Data processing steps:

Input data are checked to fulfil the minimum requirements for required outputs. Spatial data (LIDAR, maps and vectors) should be uploaded in a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check the classification of the LIDAR data: Check the presence of the classification over the LIDAR data and its correctness.

  • Numerical analysis of the point density and maximum and minimum coordinates.

  • Visualization of the LIDAR data to check overlap of the data with the AOI and data quality.

If the classification QC is positive, omit this step. If the classification QC is negative, run the LIDAR classification algorithm. The LIDAR classification algorithm is required to distinguish ground-level LIDAR points from other points.

Ground inventory data must be geographically correlated with the LIDAR data. This phase consists of two steps:


  • Analysis of the accuracy of the coordinates taken in the field.

  • Point cloud clipping to the inventory area shape.

This phase consists of two steps:


  • Development of grid LIDAR statistics through gridmetrics.

  • Definition of the grid pixel size (e.g. 20-25 m).

This phase consists of the following steps:


  • Building of database structure.

  • Adding field data and clip results.

  • Model procedures using R: variable selection, analysis and comparison of different model formulations, regression techniques, statistical fitting accuracy assessment, and model diagnosis (numerical and graphical) and validation.

  • Application of the model to the grid.

  • Delivery of a standard GIS file (shapefile, GeoTIFF, ESRI raster) with the above ground biomass information.

The most robust quality check is carried out through the model diagnosis and validation.

 

Above Ground Biomass (optical) – GMV (Product using optical satellite images)

Data processing steps:

Input data are checked to fulfil the minimum requirements for required outputs. Spatial data (images, maps and vectors) should be uploaded in a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

  • Check the processing level of the spatial correction of the optical images: If the images are not properly orthorectified, they should be geometrically corrected to ensure their spatial accuracy.

  • Check the processing level of the radiometric correction of the optical images: If the input images do not have reflectance values, they should be atmospherically corrected to transform radiances into reflectances.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

If the spectral correction QC is negative, run this BB. Atmospheric correction is mandatory for the production of spectral and texture indexes; without atmospheric correction, the index values are altered by differential atmospheric absorption and diffusion.

Spectral indexes, derived from the original spectral bands, provide additional candidate predictor parameters for the biomass algorithm.

Texture indexes are used to improve the results of the classification algorithms.

This procedure needs a set of biomass field data to train the model of the correlation algorithms. The objective of this phase is to select the best field biomass predictor variables from the spectral indexes and image spectral bands. The correlation grade is established with the R2 coefficient between the predictor variables and the biomass field data. Not only linear correlations are considered: the exponential, inverse and logarithmic transforms of the spectral indexes and spectral bands are also calculated. The most correlated predictor variables are selected for the next phase of biomass predictor algorithm generation, as sketched below.
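A minimal sketch of this screening step in R, computing the R2 of each candidate transform against the field biomass; the data frame layout and column names are hypothetical:

    # Screen predictor transforms by R2 against field biomass (sketch).
    # 'df' holds the field biomass and one column per candidate predictor
    # (spectral bands and indexes); names are hypothetical.
    screen_predictors <- function(df, response = "biomass") {
      preds <- setdiff(names(df), response)
      transforms <- list(linear      = identity,
                         exponential = exp,
                         inverse     = function(x) 1 / x,
                         logarithm   = log)
      sapply(preds, function(p) {
        sapply(transforms, function(f) {
          x <- f(df[[p]])
          if (any(!is.finite(x))) return(NA_real_)  # skip invalid transforms
          summary(lm(df[[response]] ~ x))$r.squared
        })
      })  # matrix: rows = transforms, columns = predictors
    }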

The predictor algorithm is generated with the multivariate linear regression methodology. This method produces a polynomial equation for the biomass calculation, in which each predictor variable has its own fitted coefficient. The biomass predictor equation is then used to calculate the biomass distribution in raster format, using GIS software, over the forest area only.
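A minimal sketch of the fit-and-map step, assuming the terra package and a raster stack whose layer names match the selected predictors (all names are hypothetical):

    library(terra)

    # Fit the multivariate biomass model on field plots and map it over
    # the forest area (sketch; column / layer names are hypothetical).
    fit <- lm(biomass ~ ndvi + red_band + texture_mean, data = plots)

    # 'stack' is a SpatRaster whose layer names match the model predictors.
    biomass_map <- predict(stack, fit)

    # Restrict the product to forest pixels (forest mask: 1 = forest).
    biomass_map <- mask(biomass_map, forest_mask, maskvalues = 0)
    writeRaster(biomass_map, "above_ground_biomass.tif", overwrite = TRUE)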

Additional biomass field measurements can be used to calculate the accuracy of the final biomass product.

 

Above Ground Biomass (SAR) – GMV (Product using SAR data)

Data processing steps:

Input data are checked to fulfil the minimum requirements for required outputs. Spatial data (images, maps and vectors) should be uploaded in a GIS suite and quality controlled by an interpreter. Quality control should proceed with the following checks:


  • Check the cloud cover of the images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

  • Check the processing level of the spatial correction of the images: If the images are not properly orthorectified, they should be geometrically corrected to ensure their spatial accuracy.

  • Check whether the reference data provide planimetric and elevation measurements: Reference planimetric and elevation data are necessary to extract GCPs and develop this product.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This polarimetric processing transforms the raw SAR image into the following components of the SAR signal (e.g. a Freeman-Durden three-component decomposition):


  • Double-bounce power.

  • Volume power.

  • Surface power.

SAR data are a good predictor of wood density; however, the raw SAR signal is too coarse. This product refines the SAR signal through coefficients, providing a larger variety of parameters, some of which are closer to wood quality properties. SAR pixel intensity values are often converted to a physical quantity called the backscattering coefficient, or normalized radar cross-section, measured in decibels (dB); the polarimetric decomposition splits this intensity into the components listed above. Some of these parameters are expected to be the most adequate to feed the wood density model.

The SAR vegetation index (SARVI) is an alternative to the optical NDVI. The radar vegetation index is calculated from the polarized backscattering coefficients; a commonly used formulation is sketched below.
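The source text does not reproduce the formula itself; a commonly used quad-pol formulation, assumed here for illustration only, is RVI = 8 * sigma_HV / (sigma_HH + sigma_VV + 2 * sigma_HV):

    # Radar vegetation index from quad-pol backscattering coefficients
    # (sketch). This is the commonly used quad-pol RVI formulation,
    # assumed here; the project's exact formula is not reproduced in the
    # source text. Inputs are linear-scale (not dB) coefficients.
    rvi <- function(sigma_hh, sigma_vv, sigma_hv) {
      8 * sigma_hv / (sigma_hh + sigma_vv + 2 * sigma_hv)
    }

    rvi(0.08, 0.06, 0.02)  # ~0.889 for a moderately vegetated pixel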

This procedure needs a set of biomass field data to train the model of the correlation algorithms. The objective of this phase is to select the best field biomass predictor variables from the SAR polarimetry components, the SAR backscattering coefficients and the SAR vegetation index. The correlation grade is established with the R2 coefficient between the predictor variables and the biomass field data. Not only linear correlations are considered: the exponential, inverse and logarithmic transforms are also calculated. The most correlated predictor variables are selected for the next phase of biomass predictor algorithm generation.

The predictor algorithm is generated with the multivariate linear regression methodology. This method produces a polynomial equation for the biomass calculation, in which each predictor variable has its own fitted coefficient. The biomass predictor equation is then used to calculate the biomass distribution in raster format, using GIS software, over the forest area only.

Additional biomass field measurements can be used to calculate the accuracy of the final biomass product.


CO2 stock can be a survey product or a multi-temporal product. The survey product consists of a single-reference-date calculation of the CO2 stock. The multi-temporal product consists of the calculation of the CO2 stock increase or decrease between two or more reference dates. CO2 stock is related to the above ground biomass calculation; the relation between biomass and CO2 stock is established with measurements of wood carbon content.

Data processing steps:

The calculation of the CO2 stock can be done either with a simple default ratio (biomass = 2 * CO2 stock) or with carbon content data from wood quality surveys. The derived products of the CO2 stock can be a single-date CO2 stock from biomass data or the CO2 increment from biomass changes.
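A minimal sketch of the ratio-based conversion, assuming a biomass raster in t/ha and the terra package; the 0.5 factor reproduces the document's simple biomass = 2 * CO2 stock ratio and is exposed as a parameter so that survey-based carbon content can be substituted:

    library(terra)

    # Convert an above ground biomass raster (t/ha) to CO2 stock (sketch).
    # ratio = 0.5 reproduces the simple default biomass = 2 * CO2 stock;
    # replace it with survey-based carbon content data where available.
    co2_stock <- function(biomass, ratio = 0.5) {
      biomass * ratio
    }

    agb <- rast("above_ground_biomass.tif")   # hypothetical input file
    writeRaster(co2_stock(agb), "co2_stock.tif", overwrite = TRUE)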


Forest condition

The Forest Condition Service monitors and measures forest health, identifying stressed vegetation due to drought, pests or any other damaging cause.

Products


Biotic damages is an on-demand product. It is produced when an activation is raised because of a pest or disease outbreak. The objective is to detect the forest loss due to the catastrophic event by analysing an image just after the event and an image just before it. The output of this product is the forest area affected by the biotic damage and an updated forest mask after the event.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded in a GIS software and their characteristics directly checked or measured. Quality control should contain the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are altered by differential atmospheric absorption and diffusion.

The separation of vegetated/non-vegetated and forest/non-forest areas will be performed through vegetation indexes.

Texture indexes are used to improve the results of the classification algorithms.

Images are classified as follows:

1) The most suitable vegetation indexes for the site are selected, taking local factors into consideration.

2) A vegetation/non-vegetation threshold is established for the specific index chosen.

3) The algorithm is run to classify the image into two classes: vegetation/non-vegetation.

Vegetation/non vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification in a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated to a final forest / no forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.
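A minimal sketch of this classification chain (index thresholding, unsupervised refinement and majority filtering) in R with the terra package; the NDVI choice, the 0.3 threshold and the five k-means classes are hypothetical:

    library(terra)

    # 1) Vegetation index and threshold (NDVI and the 0.3 cut-off are
    #    hypothetical choices; the index is selected per site).
    red  <- rast("red_band.tif")
    nir  <- rast("nir_band.tif")
    ndvi <- (nir - red) / (nir + red)
    veg  <- ndvi > 0.3                          # TRUE = vegetation

    # 2) Unsupervised refinement: k-means on the spectral values of the
    #    vegetation pixels; an interpreter then aggregates the clusters
    #    into forest / non-forest.
    vals <- values(c(red, nir, ndvi))
    ok   <- which(complete.cases(vals) & values(veg)[, 1] == 1)
    km   <- kmeans(vals[ok, ], centers = 5)
    classes <- rast(veg)                        # empty raster, same geometry
    values(classes) <- rep(NA_real_, ncell(classes))
    values(classes)[ok] <- km$cluster

    # 3) Majority filter to clean "salt and pepper" patches below the MMU.
    clean <- focal(classes, w = 3, fun = modal, na.policy = "omit")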

This algorithm determines forest losses between the reference image and the most recent image.

Forest surveys from other sources can be used to validate the output data if they are available. An overall accuracy of 85% is considered by the remote sensing community the standard for minimum acceptable quality of remote sensing products.


Drought estimation is an on-demand product. It is produced when an activation is raised because of a drought event detected by meteorological stations. The objective is to detect the forest loss due to the catastrophic event by analysing an image just after the event and an image just before it. The output of this product is the forest area affected by the drought event and an updated forest mask after the event.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded in a GIS software and their characteristics directly checked or measured. Quality control should contain the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are altered by differential atmospheric absorption and diffusion.

The separation of vegetated/non-vegetated and forest/non-forest areas will be performed through vegetation indexes.

Texture indexes are used to improve the results of the classification algorithms.

Images are classified as follows:

1) The most suitable vegetation indexes for the site are selected, taking local factors into consideration.

2) A vegetation/non-vegetation threshold is established for the specific index chosen.

3) The algorithm is run to classify the image into two classes: vegetation/non-vegetation.

Vegetation/non vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification in a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated to a final forest / no forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.

This algorithm determines forest losses between the reference image and the most recent image.

Forest surveys from other sources can be used to validate the output data if they are available. An overall accuracy of 85% is considered by the remote sensing community the standard for minimum acceptable quality of remote sensing products.


Wind damages is an on-demand product. It is produced when an activation is raised because of a strong wind event detected by meteorological stations. The objective is to detect the forest loss due to the catastrophic event by analysing an image just after the event and an image just before it. The output of this product is the forest area affected by the wind event and an updated forest mask after the event.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded in a GIS software and their characteristics directly checked or measured. Quality control should contain the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are altered by differential atmospheric absorption and diffusion.

The separation of vegetated/non-vegetated and forest/non-forest areas will be performed through vegetation indexes.

Texture indexes are used to improve the results of the classification algorithms.

Images are classified as follows:

1) The most suitable vegetation indexes for the site are selected, taking local factors into consideration.

2) A vegetation/non-vegetation threshold is established for the specific index chosen.

3) The algorithm is run to classify the image into two classes: vegetation/non-vegetation.

Vegetation/non vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification in a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated to a final forest / no forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.

This algorithm determines forest losses between the reference image and the most recent image.

Forest surveys from other sources can be used to validate the output data if they are available. An overall accuracy of 85% is considered by the remote sensing community the standard for minimum acceptable quality of remote sensing products.


Snow damages is an on-demand product. It is produced when an activation is raised because of a snow precipitation event detected by meteorological stations. The objective is to detect the forest loss due to the catastrophic event by analysing an image just after the event and an image just before it. The output of this product is the forest area affected by the snow event and an updated forest mask after the event.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded in a GIS software and their characteristics directly checked or measured. Quality control should contain the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are altered by differential atmospheric absorption and diffusion.

The separation of vegetated/non-vegetated and forest/non-forest areas will be performed through vegetation indexes.

Texture indexes are used to improve the results of the classification algorithms.

Images are classified as follows:

1) The most suitable vegetation indexes for the site are selected, taking local factors into consideration.

2) A vegetation/non-vegetation threshold is established for the specific index chosen.

3) The algorithm is run to classify the image into two classes: vegetation/non-vegetation.

Vegetation/non vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification in a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated to a final forest / no forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.

This algorithm determines forest losses between the reference image and the most recent image.

Forest surveys from other sources can be used to validate the output data if they are available. An overall accuracy of 85% is considered by the remote sensing community the standard for minimum acceptable quality of remote sensing products.


Forest vitality index is a multi-temporal product. It detects changes in the vegetation pigmentary indexes (carotenoid, anthocyanin, chlorophyll) that are related to tree health and vitality status. The input images of this product must be acquired on the day of the year with peak vegetation activity, to avoid seasonal effects (leaf fall) that produce false positives or mask vitality changes.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded in a GIS software and their characteristics directly checked or measured. Quality control should contain the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are altered by differential atmospheric absorption and diffusion.

The separation of vegetated/non-vegetated and forest/non-forest areas will be performed through vegetation indexes.

This algorithm determines the changes between the reference image and the most recent image, classified into five vitality-change classes:


  • Consistent decrease of forest vitality (-2).

  • Probable decrease of forest vitality (-1).

  • No vitality changes (0).

  • Probable increase of forest vitality (1).

  • Consistent increase of forest vitality (2).
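A minimal sketch of this classification in R with the terra package; the +/-0.05 and +/-0.15 break points are hypothetical and would be tuned per site and pigment index:

    library(terra)

    # Classify a pigment-index difference into the five vitality classes
    # (sketch). The break points are hypothetical placeholders.
    idx_t0 <- rast("chlorophyll_index_t0.tif")   # peak-of-season, year 0
    idx_t1 <- rast("chlorophyll_index_t1.tif")   # peak-of-season, year 1
    delta  <- idx_t1 - idx_t0

    breaks   <- c(-Inf, -0.15, -0.05, 0.05, 0.15, Inf)
    vitality <- classify(delta, rcl = cbind(breaks[-6], breaks[-1], -2:2))
    writeRaster(vitality, "forest_vitality_index.tif", overwrite = TRUE)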

Forest surveys from other sources can be used to validate the output data if they are available. An overall accuracy of 85% is considered by the remote sensing community the standard for minimum acceptable quality of remote sensing products.


Frost damages is an on-demand product. It is produced when an activation is raised because of a frost event detected by meteorological stations. The objective is to detect the forest loss due to the catastrophic event by analysing an image just after the event and an image just before it. The output of this product is the forest area affected by the frost event and an updated forest mask after the event.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded in a GIS software and their characteristics directly checked or measured. Quality control should contain the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification and/or atmospheric correction.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

This phase must be done if the spectral correction of the images does not fulfil the minimum requirements. Atmospheric correction is mandatory for the production of texture or spectral indexes; without it, index values are altered by differential atmospheric absorption and diffusion.

The separation of vegetated/non-vegetated and forest/non-forest areas will be performed through vegetation indexes.

Texture indexes are used to improve the results of the classification algorithms.

Images are classified as follows:

1) The most suitable vegetation indexes for the site are selected, taking local factors into consideration.

2) A vegetation/non-vegetation threshold is established for the specific index chosen.

3) The algorithm is run to classify the image into two classes: vegetation/non-vegetation.

Vegetation/non vegetation areas are further refined by means of unsupervised classification algorithms (isocluster, k-means). The resulting image is a classification in a selected number of classes with uniform spectral and texture characteristics. Thereafter, classes are manually aggregated to a final forest / no forest classification, using expert knowledge supported by vegetation maps or surveys.

This step corrects the “salt and pepper classification effect”, cleaning class patches smaller than a given MMU. Algorithms applied are majority filters or sieve filters.

This algorithm determines forest losses between the reference image and the most recent image.

Forest surveys from other sources can be used to validate the output data if they are available. An overall accuracy of 85% is considered by the remote sensing community the standard for minimum acceptable quality of remote sensing products.


Ecosystem vulnerabilities

The Ecosystem Vulnerabilities Service provides information on specific ecosystem parameters, namely: watershed extent, hydrologic network, biodiversity indicators, habitat fragmentation, floods and forest fire risks.

Products


The Basin and Stream delineation product provides a database capable of estimating the flood-prone areas along the stream banks. The stream network is essential for monitoring the health condition of riparian forests.

Data processing steps:

Input data are checked to verify that they fulfil the minimum requirements to produce the output data. Spatial data (images, maps and vectors) should be loaded in a GIS software and their characteristics directly checked or measured. Quality control should contain the following checks:


  • Check the processing level of the optical images: This check detects whether the images need orthorectification.

  • Check the cloud cover of the optical images: Cloud areas inside the AOI must be masked to delimit where the product will not be generated.

  • Check the pixel size of the optical images: Optical images are only useful for watershed delineation if they are of high resolution or finer (pixel size under 5 meters).

  • Check the pixel size of the DEMs: Regional scale requires DEMs with a pixel size of 10-30 m, and local scale requires DEMs with a pixel size below 5-10 m.

  • Check the coverage of the DEMs over the AOI: The DEM should exceed the AOI by at least 5 pixels.

If the geometric correction QC is negative, run this BB. Orthorectification consists of the collection of GCPs and the application of a DEM to correctly place each image pixel.

The input data of this building block can be DEMs produced in the MSF project or third-party DEMs. This building block is composed of the successive execution of the following GIS algorithms:

1) Calculate fill: Sinks in the DEM are filled in order to remove small imperfections in the data.

2) Calculate flow direction: This tool creates a raster of flow direction from each cell to its steepest downslope neighbour.

3) Calculate flow accumulation.

4) Stream network delineation.
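A minimal sketch of step 2 in base R: the D8 flow direction of each interior cell of a DEM matrix, returned as the index (1-8) of its steepest downslope neighbour:

    # D8 flow direction on a DEM matrix (sketch). For each interior cell
    # the function returns the index (1-8) of the steepest downslope
    # neighbour, or NA for flat cells with no lower neighbour.
    d8_flow_direction <- function(dem, cellsize = 1) {
      dr <- c(-1, -1, -1, 0, 0, 1, 1, 1)
      dc <- c(-1, 0, 1, -1, 1, -1, 0, 1)
      dist <- sqrt(dr^2 + dc^2) * cellsize   # diagonal neighbours are farther
      out <- matrix(NA_integer_, nrow(dem), ncol(dem))
      for (r in 2:(nrow(dem) - 1)) {
        for (co in 2:(ncol(dem) - 1)) {
          drop <- (dem[r, co] - dem[cbind(r + dr, co + dc)]) / dist
          if (max(drop) > 0) out[r, co] <- which.max(drop)
        }
      }
      out
    }

    dem <- matrix(c(9, 8, 7,
                    8, 6, 5,
                    7, 5, 3), nrow = 3, byrow = TRUE)
    d8_flow_direction(dem)[2, 2]  # 8: flow to the lower-right cell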

This phase is optional; it consists of the manual improvement of the watershed delineation.

Stand delineation outputs are checked against other ground truth sources. US National Maps Accuracy Standards (NMAS) apply to the spatial accuracy of the stand delineation outputs. Accuracy limits depend on the scale of the product. E.g.: the maximum allowed horizontal error of CE90 is 2.54 meters for a map scale of 1:5,000.


Biodiversity provides a measurement of the richness of forest types or tree species in a certain study area. This measurement is a main factor to take into consideration for environmental forest management.

In this phase three input datasets can be used to calculate the biodiversity indicator: Main Forest Types, Forest Inventory Data or Vegetation Survey.

Habitat fragmentation indicator is a categorization of the types of spatial relationships between the different habitats, forest types or tree species. Fragmentation categories range from total isolation (patches) to total connection (interior). Fragmentation results are provided in a summary table (with the fragmentation types for each forest class) and a raster that provides a spatial overview of forest fragmentation.

Data processing steps:

This building block is optional because it is only needed to convert input data from vector to raster format. In this phase three input datasets can be used to calculate the biodiversity indicator: Main Forest Types, Forest Inventory Data or Vegetation Survey.

This building block is a density algorithm, defined by the surface percentage of a certain forest type in a given sample window. The window size is an important parameter because it depends on the analysis scale; smaller window sizes produce lower fragmentation values.

This building block is an algorithm that determines the connectivity between pixels of the same forest type.

The fragmentation type is assigned according to the values of Pf and Pff.
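A minimal sketch of the two measures on a binary forest matrix: Pf as the proportion of forest pixels in the window, and Pff as the share of cardinally adjacent pixel pairs involving forest that are forest-forest (a common formulation of the Riitters et al. fragmentation model, assumed here):

    # Pf and Pff for a binary forest window (sketch): Pf is the forest
    # proportion; Pff is the share of cardinally adjacent pairs involving
    # forest that are forest-forest (Riitters-style, assumed here).
    fragmentation_measures <- function(w) {   # w: 0/1 matrix (1 = forest)
      pf <- mean(w)
      # Horizontal and vertical neighbour pairs.
      h1 <- w[, -ncol(w)]; h2 <- w[, -1]
      v1 <- w[-nrow(w), ]; v2 <- w[-1, ]
      ff   <- sum(h1 & h2) + sum(v1 & v2)     # forest-forest pairs
      anyf <- sum(h1 | h2) + sum(v1 | v2)     # pairs with any forest
      pff <- if (anyf > 0) ff / anyf else NA_real_
      c(Pf = pf, Pff = pff)
    }

    w <- matrix(c(1, 1, 0,
                  1, 1, 0,
                  0, 0, 0), nrow = 3, byrow = TRUE)
    fragmentation_measures(w)  # Pf ~0.44, Pff = 4/8 = 0.5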


Flood risk indicator is provided as a spatial dataset that delimits the extension of potential floods in the study area. This risk methodology follows a deterministic approach rather than a probabilistic approach, in which the risk parameter would be the distribution of the total flood risk.

Data processing steps:

The selected flood model only works in a single basin with a single discharge stream. Basins are delimited with the watershed delineation and stream network. The basin extension provides the surface values needed to calculate the precipitation accumulation in each watershed.

The precipitation data are analysed to extract the maximum precipitation rate for each catchment basin delimited in the previous phase. The precipitation rate is transformed into flow rates with the calculation of flood rates along the drainage network.
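One simple, commonly used rainfall-to-discharge conversion is the rational method Q = C * i * A; this is an illustrative assumption, not necessarily the project's flood model:

    # Rational-method peak flow per basin (sketch). This is one common,
    # simple rainfall-to-discharge conversion, assumed here; the project's
    # actual flood model is not specified in this section.
    #   Q (m3/s) = C * i (mm/h) * A (km2) / 3.6
    rational_q <- function(runoff_coef, intensity_mm_h, area_km2) {
      runoff_coef * intensity_mm_h * area_km2 / 3.6
    }

    rational_q(0.4, 30, 12.5)  # ~41.7 m3/s for a hypothetical basin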

This algorithm determines the extension of flood using as inputs the following data:


  • DEM

  • Stream network

  • Flow rates calculated with meteorological data

Forested areas are overlapped with the flood area to locate the potential forest areas affected by floods.


Soil erosion risk provides a quantification of the total soil lost per year in each pixel of the study area. The working units in this product are regular pixels because the calculation achieves better accuracy and continuity in raster spatial analysis, since many factors come from DEM data.

The calculation of the soil erosion indicator is done with the Revised Universal Soil Loss Equation (RUSLE): A = R * K * LS * C * P. This equation needs the following inputs:


  • Topographic factor (LS): This factor is calculated from the Digital Elevation Model. To obtain the LS factor it is necessary to calculate intermediate data, such as fill, flow direction, slope and flow accumulation.

  • Rainfall erosivity factor (R): Precipitation data.

  • Soil erodibility factor (K): Soil Map or geological maps.

  • Crop or vegetation factor (C): From main forest types or vegetation survey maps.

  • Support practice factor (P): From land management and conservation practice information.
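A minimal sketch of the final RUSLE overlay with the terra package, once the five factor rasters have been prepared and co-registered (file names hypothetical):

    library(terra)

    # RUSLE overlay (sketch): soil loss A = R * K * LS * C * P, computed
    # pixel by pixel from the prepared factor rasters.
    R  <- rast("rainfall_erosivity.tif")
    K  <- rast("soil_erodibility.tif")
    LS <- rast("topographic_factor.tif")
    C  <- rast("vegetation_factor.tif")
    P  <- rast("support_practice.tif")

    A <- R * K * LS * C * P               # soil loss per pixel and year
    writeRaster(A, "soil_erosion_risk.tif", overwrite = TRUE)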


Forestry accounting

Forestry Accounting provides analytics based on the System of Environmental Economic Accounting (SEEA) proposed by the United Nations; SEEA integrates economic and environmental data to provide a comprehensive view of the relationships between economy and environment.

Products


This product describes the changes in land use that can affect the wood stock and production during a given period of time.

Data processing steps:

Forest types are reclassified to standard environmental accounting classes.

Changes in forest types and species between the starting date (T0) and the ending date (T1) are calculated.

Forest inventory data can be used to validate the output data, if they are available.


Physical wood accounts will compute the differences between the opening and closing stocks of timber resources, discriminating between additions and reductions. This task works on the basis of homogeneous forest management units based on forest types. Hence, the data are needed at the homogeneous forest unit level, both for the beginning and the end of the period.

Data processing steps:

Input products are analysed with an overlap methodology to extract the m3 of timber per parcel of each site index. This analysis will be undertaken both for the starting (T0) and ending (T1) dates of the forest land stocks.

This step is done via an overlap analysis to differentiate natural and cultivated timber resources. As in the previous step, this analysis will be undertaken both for the starting (T0) and ending (T1) dates of the forest land stocks.

In this phase natural timber resources will be differentiated between available and not available wood supply. As in the previous steps, this analysis will be undertaken both for the starting (T0) and ending (T1) dates of the forest land stocks.

Timber outputs of the starting date will be used to calculate the natural mortality of all classified forest types.

In this step all reduction flows will be estimated following the UN standard of environmental accounting. These stock reductions include:


  • Removals: Volume of timber extracted during the accounting period. Removal data will be obtained from available datasets at regional level.

  • Natural losses: Volume of timber disrupted by natural losses other than felling (e.g. fire, snow, wind). Data from natural losses will be obtained from available datasets at regional level and natural mortality. Natural losses are calculated where timber resources can be removed.

  • Catastrophic losses: Volume of timber disrupted by exceptional and significant natural losses (e.g. a large unexpected fire). Catastrophic losses data will be obtained from available datasets at regional level.

  • Reclassification losses: Volume of timber lost due to land management reclassification (e.g. a forest that is added to a national park during the accounting period). Reclassification losses data will be obtained from available datasets at regional level.

All addition flows will be estimated following the UN standard of environmental accounting. These stock additions include:


  • Natural growth: Volume of timber increase during the accounting period. A growth model per forest type will be used to assess the rate of timber addition to the stock.

  • Reclassification additions: Additional volume of timber available to extract because of land use reclassification (e.g. forests removed from a national park during the accounting period). Reclassification gains will be obtained from available datasets at regional level.
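The accounting identity behind these flows is: closing stock = opening stock + additions - reductions. A minimal sketch with hypothetical per-forest-type volumes:

    # Physical wood account per forest type (sketch): closing stock equals
    # opening stock plus additions minus reductions (all volumes in m3;
    # the figures below are hypothetical).
    account <- data.frame(
      forest_type = c("Pinus pinaster", "Eucalyptus"),
      opening     = c(120000, 80000),   # stock at T0
      growth      = c(6000, 9000),      # natural growth
      reclass_add = c(0, 500),          # reclassification additions
      removals    = c(4000, 7000),      # fellings
      nat_losses  = c(300, 200),        # fire, snow, wind, mortality
      reclass_red = c(0, 0)             # reclassification losses
    )

    account$closing <- with(account,
      opening + growth + reclass_add - removals - nat_losses - reclass_red)
    account[, c("forest_type", "opening", "closing")]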


This product consists of the assignment of a monetary value to the productive forest items (natural available and cultivated) that were obtained in product S6 P1 Physical Wood Accounts. This task works on the basis of homogeneous forest management units based on forest types. Hence, the data are needed at the homogeneous forest unit level, both for the beginning and the end of the period.

Data processing steps:

Physical wood accounts are transformed into monetary values following the standards of environmental accounting. Essential input data will determine the monetary value of the stocks by forest type, management costs and price trends in the local markets. This analysis will be undertaken both for the starting (T0) and ending (T1) dates of the forest land stocks.

Removal volumes will be transformed into monetary values.

Other stock reduction volumes (catastrophic losses and reclassifications) will be transformed into monetary values.

Additions to stocks (natural growth and reclassifications) will be transformed into monetary values.

In this step all timber value revaluations due to price changes are assessed.