Accurate information on the location and magnitude of vegetation change in scenic areas can guide the configuration of tourism facilities and the formulation of vegetation protection measures. High-spatial-resolution remote sensing images can be used to detect subtle vegetation changes. The major objective of this study was to map and quantify forest vegetation changes in a national scenic area, the Purple Mountains of Nanjing, China, using multi-temporal, cross-sensor, high-spatial-resolution satellite images, in order to identify the main drivers of the vegetation changes and provide a reference for sustainable management.
The environmental challenges the world faces today have never been greater or more complex. Global areas covered by forests and urban woodlands are threatened by natural disasters that have increased dramatically over recent decades, in both frequency and magnitude.
Large-scale forest fires are one of the most harmful natural hazards affecting climate change and life around the world. Thus, to minimize their impacts on people and nature, the adoption of well-planned, closely coordinated, and effective prevention, early warning, and response approaches is necessary. This paper presents an overview of the optical remote sensing technologies used in early fire warning systems and provides an extensive survey of both flame and smoke detection algorithms employed by each technology.
Three types of systems are identified, namely terrestrial, airborne, and spaceborne systems, and various models aiming to detect fire occurrences with high accuracy in challenging environments are studied. Finally, the strengths and weaknesses of fire detection systems based on optical remote sensing are discussed, aiming to contribute to future research on the development of early warning fire systems.
Over the last few years, climate change and human-caused factors have had a significant impact on the environment, producing events such as heat waves, droughts, dust storms, floods, hurricanes, and wildfires.
Wildfires have extreme consequences for local and global ecosystems and cause serious damage to infrastructure, injuries, and losses of human life; therefore, fire detection and the accurate monitoring of the disturbance type, size, and impact over large areas are becoming increasingly important [ 1 ].
To this end, strong efforts have been made to avoid or mitigate such consequences through early fire detection or fire risk mapping [ 2 ]. Traditionally, forest fires were mainly detected by human observation from fire lookout towers and involved only primitive tools, such as the Osborne Fire Finder [ 3 ]; however, this approach is inefficient, as it is prone to human error and fatigue.
On the other hand, conventional sensors for the detection of heat, smoke, flame, and gas typically require time for the particles to reach the sensor and activate it. In addition, the range of such sensors is relatively small; hence, a large number of sensors need to be installed to cover large areas [ 4 ]. Recent advances in computer vision, machine learning, and remote sensing technologies offer new tools for detecting and monitoring forest fires, while the development of new materials and microelectronics has allowed sensors to become more efficient in identifying active forest fires.
These systems are usually equipped with visible, IR, or multispectral sensors whose data are processed by machine learning methods. These methods rely either on the extraction of handcrafted features or on powerful deep learning networks (Figure 1) for the early detection of forest fires (Figure 2), as well as for modeling fire or smoke behavior.
Finally, we present the strengths and weaknesses of the aforementioned methods and sensors, as well as future trends in the field of early fire detection. The systems discussed in this review target the detection of fire in the early stages of the fire cycle. This paper is organized as follows: Section 2 covers different optical remote sensing systems for early fire detection, organized into three subsections for terrestrial, aerial, and satellite systems, respectively; Section 3 includes the discussion and the future scope of research.
These sensors need to be carefully placed to ensure adequate visibility. Thus, they are usually located in watchtowers, which are structures located on high vantage points for monitoring high-risk situations and can be used not only for detection but also for verification and localization of reported fires.
There are two types of cameras used for early fire detection, namely optical cameras and IR cameras, which can capture data ranging from low to ultra-high resolution for different fire detection scenarios [ 15 ].
Optical cameras provide color information, whereas IR imaging sensors provide a measure of the thermal radiation emitted by objects in the scene [ 16 ]. More recently, early detection systems that combine both types have also been introduced. Computer-based methods can process large volumes of data, aiming to achieve a consistent level of accuracy while maintaining a low false alarm rate. In the following, we first present traditional approaches that are based on handcrafted features, followed by more recent methods that use deep learning for automated feature extraction.
Detection methods that use optical sensors or RGB cameras combine features that are related to the physical properties of flame and smoke, such as color, motion, spectral, spatial, temporal, and texture characteristics.
The following color spaces have been used for the task of early fire detection: RGB [ 17 , 18 , 19 ], YCbCr [ 20 ], CIELAB [ 21 ], YUV [ 22 , 23 ], and HSV [ 24 ]; however, a drawback of color-based fire detection models is their high false alarm rates, since color information alone is insufficient in most cases for early and robust fire detection.
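As a concrete illustration of how such color rules operate, the sketch below flags candidate fire pixels using a simple RGB ordering rule (R > G > B plus a red-channel threshold). The rule and the threshold value are illustrative assumptions in the spirit of the cited RGB-based models, not the exact criteria of any specific paper.

```python
import numpy as np

def fire_pixel_mask(img, r_thresh=190):
    """Candidate fire-pixel mask from a simple RGB rule (illustrative).

    A pixel is flagged when R > r_thresh and R > G > B; the threshold
    value is an assumption for the sketch, not a published constant.
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > r_thresh) & (r > g) & (g > b)

# Tiny usage example: one bright reddish pixel in a dark image
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [220, 120, 40]
mask = fire_pixel_mask(img)
```

In practice, such a mask is only a candidate map; as the text notes, it must be combined with motion, texture, or temporal cues to keep false alarms manageable.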
Thus, many of the developed methodologies combine color and motion information in images and videos [ 25 ]. Zhang et al. Avgerinakis et al. Likewise, Mueller et al. Other researchers have focused on the flickering effect of fire, which is observed in flame contours at a frequency of around 10 Hz, independently of the burning material and the burner [ 29 ].
To this end, Gunay et al. Training HMMs leads to a reduction of data redundancy and an improvement in reliability, while real-time detection is also achieved [ 31 ]. The use of multi-feature fire detection can offer more accurate results. Chen et al. The algorithm was applied to a video dataset consisting of different daytime and nighttime environments; however, at night, color analysis is less useful and smoke is less visible. Thus, nighttime wildfire detection typically relies on motion analysis.
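The ~10 Hz flicker cue mentioned above can be checked with a simple spectral analysis of a pixel or region intensity trace. The sketch below is a minimal FFT-based illustration of the idea, not the HMM or wavelet modeling of the cited works; the function name and parameters are assumptions.

```python
import numpy as np

def dominant_flicker_hz(intensity, fps):
    """Estimate the dominant flicker frequency of an intensity trace.

    A minimal sketch: remove the DC component, take the real FFT, and
    return the frequency bin with the largest magnitude.
    """
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                       # drop the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return float(freqs[np.argmax(spectrum)])

# Usage example: a synthetic 10 Hz flicker sampled at 30 fps
fps = 30
t = np.arange(90) / fps
trace = np.sin(2 * np.pi * 10 * t)
est = dominant_flicker_hz(trace, fps)
```

A real detector would evaluate this over candidate flame regions and require the peak to fall near the expected flicker band before raising an alarm.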
In [ 36 ], Barmpoutis et al. Thereafter, they used a support vector machine (SVM) classifier to increase the robustness of fire detection. Many other researchers have used infrared cameras aiming to reduce the false alarm rates of optical-based terrestrial systems. Although MWIR-band detectors are optimal for fire detection, they are expensive due to the required cooling system, so LWIR cameras are typically used.
In IR videos, the existence of rapidly time-varying contours is an important sign of the presence of fire in the scene. Arrue et al. More specifically, they used an adaptive infrared threshold, a segmentation method, and a neural network for early fire detection. Specifically, they first estimated the boundaries of moving bright regions in each frame and then used spatio-temporal analysis in the wavelet domain using HMMs.
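The adaptive infrared thresholding idea above can be sketched as a statistics-driven segmentation of hot regions in a single IR frame. The mean-plus-k-standard-deviations rule and the value of k below are illustrative assumptions, not the cited authors' exact method.

```python
import numpy as np

def bright_region_mask(ir_frame, k=3.0):
    """Adaptive-threshold segmentation of hot regions in an IR frame.

    A minimal sketch: the threshold adapts to each frame as
    mean + k * std, so it tracks changing scene temperatures.
    """
    frame = np.asarray(ir_frame, dtype=float)
    thresh = frame.mean() + k * frame.std()
    return frame > thresh

# Usage example: one hot spot in an otherwise cold frame
frame = np.zeros((10, 10))
frame[0, 0] = 100.0
mask = bright_region_mask(frame)
```

In a full pipeline, masks from successive frames would feed the spatio-temporal analysis described above to separate flame-like dynamics from static hot objects.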
In contrast to single-sensor systems, multisensor systems typically cover wider areas and can achieve higher accuracies by fusing data from different sensors. Sensor data were processed and transmitted to a monitoring center, where computer vision and pattern recognition algorithms were employed for automated fire detection and localization.
The algorithm took into account color, spatial, and temporal information for flame detection, while for smoke detection, an online adaptive decision fusion (ADF) framework was developed. This framework consisted of several algorithms aiming to detect slow-moving objects, smoke-colored regions, and smoke region smoothness. Furthermore, improved early wildfire detection was achieved by fusing smoke detection from visual cameras and flame detection from infrared (LWIR) cameras.
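The fusion of several sub-algorithm outputs can be sketched as a weighted combination of their confidence scores. This is a deliberately simplified stand-in for the ADF framework described above: a plain weighted average rather than the original online weight-update rule, with illustrative names and threshold.

```python
def fuse_decisions(scores, weights, threshold=0.5):
    """Weighted fusion of sub-algorithm confidence scores.

    A minimal sketch: each detector (slow-moving objects,
    smoke-colored regions, region smoothness, ...) contributes a
    score in [0, 1]; the fused score is their weighted average and
    an alarm is raised when it crosses the threshold.
    """
    total = float(sum(weights))
    fused = sum(s * w for s, w in zip(scores, weights)) / total
    return fused >= threshold, fused

# Usage example: two confident detectors outvote one quiet one
alarm, score = fuse_decisions([0.9, 0.8, 0.2], [1.0, 1.0, 1.0])
```

The original framework adapts the weights online based on each sub-algorithm's recent agreement with the fused decision; the fixed weights here are purely for illustration.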
Similarly, Bosch et al. In this system, each sensor consists of an optical and a thermal camera and an integrated system for data processing and communication.
More recently, Barmpoutis et al. Their modeling, combining color, motion, and spatio-temporal features, led to higher detection rates and a significant reduction of false alarms. Temporal and spatial dynamic texture analysis of flame for forest fire detection was performed in [ 42 ].
Dynamic texture features were derived using two-dimensional (2D) spatial wavelet decomposition in the temporal domain and three-dimensional (3D) volumetric wavelet decomposition.
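A minimal stand-in for such wavelet-based texture features is a one-level 2D Haar decomposition of an image block, whose sub-band energies summarize how much fine detail the block contains. The hand-rolled implementation below is a sketch of the general idea, not the cited method or a full wavelet library.

```python
import numpy as np

def haar2d_energies(block):
    """One-level 2D Haar decomposition of an even-sized block.

    Returns the energy (sum of squared coefficients) of each
    sub-band: LL (approximation), LH, HL, HH (details). These
    energies are a simple texture descriptor.
    """
    a = np.asarray(block, dtype=float)
    # Pairwise averages/differences along columns, then rows
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    bands = {
        "LL": (lo[::2] + lo[1::2]) / 2.0,
        "LH": (lo[::2] - lo[1::2]) / 2.0,
        "HL": (hi[::2] + hi[1::2]) / 2.0,
        "HH": (hi[::2] - hi[1::2]) / 2.0,
    }
    return {k: float(np.sum(v ** 2)) for k, v in bands.items()}

# Usage example: a flat block has all its energy in LL
energies = haar2d_energies(np.ones((4, 4)))
```

Flame and smoke regions flicker and diffuse, so their detail-band energies vary strongly over time, which is what the temporal analysis in the cited work exploits.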
In [ 43 ], the authors improved the smoke modeling of fire incidents through dynamic textures by solving higher-order linear dynamical systems (h-LDS). Finally, in [ 44 ], the authors took advantage of the geometric properties of the stabilized h-LDS (sh-LDS) space and proposed a novel descriptor, namely, histograms of Grassmannian points (HoGP), to improve the classification of both flame and smoke sequences.
In contrast to previously discussed methods that rely on handcrafted features, deep learning (DL) methods [ 45 ] can automatically extract and learn complex feature representations.
Since the seminal work of Krizhevsky et al. To this end, Luo et al. Firstly, they identified the candidate regions based on the background dynamic update and the dark channel prior method [ 48 ]. Then, the features of the candidate region were extracted automatically by a CNN consisting of five convolutional layers and three fully connected layers.
In [ 49 ], the authors combined deep learning and handcrafted features to recognize fire and smoke areas. For static features, the AlexNet architecture was adapted, while for dynamic features an adaptive weighted direction algorithm was used. Moreover, Sharma et al. It is worth mentioning that for training, they created an unbalanced dataset including more non-fire images. Firstly, they trained a full-image fire classifier to decide whether the image contains fire or not and then applied a fine-grained patch classifier to localize the fire patches within the image.
The full-image classifier is a deep CNN fine-tuned from AlexNet, and the fine-grained patch classifier is a two-layer fully connected neural network trained with the upsampled Pool-5 features.
Muhammad et al. Frizzi et al. The architecture of this model was similar to LeNet-5, including dropout layers, and used a leaky rectified linear unit (ReLU) activation function. Muhammad et al. In [ 56 ], the authors combined AlexNet as a baseline architecture and the internet of multimedia things (IoMT) for fire detection and disaster management.
The developed system introduced an adaptive prioritization mechanism for cameras in the surveillance system, allowing high-resolution cameras to be activated to confirm the fire and analyze the data in real time. Furthermore, Dunnings and Breckon [ 57 ] used low-complexity CNN architectural variants and applied a superpixel localization approach aiming to reduce computational cost, offering processing rates of up to 17 fps.
Since the number of publicly available wildfire datasets is still limited, Sousa et al. An Inception-v3 model pre-trained on ImageNet was retrained and evaluated using ten-fold cross-validation on the Corsican Fire Database [ 59 ]. Extending deep learning approaches, Barmpoutis et al. To that end, the faster R-CNN with non-maximum suppression was utilized to localize the smoke target based on static spatial information, and then a 3D CNN was used for smoke recognition by combining dynamic spatial-temporal information.
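The non-maximum suppression step used with detectors such as the faster R-CNN can be sketched as follows. The `[x1, y1, x2, y2]` box format and the IoU threshold are conventional assumptions for the sketch, not details taken from the cited papers.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Plain non-maximum suppression over [x1, y1, x2, y2] boxes.

    Greedily keeps the highest-scoring box and discards any
    remaining box whose IoU with it exceeds the threshold.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection rectangle between box i and the rest
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep

# Usage example: two heavily overlapping detections and one distant box
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)
```

Here the second box overlaps the first with IoU 0.81 and is suppressed, while the distant third box survives.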
Jadon et al. Moreover, in [ 64 ] the authors extracted spatial features through a faster R-CNN for the detection of suspected regions of fire (SRoFs) and non-fire regions. Then, the features of the detected SRoFs in successive frames were fed to a long short-term memory (LSTM) network to identify whether there is a fire in a short-term period.
Finally, a majority voting method and the exploitation of fire dynamics were used for the final decision. Shi et al. More specifically, they utilized the pixel-wise image saliency aggregating (PISA) method [ 66 ] to identify the candidate regions and then classified them into fire or non-fire regions. Instead of extracting bounding boxes, Yuan et al. turned to fully convolutional networks (FCNs), which can achieve end-to-end pixel-wise segmentation, so the precise location of smoke can be identified in images.
They also created synthetic smoke images, instead of manually labeling real smoke images, for training and then tested the network on both synthetic and real videos. Cheng et al. Then, a GAN was employed for predicting the smoke-trend heatmap based on the space-time analysis of the smoke videos.
Finally, in [ 69 ] the authors used a two-stage training of deep convolutional GANs for smoke detection. This procedure included a regular training step of a deep convolutional GAN (DC-GAN) with real images and noise vectors, and a training step of the discriminator alone using smoke images without the generator.
Terrestrial imaging systems can detect both flame and smoke, but in many cases it is almost impossible to view, in a timely manner, the flames of a wildfire from a ground-based camera or a camera mounted on a forest watchtower.
To this end, autonomous unmanned aerial vehicles (UAVs) can provide a broader and more accurate perception of the fire from above, even in areas that are inaccessible or considered too dangerous for operations by firefighting crews.
Change detection (CD) is essential for the accurate understanding of land surface changes with multitemporal Earth observation data. Owing to their great advantages in spatial information modeling, Morphological Attribute Profiles (MAPs) are becoming increasingly popular for improving recognition ability in CD applications. However, most MAPs-based CD methods are implemented by setting the scale parameters of Attribute Profiles (APs) manually and ignoring the uncertainty of change information from different sources. To address these issues, a novel method for CD in high-resolution remote sensing (HRRS) images based on morphological attribute profiles and decision fusion is proposed in this study. By establishing an objective function based on the minimum average interscale correlation, a morphological attribute profile with adaptive scale parameters (ASP-MAPs) is presented to exploit the spatial structure information. On this basis, a multifeature decision fusion framework based on Dempster-Shafer (D-S) theory is constructed for obtaining the CD map.
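The D-S fusion step can be illustrated with Dempster's rule of combination for two evidence sources over a two-element frame {change, no_change}. The sketch below restricts focal elements to the two singletons plus the full frame (written "theta"); the mass values in the example are made up for illustration.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Focal elements are "change", "no_change", or "theta" (the whole
    frame). Products of masses on disjoint sets form the conflict K,
    and the surviving masses are renormalized by 1 - K.
    """
    frame = {"change", "no_change"}
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            sa = frame if a == "theta" else {a}
            sb = frame if b == "theta" else {b}
            inter = sa & sb
            if not inter:
                conflict += ma * mb       # contradictory evidence
                continue
            key = "theta" if inter == frame else next(iter(inter))
            combined[key] = combined.get(key, 0.0) + ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Usage example: two features both leaning toward "change"
m1 = {"change": 0.6, "no_change": 0.1, "theta": 0.3}
m2 = {"change": 0.5, "no_change": 0.2, "theta": 0.3}
fused = dempster_combine(m1, m2)
```

Combining agreeing sources concentrates mass on "change" (here roughly 0.76, up from 0.6 and 0.5), which is the behavior the decision-fusion framework relies on.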
Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with it, and thus stands in contrast to on-site observation. The term is applied especially to acquiring information about the Earth. Remote sensing is used in numerous fields, including geography, land surveying, and most Earth science disciplines (for example, hydrology, ecology, meteorology, oceanography, glaciology, and geology); it also has military, intelligence, commercial, economic, planning, and humanitarian applications, among others. In current usage, the term "remote sensing" generally refers to the use of satellite- or aircraft-based sensor technologies to detect and classify objects on Earth.