Change Detection and Image Time Series Analysis 2
Techniques in (ii) include multiscale approaches that focus on the use of the coarser resolutions in the data set in order to obtain computationally fast algorithms. The seminal papers (Basseville et al. 1992a, 1992b) introduced the basis for multiscale autoregressive modeling on dyadic trees. Since then, several approaches based on trees have been proposed to deal with multiresolution images (Pérez 1993; Chardin 2000; Laferté et al. 2000; Kato and Zerubia 2012; Voisin 2012; Hedhli et al. 2014). A detailed review of some of these methods can be found in Graffigne et al. (1995) and Willsky (2002).
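To make the dyadic (quad-tree) structure concrete, here is a minimal Python sketch, not taken from the cited works, that builds a coarse-to-fine pyramid by 2×2 block averaging; this is the kind of multiresolution representation that tree-based approaches operate on.

```python
import numpy as np

def dyadic_pyramid(image, n_levels):
    """Build a coarse-to-fine dyadic pyramid by 2x2 block averaging.

    Each level halves the spatial resolution, mirroring the quad-tree
    structure exploited by multiscale approaches to reduce computation.
    Assumes height and width are divisible by 2**(n_levels - 1).
    """
    levels = [np.asarray(image, dtype=float)]
    for _ in range(n_levels - 1):
        img = levels[-1]
        h, w = img.shape[:2]
        # Average each 2x2 block to obtain the parent pixel at the coarser scale.
        coarser = img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
        levels.append(coarser.squeeze())
    return levels[::-1]  # coarsest level first, finest level last

# Example: a 256x256 single-band image and a 4-level pyramid (32x32 at the root).
pyramid = dyadic_pyramid(np.random.rand(256, 256), n_levels=4)
print([lvl.shape for lvl in pyramid])
```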
In broader terms, multisensor analysis encompasses all processes that combine data and information from multiple sensors to achieve refined or improved information, compared to what could be obtained from any individual source alone (Waltz and Llinas 1990; Pohl and van Genderen 1998; Hall and Llinas 2001). The accuracy of remote sensing image classification, for instance, generally improves when multiple source image data are introduced into the processing chain in a suitable manner (e.g. Dousset and Gourmelon 2003; Nguyen et al. 2011; Gamba et al. 2011; Hedhli et al. 2015). As mentioned above, images from microwave and optical sensors provide complementary information that helps in discriminating the different classes. Several procedures have been introduced in the literature, including, on the one hand, post-classification techniques in which the two data sets are first segmented separately and the joint classification is then produced using, for example, random forests (e.g. Waske and van der Linden 2008), support vector machines with ad hoc kernels (Muñoz-Marí et al. 2010) or artificial neural networks (Mas and Flores 2008). On the other hand, other methods directly classify the combined multisensor data using, for instance, statistical mixture models (e.g. Dousset and Gourmelon 2003; Voisin et al. 2012; Prendes 2015), entropy-based techniques (e.g. Roberts et al. 2008) and fuzzy analysis (e.g. Benz 1999; Stroppiana et al. 2015). Furthermore, for complex scenes, especially urban areas, radar images can contribute to discriminating between land covers, owing to differences in surface roughness, shape and moisture content of the observed ground surface (e.g. Brunner et al. 2010). The use of multisensor data in image classification has become increasingly popular with the growing availability of software and hardware facilities able to handle increasing volumes of data. Which of these techniques is the most suitable is very much driven by the application and by the typology of the input remote sensing data.
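As a minimal illustration of feature-level multisensor fusion, a sketch using synthetic placeholder arrays rather than data or code from the cited studies, per-pixel optical and SAR features can be stacked and classified by a single random forest, in the spirit of the approaches mentioned above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic placeholders: per-pixel features from two co-registered sensors.
n_pixels = 1000
optical_features = np.random.rand(n_pixels, 4)   # e.g. four spectral bands
sar_features = np.random.rand(n_pixels, 2)       # e.g. two radar polarizations
labels = np.random.randint(0, 3, size=n_pixels)  # three land cover classes

# Stack the two feature sets and train a single classifier on the fused vectors.
fused = np.hstack([optical_features, sar_features])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
predicted_classes = clf.predict(fused)
```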
Recently, with the rise of neural networks, several multisensor data fusion techniques have been proposed based on feed-forward multilayer perceptron and convolutional neural network (CNN) architectures. Indeed, the huge amount of available data makes the use of deep neural network (DNN) models possible. Many effective multi-task approaches have recently been developed to train DNN models on large-scale remote sensing benchmarks (e.g. Chen et al. 2017; Carvalho et al. 2019; Cheng et al. 2020). The aim of these multi-task methods is to learn an embedding space from the different sensors (i.e. tasks). This can be done by first learning the embedding of each modality separately and then combining all of the learned features into a joint representation. This representation is then used as the input of the last layers of different high-level visual applications, for example remote sensing classification, monitoring or change detection. Alternatively, DNN models can be used as a heterogeneous data fusion framework, learning the related parameters from all of the input sources (e.g. Ghamisi et al. 2016; Benedetti et al. 2018; Minh et al. 2018). Despite the regularization techniques used to mitigate the high computational complexity of DNN methods (Pan et al. 2015), training these models remains resource-intensive and convergence can be difficult, especially with remote sensing data sets.
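The following PyTorch sketch, with arbitrary and purely hypothetical layer sizes, illustrates the per-modality embedding strategy just described: each sensor has its own small CNN encoder, the learned features are concatenated into a joint representation, and a shared head performs the final classification.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Sketch of per-modality embedding followed by joint classification."""

    def __init__(self, optical_bands=4, sar_bands=2, n_classes=5):
        super().__init__()

        def branch(in_channels):
            # Small per-sensor encoder producing a 32-dimensional embedding.
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.optical_branch = branch(optical_bands)
        self.sar_branch = branch(sar_bands)
        self.head = nn.Linear(64, n_classes)  # 32 features per branch, concatenated

    def forward(self, optical, sar):
        # Concatenate the two learned embeddings into a joint representation.
        joint = torch.cat([self.optical_branch(optical), self.sar_branch(sar)], dim=1)
        return self.head(joint)

# Example forward pass on random 64x64 patches (batch of 8).
model = TwoBranchFusion()
logits = model(torch.rand(8, 4, 64, 64), torch.rand(8, 2, 64, 64))
```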
In the next section, we will describe two advanced methods for the supervised classification of multisource satellite image time series. These methods have the advantage of being applicable to series of two or more images taken by single or multiple sensors, operating at the same or different spatial resolutions, and with the same or different radar frequencies and spectral bands. In general, the available images in the series are temporally and spatially correlated; indeed, temporal and spatial contextual constraints are unavoidable in multitemporal data interpretation. Within this framework, Markov models provide a convenient and consistent way of modeling context-dependent spatio-temporal entities originating from multiple information sources, such as images in a multitemporal, multisensor, multiresolution and multimission context.
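As a generic illustration of how Markov models encode such spatial and temporal context, the following sketch computes a standard Potts-type contextual energy for a stack of label maps; it is not the specific model developed in this chapter, but the hierarchical formulation adopted below builds on the same principle of penalizing label disagreement between neighboring sites.

```python
import numpy as np

def potts_energy(labels, beta_spatial=1.0, beta_temporal=1.0):
    """Potts-type energy for a label stack of shape (K, H, W).

    The spatial term counts disagreements between 4-neighbours within each
    date; the temporal term counts disagreements between consecutive dates
    at the same pixel. Lower energy means a smoother, more context-consistent
    labelling.
    """
    spatial = (np.sum(labels[:, 1:, :] != labels[:, :-1, :])
               + np.sum(labels[:, :, 1:] != labels[:, :, :-1]))
    temporal = np.sum(labels[1:] != labels[:-1])
    return beta_spatial * spatial + beta_temporal * temporal

# Example: a series of K = 3 random label maps over 4 classes.
label_stack = np.random.randint(0, 4, size=(3, 100, 100))
print(potts_energy(label_stack))
```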
1.2. Methodology
1.2.1. Overview of the proposed approaches
Let us consider a time series composed of K images, acquired over the same area on K acquisition dates by up to K different optical and SAR sensors. Each image in the series is generally composed of multiple features (i.e. it is vector-valued), possibly corresponding to distinct spectral bands or radar polarizations. Specifically, $\mathbf{x}^{k}_{pq}$ indicates the feature vector of pixel $(p, q)$ in the $k$-th image of the series.
In general, each sensor may operate at a distinct spatial resolution; hence, a multisensor and multiresolution time series is being considered.
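As a purely illustrative example, with hypothetical sensor names, sizes and resolutions, such a multisensor, multiresolution series can be represented as a list of images, each with its own number of features and its own spatial grid; the feature vector $\mathbf{x}^{k}_{pq}$ is then obtained by indexing the $k$-th image at pixel $(p, q)$.

```python
import numpy as np

# Hypothetical three-image series: two optical acquisitions and one SAR
# acquisition, with different spatial resolutions and numbers of features.
series = [
    {"sensor": "optical", "resolution_m": 10, "data": np.random.rand(512, 512, 4)},
    {"sensor": "SAR",     "resolution_m": 20, "data": np.random.rand(256, 256, 2)},
    {"sensor": "optical", "resolution_m": 10, "data": np.random.rand(512, 512, 4)},
]

# Feature vector x^k_{pq}: pixel (p, q) = (30, 45) in image k = 1 of the series.
k, p, q = 1, 30, 45
feature_vector = series[k]["data"][p, q]
```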
The acquisition times of the images in the series are assumed to be close enough that no significant changes occur in the land cover of the observed area. In particular, we assume that no abrupt changes (e.g. due to natural disasters such as floods or earthquakes) occur within the overall time span of the series. This assumption makes it possible to use the whole time series to classify the land cover in the scene, benefiting from the complementary properties of the images acquired by different sensors and at different spatial resolutions. Furthermore, this assumption may be especially relevant when the temporal dynamics of the ground scene are themselves an indicator of land cover membership, as in the case of forested (e.g. deciduous vs. evergreen) or agricultural areas. We denote as $\omega_1, \omega_2, \ldots, \omega_M$ the land cover classes in the scene and as $\Omega = \{\omega_1, \omega_2, \ldots, \omega_M\}$ their set. We operate in a supervised framework; hence, we assume that training samples are available for all of these classes.
The overall formulation introduced in Hedhli et al. (2016) to address multitemporal fusion in the case of single-sensor imagery, based on multiple quad-trees in cascade, is generalized here to benefit from the images acquired by different sensors and from their mutual synergy. The multiscale topology of the quad-trees, and of the hierarchical Markov random fields (MRFs) defined on them, intrinsically allows multiresolution and multisensor data to be fused naturally in the land cover mapping process.
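To make the quad-tree topology concrete, a small hypothetical helper, not the book's implementation, shows the parent-child relations that hierarchical MRFs defined on quad-trees exploit: each site at a given scale has one parent at the next coarser scale and four children at the next finer one.

```python
def quadtree_parent(p, q, level):
    """Parent site of pixel (p, q) at the next coarser quad-tree level.

    Each coarse pixel has exactly four children, so parent coordinates
    are obtained by integer division of (p, q) by 2.
    """
    if level == 0:
        raise ValueError("the root level has no parent")
    return p // 2, q // 2, level - 1

def quadtree_children(p, q, level):
    """The four children of site (p, q) at the next finer level."""
    return [(2 * p + dp, 2 * q + dq, level + 1) for dp in (0, 1) for dq in (0, 1)]

# Example: pixel (13, 7) at level 3 has parent (6, 3) at level 2
# and four children at level 4.
print(quadtree_parent(13, 7, 3))
print(quadtree_children(13, 7, 3))
```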