ISPRS Journal of Photogrammetry and Remote Sensing

Contextually guided very-high-resolution imagery classification with semantic segments

Abstract

Contextual information, which reveals relationships and dependencies between image objects, is among the most important cues for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts and then assign semantic labels according to the properties of the image segments. However, due to the complexity and heterogeneity of VHR images, segments generated with low-level features and lacking semantic labels (i.e., semantic-free segments) often fail to represent geographic entities; building roofs, for example, are usually partitioned into chimney, antenna, and shadow parts. As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) that represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first derived with convolutional neural networks (CNN), and a conditional random field (CRF) model was then applied to model the contextual information between them. Experimental results on two challenging VHR datasets (the Vaihingen and Beijing scenes) indicate that the proposed method improves on existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%). (C) 2017 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
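To make the contextual-modeling step concrete, the sketch below shows a minimal pairwise CRF defined over a graph of semantic segments, in the spirit of the pipeline the abstract describes. It is a toy illustration, not the authors' implementation: the segment count, class set, unary scores, adjacency edges, and the exhaustive search for the minimum-energy labeling are all assumptions made for the example.

```python
import numpy as np
from itertools import product

# A toy pairwise CRF over a graph of semantic segments. In the paper's
# setting, unary costs would come from per-segment CNN class scores and
# edges would link spatially adjacent segments; the numbers, the class
# set, and the brute-force inference below are illustrative assumptions.

NUM_CLASSES = 3  # hypothetical classes, e.g. building / vegetation / road

def crf_energy(labels, unary, edges, pairwise):
    """Energy of one labeling: per-segment unary costs plus a pairwise
    cost for every edge between adjacent segments (lower is better)."""
    e = sum(unary[i, labels[i]] for i in range(len(labels)))
    e += sum(pairwise[labels[i], labels[j]] for i, j in edges)
    return e

# Four segments with softmax-like class probabilities turned into
# negative-log unary costs, plus a chain of adjacency edges.
unary = -np.log(np.array([[0.7, 0.2, 0.1],
                          [0.6, 0.3, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.2, 0.2, 0.6]]))
edges = [(0, 1), (1, 2), (2, 3)]
pairwise = 0.5 * (1.0 - np.eye(NUM_CLASSES))  # Potts penalty on disagreement

# Exhaustive MAP search is feasible only at toy scale; real CRF inference
# would use graph cuts or a mean-field approximation instead.
best = min(product(range(NUM_CLASSES), repeat=4),
           key=lambda l: crf_energy(l, unary, edges, pairwise))
print("MAP labeling:", best)
```

The Potts pairwise term encodes the contextual prior that adjacent segments tend to share a label; replacing it with class-pair-specific costs would let the graph express richer relations between geographic entities, such as buildings bordering roads.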
