
Deep Interactive Region Segmentation and Captioning

Based on recent developments in dense image captioning, it is now possible to describe every object in a photographed scene with a caption, with objects localized by bounding boxes. However, interpreting such output is not trivial because of the many overlapping bounding boxes. Furthermore, in current captioning frameworks, the user cannot express personal preferences to exclude areas that are not of interest. In this paper, we propose a novel hybrid deep learning architecture for interactive region segmentation and captioning in which the user can specify an arbitrary region of the image to be highlighted and described. To this end, we trained three different very deep architectures on our special training data to identify the User Intention Region (UIR). In parallel, a dense image captioning model is used to locate all objects in the scene with bounding boxes and to produce their linguistic descriptions. In our fusion approach, the detected UIR is explained with the caption of the best-matching bounding box. To the best of our knowledge, this is the first work that provides such a comprehensive output. Our experiments show the superiority of the proposed approach over state-of-the-art interactive segmentation methods on several well-known segmentation benchmarks. In addition, replacing the bounding boxes with the result of the interactive segmentation leads to a better understanding of the dense image captioning output as well as an enhancement in object localization accuracy.
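The fusion step described in the abstract can be illustrated with a minimal sketch: given a binary UIR segmentation mask and the (bounding box, caption) pairs from a dense captioning model, select the caption of the box that best covers the UIR. The function names, the overlap criterion (fraction of the mask covered by the box), and the toy data below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mask_box_overlap(mask: np.ndarray, box: tuple) -> float:
    """Fraction of the UIR mask covered by a bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    inside = mask[y1:y2, x1:x2].sum()
    total = mask.sum()
    return float(inside) / float(total) if total > 0 else 0.0

def caption_for_uir(mask: np.ndarray, detections: list) -> str:
    """Return the caption whose box best overlaps the segmented UIR.

    `detections` is a list of (box, caption) pairs from the dense
    captioning branch; `mask` is the binary UIR segmentation.
    """
    best_caption, best_score = "", -1.0
    for box, caption in detections:
        score = mask_box_overlap(mask, box)
        if score > best_score:
            best_caption, best_score = caption, score
    return best_caption

# Toy example: a 100x100 mask with one highlighted region and two candidate boxes.
uir = np.zeros((100, 100), dtype=np.uint8)
uir[20:60, 30:70] = 1
dets = [((0, 0, 25, 25), "a blue sky"), ((25, 15, 75, 65), "a dog on the grass")]
print(caption_for_uir(uir, dets))  # -> "a dog on the grass"
```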

Metadata
Author: Ali Sharifi Boroujerdi, Maryam Khanian, Michael Breuß
DOI: https://doi.org/10.1109/SITIS.2017.27
ISBN: 978-1-5386-4283-2
Title of the source (English): 2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Jaipur, India
Publisher: IEEE
Place of publication: Piscataway, NJ
Document Type: Conference Proceeding
Language: English
Year of publication: 2018
Tag: Machine learning; deep learning
First Page: 103
Last Page: 110
Faculty/Chair: Fakultät 1 MINT - Mathematik, Informatik, Physik, Elektro- und Informationstechnik / FG Angewandte Mathematik