About Multimodal Semantic Segmentation:
With the increasing availability of remote sensing (RS) data from diverse platforms, the potential of multimodal RS techniques for large-scale segmentation has become evident. MMSeg-YREB, specifically designed for large-scale multimodal segmentation, integrates diverse data sources to provide complementary and comprehensive information. This integrated approach has demonstrated significant improvements in the accuracy and robustness of land use and land cover segmentation across vast urban and regional landscapes. MMSeg-YREB's extensive scope and diversity enable a wide range of applications, from detailed urban planning to efficient environmental monitoring. Leveraging contemporary artificial intelligence (AI) technologies and rich multimodal RS data, we aim to develop advanced semantic segmentation models with high transferability and scalability. These models hold the promise of driving forward technical innovations and accelerating the development of Earth observation (EO) applications across various cities and regions.
This contest is organized in conjunction with the 14th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS).
All participants have to submit:
1. The test set segmentation results, via the CodaLab platform (https://codalab.lisn.upsaclay.fr/competitions/19945) before the deadline. Each submission must also indicate: 1) the group ID, 2) the affiliation, 3) the list of members, and 4) the corresponding member with the related email address.
2. A report describing their developed methods and models, via the WHISPERS paper review system. A special session will be organized for accepted papers, which will also be included in the WHISPERS proceedings.
Moreover, winners will receive an award certificate and will be involved in writing an IEEE JSTARS paper summarizing the outcomes of this contest.