
Call for Special Session Papers

For SISAP 2025, we call for contributions to the following two special sessions:

Special session papers will supplement the regular research papers and be included in the proceedings of SISAP 2025, published by Springer in the Lecture Notes in Computer Science (LNCS) series.

Special session submissions may include vision/position papers, evaluated based on the quality of the arguments and ideas proposed. To ensure high-quality conference papers, all submissions, including invited papers, will undergo peer review. If a session receives many high-quality submissions, some may be moved to regular sessions, and conversely, relevant accepted submissions may be moved into special sessions.


IRMC: Interactive Retrieval for Multimedia Collections

Multimedia retrieval systems aim to resolve users' search-related information needs. These range from simply finding a specific item or a handful of media items to answering questions about the contents of a multimedia collection. The basic flow of such systems relies on similarity search over natural-language or content-based queries, followed by browsing the ranked result list. However, more often than not, the initial query will not find relevant items. In such cases, the next steps include rewriting, expanding, or refining the original query. In modern retrieval systems, interactive (human-in-the-loop) approaches enhance the original query or its result set. This can include applying filters, conducting image-to-image searches from the results, performing relevance feedback, and temporal querying. Additionally, these systems can internally optimize queries through query expansion or rewriting, with newer systems relying on LLMs to achieve this.
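
As a rough illustration of this retrieve-then-refine loop, the Python sketch below uses random unit vectors in place of real text and image embeddings; a concrete system would substitute a learned joint text/image embedding model and an approximate nearest-neighbour index. All names here (item_vectors, rank, and so on) are illustrative placeholders, not part of any system discussed in this call.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a real collection: in practice, item_vectors would hold embeddings
# produced by a joint text/image model, and the text query would be encoded with
# the same model; random unit vectors are used only to keep the sketch runnable.
NUM_ITEMS, DIM = 1000, 512
item_vectors = rng.normal(size=(NUM_ITEMS, DIM))
item_vectors /= np.linalg.norm(item_vectors, axis=1, keepdims=True)

def rank(query_vec, k=10):
    """Return indices of the k items most similar to the query (cosine similarity)."""
    scores = item_vectors @ query_vec  # dot product equals cosine for unit vectors
    return np.argsort(-scores)[:k]

# 1) Initial query: the (placeholder) text embedding retrieves a ranked result list.
text_query_vec = rng.normal(size=DIM)
text_query_vec /= np.linalg.norm(text_query_vec)
results = rank(text_query_vec)

# 2) Human-in-the-loop refinement: the user marks the top result as relevant,
#    and the system switches to an image-to-image (query-by-example) search.
refined_results = rank(item_vectors[results[0]])

# 3) Simple relevance feedback (Rocchio-style): nudge the query towards liked items.
liked = item_vectors[refined_results[:3]]
feedback_vec = text_query_vec + liked.mean(axis=0)
feedback_vec /= np.linalg.norm(feedback_vec)
final_results = rank(feedback_vec)
print(final_results)
```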

Several advanced multimedia retrieval systems are showcased annually in the interactive live search challenges: the Video Browser Showdown (VBS) at MMM and the Lifelog Search Challenge (LSC) at ICMR. These two venues let researchers try out new retrieval approaches for multimedia collections on realistic tasks, with both expert and novice users operating the systems. These interactive settings yield many lessons, some common to most systems and others system-specific, which typically fuel further research into multimedia retrieval approaches.

Interactive retrieval approaches have two distinct applications: exploring a diverse multimedia collection, or searching specialized multimedia datasets such as medical imaging and marine exploration. In the first case, a user wants to get an overview of the collection and explore it through diverse features. When working with specialized datasets, domain-specific content is analyzed; such a collection is relatively homogeneous, which makes searching for similarities difficult. Despite differences between individual multimedia collections, these domains face common challenges, such as limited annotations, homogeneous data, and the need for explainable results.

Beyond the challenge venues, other interesting interactive multimedia retrieval research for common and specialized collections is published in several multimedia conferences and journals, such as MMM, ACM ICMR, ACM MM, ACM MMSys, IEEE TMM, ACM TOMM, MTAP, and more. We believe this session is the ideal venue to bring together the SISAP community — specialized in similarity search, the backbone of retrieval systems — and the interactive multimedia retrieval community to share challenges, exchange insights, and foster research in both directions.

Submissions and Topics of Interest

We welcome the following types of contributions:

Topics include, but are not limited to:

Organizers


BRIDGES: Bridging Past and Present: Similarity Search for Digital Cultural Heritage and GLAM Content

With the increasing digitization of (tangible and non-tangible) cultural heritage (CH) collections and the significant growth of born-digital cultural heritage objects, Galleries, Libraries, Archives, and Museums (GLAM) organizations face challenges in managing, searching, and retrieving vast amounts of diverse multimedia content (images, videos, and increasingly also 3D models). One challenge, especially with digitized content, is the lack of proper metadata; off-the-shelf AI-based tools are often not very helpful, since the data they were trained on has little or no overlap with the historical context of GLAM organizations (especially when it comes to digitized non-tangible CH content). Traditional keyword-based retrieval methods often fail to capture the complex semantic relationships within cultural heritage materials. Similarity search techniques for CH and GLAM content, leveraging machine learning and information retrieval (and partly also aspects of computer vision), offer new possibilities for intuitive and effective content exploration.

Aside from the technical aspects, similarity search in GLAM collections raises important issues of fairness, context, and cultural sensitivity. The intent extends beyond identifying similar objects: these tools must also acknowledge and respect the significance and historical context of the material. This is especially important when working with underrepresented histories or artefacts that are not sufficiently documented. Involving GLAM professionals in designing and refining these systems can create more meaningful and accessible ways to explore cultural heritage.

This special session focuses on advances in similarity search and retrieval techniques for digital cultural heritage and GLAM content, addressing both technical innovations and real‐world applications. The session aims to bring together researchers, practitioners, and GLAM professionals to explore novel methodologies, discuss challenges, and showcase successful implementations of similarity‐based retrieval in cultural heritage contexts.

We welcome the following types of contributions:

We solicit contributions on (but not limited to) the following topics:

Organizers