As AI agents and tools increasingly perform tasks on behalf of users, the digital ecosystem is rapidly filling with dynamic, task-oriented systems that offer overlapping capabilities. This shift raises a fundamental question: how can users and orchestrating systems reliably discover, compare, and select the most appropriate agents for a given task? AgentSearch frames this challenge as a first-class information retrieval problem. Unlike traditional IR, where static documents are retrieved, agent search involves dynamic entities whose suitability depends on capabilities, behavior, reliability, constraints, and task-dependent performance. This workshop brings together researchers and practitioners to develop principled methods for indexing, retrieving, ranking, and evaluating AI agents and tools, establishing a coherent research agenda at the intersection of information retrieval and agentic systems.
AgentSearch
The First Workshop on Indexing, Retrieval, and Ranking of AI Agents
SIGIR 2026 · Melbourne | Naarm, Australia · July 24, 2026
About
Topics of Interest
Representation & Discovery of Agents
- Modeling and structuring agent capabilities, constraints, and costs
- Profiling, probing, and capacity estimation of agents and tools
- Organization and indexing of dynamic or behavior-based properties
- Agent and tool discovery in large, heterogeneous ecosystems
Retrieval, Ranking & Recommendation
- Learning-to-rank methods for agent selection
- Task-aware, constraint-aware, and reliability-aware ranking
- Agent recommendation and proactive suggestion mechanisms
- Multi-agent routing, orchestration, and coordination
- Hybrid and ensemble retrieval strategies
Evaluation & Benchmarking
- Metrics for agent suitability, robustness, and performance
- Task-dependent and workflow-aware evaluation protocols
- Benchmark and dataset construction for agent search
- LLM-based, human-centered, or preference-based evaluation
Responsible & Human-Centered Agent Search
- Safety, robustness, and risk mitigation in agent retrieval
- Fairness and exposure in agent marketplaces and rankings
- Transparency and explainability of agent recommendations
- Personalization and user modeling for agent selection
- Human-in-the-loop and interactive agent search systems
Workshop Program
The workshop features an interactive format including keynotes, panel discussions, breakout sessions, and poster presentations.
| Time | Activity |
|---|---|
| 09:00 - 09:15 | Opening Remarks |
| 09:15 - 10:00 | Keynote 1: TBD |
| 10:00 - 10:30 | Oral Presentations - Session 1: TBD |
| 10:30 - 11:00 | Coffee Break & Poster Session |
| 11:00 - 11:45 | Keynote 2: TBD |
| 11:45 - 12:30 | Oral Presentations - Session 2: TBD |
| 12:30 - 14:00 | Lunch Break |
| 14:00 - 15:00 | Panel Discussion: TBD |
| 15:00 - 15:30 | Challenge Results Presentation & Discussion |
| 15:30 - 16:00 | Coffee Break & Poster Session |
| 16:00 - 17:00 | Breakout Sessions & Interactive Discussions |
| 17:00 - 17:15 | Closing Remarks |
Keynotes & Invited Speakers
Mark Sanderson
RMIT University
Mark Sanderson is Professor of Information Retrieval at RMIT University and Dean of Research for the STEM College. He is widely recognized for his early work demonstrating the value of search result snippets, which are now a standard feature of modern search engines. His research spans information retrieval, web search, and human information interaction, with a particular focus on how users engage with and benefit from search systems.
Edgar Meij
Bloomberg
Edgar Meij is Head of AI Platforms within Bloomberg's Artificial Intelligence group, where he leads more than ten teams of engineers and researchers responsible for the company's core AI, NLP, machine learning, large language model, and search technology platforms. His expertise includes information retrieval, natural language processing, knowledge graphs, semantic search, and large-scale computing infrastructure.
More speakers to be announced soon.
Call for Papers
The First Workshop on Indexing, Retrieval, and Ranking of AI Agents invites submissions describing original research findings, preliminary results, proposals for new work, and recent relevant studies already published in high-quality venues.
Submission Types
We welcome two types of submissions:

- Extended Abstracts: A concise summary of work in progress or preliminary findings, outlining the main contribution and approach. At most 2 pages (excluding references).
- Research, Perspective, and Demo Papers: Original or recently published research on indexing, retrieval, and ranking of AI agents; perspective papers that suggest future directions and research opportunities; and demo papers that showcase systems for agent search. Submissions may follow the relevant SIGIR track guidelines (short/full papers, perspectives, or demonstrations). 4-9 pages (excluding references).
Submission Instructions
The page limit excludes references; all other content, including any appendix, must fit within it. As long as a paper meets the page limit, we care more about its quality and its potential to spur discussion at the workshop than about its length.
At least one author of each accepted paper must register for the workshop and present the paper in person (strongly preferred) or remotely.
We encourage, but do not require, authors to release any code and/or datasets associated with their paper. For demo papers, we particularly encourage anonymous code submission (e.g., via a link in the anonymized paper).
All submissions must be in PDF format and written in English.
Please use the SIGIR 2026 template (ACM two-column format).
For LaTeX, the following should be used:
\documentclass[sigconf,natbib=true,anonymous=true]{acmart}
All submissions will be reviewed double-blind by the program committee and judged on their relevance to the workshop, especially to the themes listed above, and on their potential to generate discussion. Submissions should be made through EasyChair: https://easychair.org/conferences/?conf=agentsearch2026. Any accompanying code should be submitted anonymously. When submitting, please select one of the paper types (Extended Abstract, Research, Perspective, or Demo).
Note: This is a non-archival workshop. Accepted papers may be uploaded to arXiv.org and subsequently submitted to other venues; authors retain full copyright. The workshop website will maintain links to the arXiv versions of the papers.
Important Dates
- Paper Submission Deadline: May 8, 2026 (AoE) (extended from April 15, 2026)
- Notification of Acceptance: May 20, 2026 (AoE)
- Camera-Ready Deadline: TBD
- Workshop Date: July 24, 2026
AgentSearch Challenge
Participate in the first shared task on ranking AI agents for given task descriptions. The challenge provides a benchmark for evaluating agent search systems in practical scenarios.
Challenge Overview
Coming soon.
Dataset & Tasks
Coming soon.
Evaluation
Coming soon.
Challenge Timeline
- Registration Opens: TBD
- Dataset Release: TBD
- Submission Deadline: TBD
- Results Announcement: TBD
Program Committee
- Chuan Meng, University of Edinburgh
- Haolun Wu, McGill University
- Hossein A. Rahmani, University College London
- Shuofei Qiao, Zhejiang University
Contact
For questions, suggestions, or more information about the workshop, please contact us: