About the Session
As the field of medical imaging evolves, the integration of multi-agent large language models (LLMs) presents groundbreaking opportunities to enhance diagnostic accuracy and streamline clinical workflows. This interactive session provides a deep dive into how multi-agent LLMs collaborate to process and analyze medical imaging data, from system construction to real-world optimization.
Attendees will gain hands-on experience in building, applying, and refining multi-agent LLMs, exploring key frameworks such as PyTorch, MONAI, HuggingFace, and LangGraph. Through guided exercises, participants will learn how collaborative intelligence among AI agents can improve image interpretation, diagnostic workflows, and clinical decision support.
This is a BYOD (Bring Your Own Device) session. Participants should bring a laptop with the following frameworks installed:
- PyTorch
- MONAI
- HuggingFace
- LangGraph
Join us to explore how multi-agent LLMs are shaping the future of medical imaging AI!
Objectives
- Describe the fundamental components and architecture of multi-agent LLMs for medical imaging AI.
- Explain the applications and potential benefits of multi-agent LLMs in improving diagnostic workflows and clinical decision support.
- Identify key challenges and considerations in optimizing multi-agent LLM performance for clinical applications.
- Construct a baseline multi-agent LLM system using PyTorch, MONAI, HuggingFace, and LangGraph for medical imaging data processing (a minimal sketch follows this list).
- Implement multi-agent collaboration strategies to enhance image interpretation and streamline workflows.
- Evaluate the performance of multi-agent LLMs using clinical benchmarks and optimization techniques for improved diagnostic accuracy.
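To make the construction objective concrete, below is a minimal, hypothetical sketch of the kind of baseline the session builds: two cooperating agents wired together with LangGraph. The `ImagingState` schema, the agent names, and the placeholder node logic are illustrative assumptions, not the session's actual code; in the session the nodes would wrap real MONAI preprocessing and HuggingFace/PyTorch models.

```python
# Hypothetical two-agent pipeline sketch (assumes: pip install langgraph,
# plus torch, monai, transformers for the full stack used in the session).
from typing import TypedDict
from langgraph.graph import StateGraph, START, END


class ImagingState(TypedDict):
    image_path: str   # path to a study, e.g. a chest CT volume (illustrative field)
    findings: str     # output of the image-analysis agent
    report: str       # output of the report-drafting agent


def image_analyst(state: ImagingState) -> dict:
    # Placeholder: a real node would load the volume with MONAI transforms
    # and run a PyTorch/HuggingFace vision model to extract findings.
    return {"findings": f"Candidate abnormality detected in {state['image_path']}"}


def report_writer(state: ImagingState) -> dict:
    # Placeholder: a real node would prompt an LLM to turn findings into a draft report.
    return {"report": f"Draft report: {state['findings']}"}


# Wire the agents into a simple sequential graph: analysis -> report drafting.
graph = StateGraph(ImagingState)
graph.add_node("image_analyst", image_analyst)
graph.add_node("report_writer", report_writer)
graph.add_edge(START, "image_analyst")
graph.add_edge("image_analyst", "report_writer")
graph.add_edge("report_writer", END)

app = graph.compile()
result = app.invoke({"image_path": "example_ct.nii.gz", "findings": "", "report": ""})
print(result["report"])
```

The sequential two-node graph keeps the example small; the collaboration strategies covered in the session would add routing, critique, or consensus steps between agents before the report is finalized.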
Presented By