Summary
How to Enhance GPU Utilization in Deep Learning?
Deep learning workloads encompass both throughput-intensive training tasks and latency-sensitive inference tasks. Traditionally, dedicated GPU clusters are provisioned separately for training and inference to meet strict Service-Level Objectives (SLOs), often leading to underutilized resources. The paper "PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications" introduces a system that enables multiple deep learning applications to time-share the same GPU efficiently.
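To build intuition before the session, here is a minimal sketch of the core idea behind pipelined context switching: instead of transferring an entire model to the GPU and only then executing it, the transfer of later layers is overlapped with the execution of earlier ones. This is a toy simulation with threads and sleeps, not the paper's implementation; the layer count and per-layer timings are made-up assumptions.

```python
import threading
import queue
import time

# Hypothetical timings (assumptions for illustration, not from the paper):
# transferring one layer to the GPU and executing it each take ~10 ms.
TRANSFER_S = 0.01
EXECUTE_S = 0.01
NUM_LAYERS = 8

def sequential_switch():
    """Naive context switch: transfer the whole model, then run it."""
    start = time.perf_counter()
    for _ in range(NUM_LAYERS):
        time.sleep(TRANSFER_S)   # copy layer host -> GPU
    for _ in range(NUM_LAYERS):
        time.sleep(EXECUTE_S)    # execute layer
    return time.perf_counter() - start

def pipelined_switch():
    """Pipelined switch: overlap layer transfer with execution."""
    ready = queue.Queue()

    def transfer():
        for i in range(NUM_LAYERS):
            time.sleep(TRANSFER_S)   # copy layer i host -> GPU
            ready.put(i)             # signal: layer i is now resident

    t = threading.Thread(target=transfer)
    start = time.perf_counter()
    t.start()
    for _ in range(NUM_LAYERS):
        ready.get()                  # wait until the next layer has arrived
        time.sleep(EXECUTE_S)        # execute it while transfers continue
    t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    seq = sequential_switch()
    pipe = pipelined_switch()
    print(f"sequential: {seq*1000:.0f} ms, pipelined: {pipe*1000:.0f} ms")
```

In this toy setting the sequential switch takes roughly (transfer + execute) x layers, while the pipelined switch takes roughly one transfer plus the execution time, which is the kind of saving that makes fine-grained time-sharing of a GPU practical.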
Join us as we explore the paper in depth.
The access link for online participation will be posted here the day before the event.
Please read the paper in advance: https://www.usenix.org/conference/osdi20/presentation/bai
About Our Paper Reading Sessions
Our Paper Reading Sessions provide a space for AI users, data scientists, and software engineers to stay up to date with the latest research, engage in insightful discussions, and explore practical applications of cutting-edge AI advancements.
Stay ahead in AI—join the conversation!
Important Information
Photographs, audio recordings, and video will be made during the event. By participating in the event, you agree that photos, audio, and video recordings in which you are recognizable may be published as part of the public relations work of the AI Service Center Berlin-Brandenburg and the HPI.
Are you interested in our free workshops and other events that we offer? Then sign up for our newsletter and take a look at our event overview.
The AI Service Center Berlin-Brandenburg is a project of the Hasso Plattner Institute funded by the Federal Ministry of Education and Research. Its aim is to lower the barriers to using AI in business and society.
Requirements
Join the AI Maker Community Slack Workspace: Communication during the session will happen in our Slack workspace, in the #ai-maker-sessions channel: https://join.aimaker.community
Please register for this free event on Eventbrite.
Event Details
This is an online event. We will post a link to the session in Slack shortly before the session starts.