ExaTrkX as a Service

Date:

In this talk, we describe an implementation of the Exa.TrkX particle-tracking pipeline as a service using the NVIDIA Triton Inference Server. Clients send track-finding requests to the server, which processes them and returns track candidates. The pipeline consists of three discrete deep learning models and two CUDA-based algorithms. Because of the pipeline's heterogeneity and chain of dependencies, we will explore different server settings to maximize throughput, and we will study the scalability of the inference server and the resulting time savings on the client side.
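To make the client-server interaction concrete, the sketch below shows how a client might submit one event's spacepoint features to a Triton server over gRPC and receive track labels back, using the standard tritonclient Python API. The model name ("exatrkx"), the tensor names ("FEATURES", "TRACK_LABELS"), and their shapes are illustrative assumptions, not the actual interface of the deployed pipeline.

```python
# Minimal sketch of a Triton gRPC client for a track-finding service.
# Assumptions (not from the talk): the server exposes an ensemble model
# named "exatrkx" that takes a float32 spacepoint-feature tensor "FEATURES"
# and returns per-spacepoint track labels in a tensor "TRACK_LABELS".
import numpy as np
import tritonclient.grpc as grpcclient


def find_tracks(spacepoints: np.ndarray, url: str = "localhost:8001") -> np.ndarray:
    """Send one event's spacepoint features to the server and return track labels."""
    client = grpcclient.InferenceServerClient(url=url)

    # Describe the input tensor and attach the event data.
    features = grpcclient.InferInput("FEATURES", list(spacepoints.shape), "FP32")
    features.set_data_from_numpy(spacepoints.astype(np.float32))

    # Request only the output we need.
    labels_out = grpcclient.InferRequestedOutput("TRACK_LABELS")

    # One synchronous track-finding request; the server runs the full
    # pipeline (deep learning models plus CUDA-based steps) and returns
    # the resulting track candidates.
    result = client.infer(model_name="exatrkx",
                          inputs=[features],
                          outputs=[labels_out])
    return result.as_numpy("TRACK_LABELS")


if __name__ == "__main__":
    # Dummy event: 1000 spacepoints with 3 features each (e.g. r, phi, z).
    event = np.random.rand(1000, 3).astype(np.float32)
    print(find_tracks(event).shape)
```

In a real deployment one would presumably reuse the client connection across events and issue asynchronous requests, which is where the server-side batching and instance-group settings explored in the talk become relevant.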
