

Engineering Manager - Inference

Perplexity - San Francisco, CA


Job Description

About The Role

We are looking for an Inference Engineering Manager to lead our AI Inference team. This is a unique opportunity to build and scale the infrastructure that powers Perplexity's products and APIs, serving millions of users with state-of-the-art AI capabilities. You will own the technical direction and execution of our inference systems while building and leading a world-class team of inference engineers. Our current stack includes Python, PyTorch, Rust, C++, and Kubernetes. You will help architect and scale the large-scale deployment of machine learning models behind Perplexity's Comet, Sonar, Search, and Deep Research products.

Why Perplexity?

  • Build SOTA systems that are the fastest in the industry with cutting-edge technology
  • High-impact work on a smaller team with significant ownership and autonomy
  • Opportunity to build 0-to-1 infrastructure from scratch rather than maintaining legacy systems
  • Work on the full spectrum: reducing cost, scaling traffic, and pushing the boundaries of inference
  • Direct influence on the technical roadmap and team culture at a rapidly growing company

Responsibilities

  • Lead and grow a high-performing team of AI inference engineers
  • Develop APIs for AI inference used by both internal and external customers
  • Architect and scale our inference infrastructure for reliability and efficiency
  • Benchmark and eliminate bottlenecks throughout our inference stack
  • Drive large sparse/MoE model inference at rack scale, including sharding strategies for massive models
  • Push the frontier by building inference systems that support sparse attention, disaggregated prefill/decode serving, and more
  • Improve the reliability and observability of our systems and lead incident response
  • Own technical decisions around batching, throughput, latency, and GPU utilization
  • Partner with ML research teams on model optimization and deployment
  • Recruit, mentor, and develop engineering talent
  • Establish team processes, engineering standards, and operational excellence

Qualifications

  • 5+ years of engineering experience, with 2+ years in a technical leadership or management role
  • Deep experience with ML systems and inference frameworks (PyTorch, TensorFlow, ONNX, TensorRT, vLLM)
  • Strong understanding of LLM architecture: Multi-Head Attention, Multi/Grouped-Query Attention, and common layers
  • Experience with inference optimizations: batching, quantization, kernel fusion, FlashAttention
  • Familiarity with GPU characteristics, roofline models, and performance analysis
  • Experience deploying reliable, distributed, real-time systems at scale
  • Track record of building and leading high-performing engineering teams
  • Experience with parallelism strategies: tensor parallelism, pipeline parallelism, expert parallelism
  • Strong technical communication and cross-functional collaboration skills

Nice to Have

  • Experience with CUDA, Triton, or custom kernel development
  • Background in training infrastructure and RL workloads
  • Experience with Kubernetes and container orchestration at scale
  • Published work or contributions to inference optimization research

Compensation Range: $300K - $385K

Created: 2026-02-20
