Software Development Engineer - AI/ML, AWS Neuron
Amazon - Cupertino, CA
Job Description
Job Summary: Amazon Web Services (AWS) is seeking a Software Development Engineer for its Annapurna Labs team, which builds the AWS Neuron SDK for deep learning and GenAI workloads. The role involves architecting and implementing features, optimizing machine learning models for performance on AWS's custom ML accelerators, and collaborating with cross-functional teams to enhance inference capabilities.

Responsibilities:
• Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.
• Participate in all stages of the ML system development lifecycle, including distributed-computing architecture design, implementation, performance profiling, hardware-specific optimization, testing, and production deployment.
• Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
• Design and implement high-performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
• Analyze and optimize system-level performance across multiple generations of Neuron hardware.
• Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
• Implement optimizations such as fusion, sharding, tiling, and scheduling.
• Conduct comprehensive testing, including unit and end-to-end model testing, with continuous deployment and releases through pipelines.
• Work directly with customers to enable and optimize their ML models on AWS accelerators.
• Collaborate across teams to develop innovative optimization techniques.

Qualifications:

Required:
• 3+ years of non-internship professional software development experience.
• Bachelor's degree or equivalent in Computer Science.
• 3+ years of non-internship experience designing or architecting (design patterns, reliability, and scaling) new and existing systems.
• Fundamentals of machine learning and LLMs, including their architectures and their training and inference lifecycles, along with hands-on experience optimizing model execution.
• Software development experience in C++ or Python (at least one language is required).
• Strong understanding of system performance, memory management, and parallel computing principles.
• Proficiency in debugging, profiling, and applying software engineering best practices in large-scale systems.

Preferred:
• Familiarity with PyTorch, JIT compilation, and AOT tracing.
• Familiarity with CUDA kernels or equivalent low-level ML kernels.
• Experience developing high-performance kernels with frameworks such as CUTLASS or FlashInfer.
• Familiarity with tile-level programming syntax and semantics similar to Triton's.
• Experience with online/offline inference serving using vLLM, SGLang, TensorRT, or similar platforms in production environments.
• Deep understanding of computer architecture and operating-system-level software, plus working knowledge of parallel computing.

Company: Launched in 2006, Amazon Web Services (AWS) began exposing key infrastructure services to businesses in the form of web services -- now widely known as cloud computing. Founded in 2002, the company is headquartered in Seattle, USA, and has more than 10,000 employees.
Created: 2026-03-07