Research Engineer - Multimodal Embodiment Trust (multiple locations)
Meta - Menlo Park, CA
Job Description
Summary: Meta is seeking Research Engineers to join the Multimodal Embodiment Trust team within Meta Superintelligence Labs, dedicated to advancing the safe development and deployment of Superintelligent AI. The Product & Applied Research group is focused on building AI-powered experiences for people, bringing frontier models to consumers. Our two primary goals are: to build a superintelligent personal sidekick that billions of people use to make their lives better; and to provide fresh, personal, insightful entertainment by allowing people to make, share, and consume AI-generated media and immersive experiences.

Required Skills: Research Engineer - Multimodal Embodiment Trust (multiple locations)

Responsibilities:
1. Design, implement, and evaluate novel, systemic, and foundational safety techniques for large language models and multimodal AI systems
2. Create, curate, and analyze high-quality datasets for safety systems and foundations
3. Fine-tune and evaluate LLMs to adhere to Meta's safety policies and evolving global standards
4. Contribute to applied research through risk analysis, experimentation, measurement, and building mitigations
5. Work closely with researchers, engineers, and cross-functional partners to integrate safety solutions into Meta's products and services

Minimum Qualifications:
6. Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience
7. PhD in Computer Science, Machine Learning, or a relevant technical field
8. Experience in LLM/NLP, computer vision, or related AI/ML model training
9. End-to-end experience working on complex technical projects
10. Publications at peer-reviewed conferences (e.g. ICLR, NeurIPS, ICML, KDD, CVPR, ICCV, ACL)
11. Programming experience in Python and hands-on experience with frameworks such as PyTorch

Preferred Qualifications:
12. Hands-on experience applying state-of-the-art techniques to build robust AI system solutions for safety and policy adherence
13. Experience developing, fine-tuning, or evaluating LLMs across multiple languages and modalities (text, image, voice, video, reasoning, etc.)
14. Demonstrated ability to innovate in foundational safety research, including custom guideline enforcement, dynamic policy adaptation, and rapid hotfixing of model vulnerabilities
15. Experience designing, curating, and evaluating safety datasets, including adversarial and borderline prompt cases
16. Experience with distributed training of LLMs (hundreds/thousands of GPUs), scalable safety mitigations, and automation of safety tooling

Public Compensation: $88.46/hour to $257,000/year + bonus + equity + benefits

Industry: Internet

Equal Opportunity: Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment. Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at .
Created: 2026-03-07