Evaluation Scenario Writer - AI Agent Testing
Mindrift - Iowa, LA
Job Description
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.

At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.

What We Do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.

About The Role
We're looking for someone who can design realistic, structured evaluation scenarios for LLM-based agents. You'll create test cases that simulate human-performed tasks and define gold-standard behavior to compare agent actions against. You'll work to ensure each scenario is clearly defined, well-scored, and easy to execute and reuse. You'll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.

Although every project is unique, you might typically:
- Create structured test cases that simulate complex human workflows
- Define gold-standard behavior and scoring logic to evaluate agent actions (see the illustrative sketch at the end of this posting)
- Analyze agent logs, failure modes, and decision paths
- Work with code repositories and test frameworks to validate your scenarios
- Iterate on prompts, instructions, and test cases to improve clarity and difficulty
- Ensure that scenarios are production-ready, easy to run, and reusable

How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Requirements
- Bachelor's and/or Master's degree in Computer Science, Software Engineering, Data Science / Data Analytics, Artificial Intelligence / Machine Learning, Computational Linguistics / Natural Language Processing (NLP), Information Systems, or another related field
- Background in QA, software testing, data analysis, or NLP annotation
- Good understanding of test design principles (e.g., reproducibility, coverage, edge cases)
- Strong written communication skills in English
- Comfortable with structured formats like JSON/YAML for scenario description
- Able to define expected agent behaviors (gold paths) and scoring logic
- Basic experience with Python and JavaScript
- Curious and open to working with AI-generated content, agent logs, and prompt-based behavior

Nice to Have
- Experience writing manual or automated test cases
- Familiarity with LLM capabilities and typical failure modes
- Understanding of scoring metrics (precision, recall, coverage, reward functions)

Benefits
- Get paid for your expertise, with rates that can go up to $80/hour depending on your skills, experience, and project needs
- Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
- Participate in an advanced AI project and gain valuable experience to enhance your portfolio
- Influence how future AI models understand and communicate in your field of expertise

Seniority level: Entry level
Employment type: Part-time
Job function: Other
Industries: IT Services and IT Consulting
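For illustration only, here is a minimal sketch of what a structured scenario with a gold path and scoring logic can look like. The scenario fields, the gold-path steps, and the score_run helper are hypothetical examples invented for this sketch, not Mindrift's actual formats or tooling.

```python
import json

# Hypothetical scenario description: a structured test case with a
# gold-standard action sequence (gold path) and simple scoring weights.
# All field names are illustrative, not an actual Mindrift format.
scenario = {
    "id": "refund-request-001",
    "goal": "Process a customer refund for a damaged item",
    "gold_path": [
        "open_order",
        "verify_damage_claim",
        "issue_refund",
        "send_confirmation",
    ],
    "scoring": {"step_weight": 0.2, "completion_bonus": 0.2},
}


def score_run(scenario: dict, agent_actions: list[str]) -> float:
    """Score an agent's action log against the scenario's gold path.

    Awards step_weight for each gold-path step the agent performed in
    order, plus completion_bonus if the full path was covered.
    (A deliberately simple scoring rule, for illustration only.)
    """
    gold = scenario["gold_path"]
    weights = scenario["scoring"]
    # Advance through the gold path as matching actions appear in order;
    # extra actions in between are ignored rather than penalized here.
    matched = 0
    for action in agent_actions:
        if matched < len(gold) and action == gold[matched]:
            matched += 1
    score = matched * weights["step_weight"]
    if matched == len(gold):
        score += weights["completion_bonus"]
    return round(score, 2)


if __name__ == "__main__":
    # Example agent log: one extra action, but the full gold path is covered.
    log = ["open_order", "check_inventory", "verify_damage_claim",
           "issue_refund", "send_confirmation"]
    print(json.dumps({"scenario": scenario["id"],
                      "score": score_run(scenario, log)}))
    # -> {"scenario": "refund-request-001", "score": 1.0}
```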
Created: 2026-02-10