Mindrift
Evaluation Scenario Writer - AI Agent Testing Specialist
Mindrift, Frankfort, Kentucky, United States
Overview
About The Role
We are looking for someone who can design realistic, structured evaluation scenarios for LLM-based agents. You will create test cases that simulate human-performed tasks and define gold-standard behavior against which agent actions are compared. You will work to ensure each scenario is clearly defined, well scored, and easy to execute and reuse. A sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions are important for this role. What you do may vary by project; typical tasks are listed under Responsibilities below.

How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Responsibilities
- Create structured test cases that simulate complex human workflows
- Define gold-standard behavior and scoring logic to evaluate agent actions (a sketch of what this can look like follows this list)
- Analyze agent logs, failure modes, and decision paths
- Work with code repositories and test frameworks to validate your scenarios
- Iterate on prompts, instructions, and test cases to improve clarity and difficulty
- Ensure that scenarios are production-ready, easy to run, and reusable
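For orientation only, the sketch below shows one plausible shape for such a test case: a structured scenario with a gold-standard action sequence ("gold path") and toy scoring logic. This is not a Mindrift format; every field name, step name, and scoring rule here is a hypothetical illustration.

    import json

    # Hypothetical scenario definition: a structured test case with a
    # gold-standard action sequence ("gold path") and a simple scoring rule.
    # The schema and every value below are illustrative, not a real format.
    SCENARIO = json.loads("""
    {
      "id": "refund-request-001",
      "task": "Process a customer refund for a damaged item",
      "inputs": {"order_id": "A-1042", "reason": "damaged"},
      "gold_path": ["lookup_order", "verify_damage_claim",
                    "issue_refund", "notify_customer"],
      "scoring": {"require_order": true}
    }
    """)

    def score_agent_run(agent_actions, scenario):
        """Toy scoring: fraction of gold-path steps the agent performed,
        halved if the scenario requires order and the steps are out of order."""
        gold = scenario["gold_path"]
        hit = [step for step in gold if step in agent_actions]
        coverage = len(hit) / len(gold)
        if scenario["scoring"]["require_order"]:
            positions = [agent_actions.index(step) for step in hit]
            if positions != sorted(positions):
                return coverage * 0.5
        return coverage

    # The agent skipped verification but kept the remaining steps in order.
    print(score_agent_run(["lookup_order", "issue_refund", "notify_customer"],
                          SCENARIO))  # 0.75

In practice such a scenario would typically live in its own JSON or YAML file so it is easy to run and reuse; the inline string above just keeps the sketch self-contained.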
Requirements
- Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science / Data Analytics, Artificial Intelligence / Machine Learning, Computational Linguistics / Natural Language Processing (NLP), Information Systems, or a related field
- Background in QA, software testing, data analysis, or NLP annotation
- Good understanding of test design principles (e.g., reproducibility, coverage, edge cases)
- Strong written communication skills in English
- Comfortable with structured formats like JSON/YAML for scenario description
- Ability to define expected agent behaviors (gold paths) and scoring logic
- Basic experience with Python and JavaScript
- Curious and open to working with AI-generated content, agent logs, and prompt-based behavior
Nice to Have
- Experience in writing manual or automated test cases
- Familiarity with LLM capabilities and typical failure modes
- Understanding of scoring metrics (precision, recall, coverage, reward functions); a toy illustration follows this list
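As a toy illustration only (hand-rolled, not project tooling; the action names are invented), precision and recall over an agent's actions versus a gold path can be computed like this in Python:

    def action_metrics(agent_actions, gold_path):
        """Toy set-based metrics: precision = share of the agent's actions
        that were on the gold path; recall = share of gold-path steps the
        agent actually performed (i.e., gold-path coverage)."""
        agent, gold = set(agent_actions), set(gold_path)
        true_positives = agent & gold
        return {
            "precision": len(true_positives) / len(agent) if agent else 0.0,
            "recall": len(true_positives) / len(gold) if gold else 0.0,
        }

    # One extraneous action and one missed gold step: both metrics are 2/3.
    print(action_metrics(
        ["lookup_order", "issue_refund", "send_marketing_email"],
        ["lookup_order", "verify_damage_claim", "issue_refund"]))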
Benefits
- Contribute on your own schedule, from anywhere in the world
- Get paid for your expertise, with rates that can go up to $80/hour depending on your skills, experience, and project needs
- Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
- Participate in an advanced AI project and gain valuable experience to enhance your portfolio
- Influence how future AI models understand and communicate in your field of expertise
Seniority level
Entry level

Employment type
Part-time

Job function
Other

Industries
IT Services and IT Consulting

Note: This posting may include location-specific eligibility details. Please submit your resume in English and indicate your level of English where applicable.