Mindrift
Evaluation Scenario Writer - AI Agent Testing Specialist
Mindrift, Dallas, Texas, United States, 75215
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.
What We Do
The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.
About The Role
We're looking for someone who can design realistic and structured evaluation scenarios for LLM-based agents. You'll create test cases that simulate human-performed tasks and define gold-standard behavior to compare agent actions against. You'll work to ensure each scenario is clearly defined, well-scored, and easy to execute and reuse. You'll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.
Typical responsibilities
Designing structured test scenarios based on real‑world tasks
Defining the golden path and acceptable agent behavior (see the illustrative sketch after this list)
Annotating task steps, expected outputs, and edge cases
Working with devs to test your scenarios and improve clarity
Reviewing agent outputs and adapting tests accordingly
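To make these responsibilities concrete, here is a minimal sketch of what a structured evaluation scenario could look like, written as a Python dict and serialized to JSON (the requirements below mention JSON/YAML as typical formats). Every field name, the sample task, and the scoring block are hypothetical illustrations, not a prescribed Mindrift or Toloka schema.

import json

# Hypothetical scenario definition: task, golden path, edge cases, and scoring.
# All field names and values below are illustrative assumptions only.
scenario = {
    "scenario_id": "expense-report-001",
    "task": "File an expense report for a $42 client lunch in the demo expense tool.",
    "golden_path": [
        {"step": 1, "action": "open_expense_form", "expected_output": "blank form displayed"},
        {"step": 2, "action": "enter_amount", "args": {"amount": 42.00}, "expected_output": "amount set to 42.00"},
        {"step": 3, "action": "select_category", "args": {"category": "meals"}, "expected_output": "category set to meals"},
        {"step": 4, "action": "submit_form", "expected_output": "confirmation message shown"},
    ],
    "acceptable_variations": ["steps 2 and 3 may be performed in either order"],
    "edge_cases": [
        {"input": "amount entered as '$42'", "expected_behavior": "agent normalizes the value to 42.00 before submitting"}
    ],
    "scoring": {"required_steps": [1, 4], "pass_threshold": 0.75},
}

# Serializing keeps the scenario easy to store, review, and reuse.
print(json.dumps(scenario, indent=2))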
How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.
Requirements
Bachelor's and/or Master's Degree in Computer Science, Software Engineering, Data Science / Data Analytics, Artificial Intelligence / Machine Learning, Computational Linguistics / Natural Language Processing (NLP), Information Systems, or other related fields.
Background in QA, software testing, data analysis, or NLP annotation
Good understanding of test design principles (e.g., reproducibility, coverage, edge cases)
Strong written communication skills in English
Comfortable with structured formats like JSON/YAML for scenario description
Can define expected agent behaviors (gold paths) and scoring logic
Basic experience with Python and JS
Curious and open to working with AI-generated content, agent logs, and prompt-based behavior
Ready to learn new methods, able to switch between tasks and topics quickly, and willing to work with challenging, complex guidelines at times
Our freelance role is fully remote, so you just need a laptop, an internet connection, available time, and enthusiasm to take on a challenge
Nice to Have
Experience in writing manual or automated test cases
Familiarity with LLM capabilities and typical failure modes
Understanding of scoring metrics (precision, recall, coverage, reward functions)
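As a rough illustration of how such metrics can apply to agent evaluation, the sketch below scores an agent's action trace against a gold path at the step level: precision is the share of agent actions that were on the gold path, and recall (coverage) is the share of gold-path actions the agent actually performed. This is an assumed, simplified formulation based on set overlap, ignoring ordering and repeated actions; it is not a project-specified scoring function, and the action names are invented.

def step_precision_recall(agent_actions, gold_actions):
    # Precision: fraction of the agent's actions that appear on the gold path.
    # Recall (coverage): fraction of gold-path actions the agent performed.
    agent_set, gold_set = set(agent_actions), set(gold_actions)
    hits = agent_set & gold_set
    precision = len(hits) / len(agent_set) if agent_set else 0.0
    recall = len(hits) / len(gold_set) if gold_set else 0.0
    return precision, recall

gold = ["open_expense_form", "enter_amount", "select_category", "submit_form"]
agent = ["open_expense_form", "enter_amount", "check_weather", "submit_form"]

p, r = step_precision_recall(agent, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75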
Benefits
Get paid for your expertise, with rates that can go up to $60/hour depending on your skills, experience, and project needs
Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
Participate in an advanced AI project and gain valuable experience to enhance your portfolio
Influence how future AI models understand and communicate in your field of expertise
Seniority level: Entry level
Employment type: Part-time
Job function: Other
Industries: IT Services and IT Consulting