Mindrift
Evaluation Scenario Writer - AI Agent Testing Specialist
Mindrift, New York, New York, US, 10261
About Mindrift
At Mindrift, innovation meets opportunity. We use the power of collective human intelligence to ethically shape the future of AI.
What We Do
Mindrift’s platform, launched and powered by Toloka, connects domain experts with cutting‑edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real‑world expertise from across the globe.
About The Role
We’re looking for someone who can design realistic and structured evaluation scenarios for LLM‑based agents. You’ll create test cases that simulate human‑performed tasks and define gold‑standard behavior to compare agent actions against. You’ll work to ensure each scenario is clearly defined, well‑scored, and easy to execute and reuse. You’ll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.
Responsibilities
Design structured test scenarios based on real‑world tasks
Define the golden path and acceptable agent behavior (a sketch of what such a scenario might look like follows this list)
Annotate task steps, expected outputs, and edge cases
Work with developers to test your scenarios and improve their clarity
Review agent outputs and adapt tests accordingly
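For illustration only, here is a minimal sketch of what a structured evaluation scenario with a golden path, edge cases, and simple scoring weights might look like. The task, field names, and weights are invented for the example and do not represent an actual Mindrift or Toloka schema; the same structure could equally be written in YAML.

# Illustrative only: a hypothetical evaluation scenario; field names are assumptions, not a real schema.
import json

scenario = {
    "id": "flight-booking-001",
    "task": "Book the cheapest nonstop flight from New York to Boston for next Friday",
    "golden_path": [  # gold-standard sequence of agent steps and their expected outcomes
        {"step": "search_flights", "expected": {"nonstop": True}},
        {"step": "sort_by_price", "expected": {"order": "ascending"}},
        {"step": "select_flight", "expected": {"rank": 1}},
        {"step": "confirm_booking", "expected": {"status": "confirmed"}},
    ],
    "edge_cases": ["no nonstop flights available", "ambiguous travel date"],
    "scoring": {"step_match_weight": 0.7, "final_state_weight": 0.3},
}

print(json.dumps(scenario, indent=2))  # the same scenario could be serialized as YAML instead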
How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills on your own schedule. From creating training prompts to refining model responses, you’ll help shape the future of AI while ensuring technology benefits everyone.
Requirements
Bachelor’s and/or Master’s degree in Computer Science, Software Engineering, Data Science/Data Analytics, Artificial Intelligence/ML, Computational Linguistics/NLP, Information Systems, or a related field
Background in QA, software testing, data analysis, or NLP annotation
Good understanding of test design principles (reproducibility, coverage, edge cases)
Strong written communication skills in English
Comfortable with structured formats like JSON/YAML for scenario description
Ability to define expected agent behaviors (golden paths) and scoring logic
Basic experience with Python and JavaScript
Curious and open to working with AI‑generated content, agent logs, and prompt‑based behavior
Ready to learn new methods and able to switch quickly between tasks and topics while working within challenging, complex guidelines
Our freelance role is fully remote, so all you need is a laptop, an internet connection, available time, and the enthusiasm to take on a challenge
Nice to Have
Experience in writing manual or automated test cases
Familiarity with LLM capabilities and typical failure modes
Understanding of scoring metrics (precision, recall, coverage, reward functions)
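As a rough illustration of this kind of step-level scoring (not a prescribed method), the sketch below compares an agent's executed steps against a golden path and computes precision and recall; the step names are hypothetical.

# Illustrative only: hypothetical step-level precision/recall against a golden path.
def score_run(golden_steps, agent_steps):
    expected = set(golden_steps)
    observed = set(agent_steps)
    matched = expected & observed
    # Precision: share of the agent's actions that were expected; recall: share of the golden path it covered.
    precision = len(matched) / len(observed) if observed else 0.0
    recall = len(matched) / len(expected) if expected else 0.0
    return {"precision": precision, "recall": recall}

print(score_run(
    ["search_flights", "sort_by_price", "select_flight", "confirm_booking"],
    ["search_flights", "select_flight", "confirm_booking", "send_email"],
))  # -> {'precision': 0.75, 'recall': 0.75}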
Benefits
Get paid for your expertise: rates can go up to $60/hour, depending on skills, experience, and project needs
Flexible, remote, freelance project that fits around professional or academic commitments
Experience on advanced AI projects to enhance your portfolio
Influence how future AI models understand and communicate in your field of expertise