Mindrift
Evaluation Scenario Writer - AI Agent Testing Specialist
Mindrift, San Antonio, Texas, United States, 78208
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.
Company Overview
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI. The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.
About The Role
We're looking for someone who can design realistic and structured evaluation scenarios for LLM-based agents. You'll create test cases that simulate human‑performed tasks and define gold‑standard behavior to compare agent actions against. You'll work to ensure each scenario is clearly defined, well‑scored, and easy to execute and reuse. You'll need a sharp analytical mind, attention to detail, and an interest in how AI agents make decisions.
Typical responsibilities include:
Create structured test cases that simulate complex human workflows
Define gold‑standard behavior and scoring logic to evaluate agent actions (see the sketch after this list)
Analyze agent logs, failure modes, and decision paths
Work with code repositories and test frameworks to validate your scenarios
Iterate on prompts, instructions, and test cases to improve clarity and difficulty
Ensure that scenarios are production‑ready, easy to run, and reusable
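To make the gold‑path idea above concrete, here is a minimal sketch of one way scoring logic might compare an agent's actions against a gold-standard path. Every name here (the action labels, the prefix-match rule, the function itself) is an illustrative assumption, not Mindrift's actual tooling.

# Hypothetical sketch: score an agent trajectory against a gold path.
# Action names and the prefix-match rule are illustrative assumptions.
def score_trajectory(gold_path: list[str], agent_actions: list[str]) -> float:
    """Give partial credit for the longest matching prefix of the gold path."""
    matched = 0
    for expected, actual in zip(gold_path, agent_actions):
        if expected != actual:
            break
        matched += 1
    return matched / len(gold_path)

gold = ["open_ticket", "look_up_customer", "issue_refund", "close_ticket"]
agent = ["open_ticket", "look_up_customer", "close_ticket"]
print(score_trajectory(gold, agent))  # 0.5 -- the agent skipped the refund step

A real project would likely use richer rules (unordered steps, forbidden actions, weighted steps), but the core task is the same: a precise definition of expected behavior plus a function that turns an agent log into a score.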
How To Get Started
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.
Requirements
Bachelor's and/or Master's degree in Computer Science, Software Engineering, Data Science, Artificial Intelligence, Machine Learning, Computational Linguistics, Natural Language Processing, Information Systems, or related fields
Background in QA, software testing, data analysis, or NLP annotation
Good understanding of test design principles (e.g., reproducibility, coverage, edge cases)
Strong written communication skills in English
Comfortable with structured formats like JSON/YAML for scenario description (an example schema follows this list)
Can define expected agent behaviors (gold paths) and scoring logic
Basic experience with Python and JavaScript
Curious and open to working with AI‑generated content, agent logs, and prompt‑based behavior
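As an illustration of the JSON/YAML point above, a scenario description might look like the following. This schema is invented for illustration; the platform's actual format is not specified in this posting.

import json

# Hypothetical scenario schema -- all field names are illustrative assumptions.
scenario = {
    "id": "refund-workflow-001",
    "task": "Process a refund request for an existing customer.",
    "inputs": {"customer_id": "C-1042", "order_id": "O-7781"},
    "gold_path": ["open_ticket", "look_up_customer", "issue_refund", "close_ticket"],
    "scoring": {"method": "prefix_match", "pass_threshold": 0.75},
}

print(json.dumps(scenario, indent=2))  # serializes cleanly for storage and reuse

Keeping scenarios in a plain, serializable format like this is what makes them production‑ready, easy to run, and reusable: any test harness can load, execute, and score them without custom parsing.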
Nice to Have
Experience in writing manual or automated test cases
Familiarity with LLM capabilities and typical failure modes
Understanding of scoring metrics (precision, recall, coverage, reward functions)
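For the scoring metrics just mentioned, here is a minimal sketch (with invented action sets) of how precision and recall could be computed over an agent's actions versus the expected ones; recall doubles as a coverage measure in this framing.

# Hypothetical sketch of set-based metrics; action names are invented.
expected = {"open_ticket", "look_up_customer", "issue_refund", "close_ticket"}
produced = {"open_ticket", "look_up_customer", "send_apology_email"}

true_positives = expected & produced
precision = len(true_positives) / len(produced)  # how many agent actions were correct
recall = len(true_positives) / len(expected)     # how many expected actions the agent covered

print(f"precision={precision:.2f}, recall={recall:.2f}")  # precision=0.67, recall=0.50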
Benefits
Get paid for your expertise, with rates up to $80/hour depending on your skills, experience, and project needs
Flexible remote freelance project that fits around your primary professional or academic commitments
Grow your portfolio by participating in an advanced AI project
Influence how future AI models understand and communicate in your field of expertise
Seniority level: Entry level
Employment type: Part‑time
Job function: Other
Industry: IT Services and IT Consulting