Mindrift
Location & Eligibility
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English proficiency.
About Mindrift
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.
What We Do
The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting‑edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real‑world expertise from across the globe.
Who We’re Looking For
We’re looking for curious and intellectually proactive contributors—people who double‑check assumptions and play devil’s advocate. If you thrive in ambiguity, enjoy remote asynchronous work, and want to learn how modern AI systems are tested and evaluated, we want to hear from you.
Project Overview
We are seeking QA experts for autonomous AI agents in a project focused on validating and improving complex task structures, policy logic, and agent evaluation frameworks. Throughout the project, you will balance quality assurance, research, and logical problem‑solving.
Responsibilities
Review evaluation tasks and scenarios for logic, completeness, and realism.
Identify inconsistencies, missing assumptions, or unclear decision points.
Define clear expected behaviours (gold standards) for AI agents.
Annotate cause‑effect relationships, reasoning paths, and plausible alternatives.
Think through complex systems and policies as a human would to ensure agents are tested properly.
Collaborate with QA, writers, or developers to suggest refinements or edge‑case coverage.
Requirements
Excellent analytical thinking: ability to reason about complex systems, scenarios, and logical implications.
Strong attention to detail: spot contradictions, ambiguities, and vague requirements.
Familiarity with structured data formats: read (not necessarily write) JSON/YAML.
Ability to assess scenarios holistically: identify what’s missing, unrealistic, or potentially breaking.
Good communication and clear writing (in English) to document findings.
We also value applicants who have:
Experience with policy evaluation, logic puzzles, case studies, or structured scenario design.
Background in consulting, academia, olympiads (e.g. logic/math/informatics), or research.
Exposure to LLMs, prompt engineering, or AI‑generated content.
Familiarity with QA or test‑case thinking (edge cases, failure modes, "what could go wrong").
Some understanding of how scoring or evaluation works in agent testing (precision, coverage, etc.).
Benefits
Competitive pay up to $60/hour depending on skills, experience, and project needs.
Flexible, remote, freelance project that fits around your primary professional or academic commitments.
Advanced AI project experience to enhance your portfolio.
Opportunity to influence how future AI models understand and communicate in your field of expertise.