Mindrift

AI Agent Evaluation Analyst (Freelance)

Mindrift, Sauk Trail Beach, Wisconsin, United States


This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English proficiency.

At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.

What we do

The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.

Who we're looking for

We're looking for curious and intellectually proactive contributors: the kind of person who double-checks assumptions and plays devil's advocate. Are you comfortable with ambiguity and complexity? Does an async, remote, flexible opportunity sound exciting? Would you like to learn how modern AI systems are tested and evaluated?

This is a flexible, project-based opportunity well-suited for:

Analysts, researchers, or consultants with strong critical thinking skills

Students (senior undergrads / grad students) looking for an intellectually interesting gig

People open to a part-time and non-permanent opportunity

About the project

We're looking for quality assurance (QA) reviewers of autonomous AI agents for a new project focused on validating and improving complex task structures, policy logic, and agent evaluation frameworks. Throughout the project, you'll balance quality assurance, research, and logical problem-solving. This opportunity is ideal for people who enjoy looking at systems holistically and thinking through scenarios, implications, and edge cases. You do not need a coding background, but you must be curious, intellectually rigorous, and capable of evaluating the soundness and consistency of complex setups. If you've ever excelled at consulting, CHGK (the "What? Where? When?" quiz competition), Olympiads, case solving, or systems thinking, you might be a great fit.

What you'll be doing

Review evaluation tasks and scenarios for logic, completeness, and realism

Identify inconsistencies, missing assumptions, or unclear decision points

Help define clear expected behaviors (gold standards) for AI agents

Annotate cause‑effect relationships, reasoning paths, and plausible alternatives

Think through complex systems and policies as a human would to ensure agents are tested properly

Work closely with QA, writers, or developers to suggest refinements or edge‑case coverage

Requirements

Excellent analytical thinking: Can reason about complex systems, scenarios, and logical implications

Strong attention to detail: Can spot contradictions, ambiguities, and vague requirements

Familiarity with structured data formats: Can read, though not necessarily write, JSON/YAML (see the sketch after this list)

Ability to assess scenarios holistically: What's missing, what's unrealistic, what might break?

Good communication and clear writing (in English) to document your findings
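
No coding is required, but it may help to see what "reading structured data" looks like in practice. Below is a minimal, purely illustrative Python sketch: the scenario fields ("task_id", "goal", "policy", "expected_behavior") are invented for this example and do not reflect this project's actual task format.

import json

# A hypothetical task scenario of the kind a reviewer might read.
# All field names and values are invented for illustration.
scenario = json.loads("""
{
  "task_id": "refund-policy-007",
  "goal": "Decide whether a refund request is eligible",
  "policy": {"refund_window_days": 30, "requires_receipt": true},
  "expected_behavior": "Approve only if within window and receipt provided"
}
""")

# A reviewer-style sanity check: are the fields a grader would need present?
required = ["task_id", "goal", "policy", "expected_behavior"]
missing = [field for field in required if field not in scenario]
if missing:
    print("Incomplete scenario, missing:", missing)
else:
    print("All required fields present for", scenario["task_id"])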

We also value applicants who have

Experience with policy evaluation, logic puzzles, case studies, or structured scenario design

Background in consulting, academia, olympiads (e.g. logic/math/informatics), or research

Exposure to LLMs, prompt engineering, or AI‑generated content

Familiarity with QA or test‑case thinking (edge cases, failure modes, "what could go wrong")

Some understanding of how scoring or evaluation works in agent testing, e.g. precision and coverage (see the sketch below)
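
For intuition only, here is a tiny Python sketch of how precision and coverage might be computed when grading agent behavior. All numbers and definitions are illustrative assumptions, not this project's actual metrics.

# Precision: of the cases flagged as failures, how many were real failures?
# Coverage: of the scenarios defined, how many were actually exercised?
flagged_failures = 8      # hypothetical: cases a grader marked as failures
true_failures = 6         # hypothetical: flagged cases confirmed as failures
scenarios_defined = 50    # hypothetical: scenarios in the evaluation set
scenarios_exercised = 45  # hypothetical: scenarios the agent actually ran

precision = true_failures / flagged_failures        # 6 / 8 = 0.75
coverage = scenarios_exercised / scenarios_defined  # 45 / 50 = 0.90
print(f"precision={precision:.2f}, coverage={coverage:.2f}")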

Benefits

Get paid for your expertise, with rates that can go up to $52/hour depending on your skills, experience, and project needs

Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments

Participate in an advanced AI project and gain valuable experience to enhance your portfolio

Influence how future AI models understand and communicate in your field of expertise
