
Freelance English Content Writer - AI Trainer
Mindrift, Sauk Trail Beach, Wisconsin, United States
Please submit your resume in English and indicate your level of English proficiency.
Mindrift connects specialists with project-based AI opportunities for leading tech companies, focused on testing, evaluating, and improving AI systems.
Participation is project-based, not permanent employment.
What This Opportunity Involves
This project requires you to adopt a range of different user personas and engage in realistic multi-turn conversations with LLMs, working towards a clearly defined goal. You will need to:
Use a range of tones and registers
Stress-test the models' ability to respond adequately across several abstract dimensions (e.g. instruction-following, emotional intelligence, consistency under changing constraints)
React and adapt to model output while maintaining tight focus on each individual task's requirements
Think of yourself as a controlled adversary, crafting plausible human dialogue that exposes subtle model weaknesses while maintaining narrative coherence.
For this, you will need to:
Think like a storyteller and a tester
Understand how humans actually speak, hesitate, contradict themselves, and escalate emotionally
Be able to deliberately engineer conversational pressure without breaking realism
Be methodical enough to document observations clearly and consistently
Be able to pinpoint failure modes and LLM patterns
What We Look For
This opportunity is a good fit if you are open to part‑time, non‑permanent projects. Ideally, contributors will have:
An under‑ or postgraduate qualification in an Arts‑based subject (English, Creative Writing, Journalism, MFL, Psychology, Cognitive Science) or a related field, or work experience at an equivalent level; or 1+ years' experience in Conversational AI Testing, Narrative Design, or Adversarial Model Testing
C2‑level English (CPE, TOEFL 114+, IELTS 8.0 or above)
Nice to have
Conversational UX / dialogue design experience
An understanding of prompt engineering or LLM evaluation
Experience with QA testing for complex systems
A background in narrative design, interactive fiction, or screenwriting
A qualification in, or professional experience with, behavioural research, psychology, or linguistics
Demonstrated familiarity with LLM behaviour, failure modes, and evaluation concepts
Experience working with structured guidelines, rubrics, or annotation frameworks
You will receive training in our guidelines and in how to create structured, focused conversations that meet the project's goals. You will also be assigned a mentor who will guide you through your first conversations and provide clear, actionable feedback to support your improvement.
How It Works
Apply → Pass qualification(s) → Join a project → Complete tasks → Get paid
Project time expectations
For this project, tasks are estimated to require around 10-20 hours per week during active phases, based on project requirements. This is an estimate, not a guaranteed workload, and applies only while the project is active.
Payment
Paid contributions, with rates up to $30/hour*
Fixed project rate or individual rates, depending on the project
Some projects include incentive payments
Note: Rates vary based on expertise, skills assessment, location, project needs, and other factors. Higher rates may be offered to highly specialized experts. Lower rates may apply during onboarding or non-core project phases. Payment details are shared per project.