
Data Science Expert - AI Content Specialist
Alignerr, New York, NY, United States
About the Role
What if your expertise in machine learning, statistics, and data engineering could directly influence how the world's most advanced AI systems think and reason? We're partnering with leading AI research labs to find data science experts who can stress-test, evaluate, and improve cutting-edge language models - and we need people who truly know their stuff.
This is a fully remote, flexible contract role designed for data scientists, ML engineers, and quantitative researchers who want to do meaningful, intellectually stimulating work on their own schedule.
Organization: Alignerr
Type: Hourly Contract
Location: Remote (Global)
Commitment: 10-40 hours/week
What You'll Do
Design Advanced Challenges - Craft rigorous data science problems spanning hyperparameter optimization, Bayesian inference, cross-validation strategies, dimensionality reduction, and more to expose the limits of AI reasoning
Author Ground-Truth Solutions - Develop precise, step-by-step technical solutions, including Python/R scripts, SQL queries, and mathematical derivations, that serve as the definitive benchmark for AI responses
Audit AI-Generated Code - Critically evaluate AI outputs built with libraries such as scikit-learn, PyTorch, and TensorFlow, assessing correctness, efficiency, and adherence to best practices
Identify Reasoning Failures - Spot and document logical errors in AI reasoning, such as data leakage, overfitting, or improper handling of imbalanced datasets, and provide structured, actionable feedback to improve model performance
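The reasoning failures listed above can be subtle. A classic example of data leakage is fitting a feature scaler on the full dataset before the train/test split, so test-set statistics contaminate training. A minimal sketch of the bug and its fix, assuming scikit-learn is available (the dataset and model here are illustrative, not from the posting):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, random_state=0)

# Leaky pattern: the scaler is fit on ALL rows, so mean/std include
# test-set information before the split ever happens.
scaler = StandardScaler().fit(X)
X_leaky = scaler.transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)

# Correct pattern: split first, then let a Pipeline fit the scaler
# on the training fold only.
X_tr2, X_te2, y_tr2, y_te2 = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_tr2, y_tr2)
print(round(model.score(X_te2, y_te2), 3))
```

Spotting the first pattern in AI-generated code, and explaining why the second is correct, is exactly the kind of audit this role involves.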
Who You Are
Hold or are pursuing a Master's or PhD in Data Science, Statistics, Computer Science, or a related quantitative field
Have strong foundational knowledge across core areas: supervised/unsupervised learning, deep learning, NLP, or big data technologies (Spark, Hadoop)
Communicate complex algorithmic concepts and statistical findings clearly and concisely in writing
Are highly precise and detail-oriented when reviewing code syntax, mathematical notation, and statistical conclusions
Are self-motivated and comfortable working independently in an asynchronous environment
No prior AI industry experience is required
Nice to Have
Experience with data annotation, data quality evaluation, or AI evaluation frameworks
Familiarity with production-level ML workflows: MLOps, CI/CD for models, model monitoring
Background in academic research or technical writing
Why Join Us
Work directly with industry-leading AI models and contribute to their development at a foundational level
Fully remote and asynchronous - work on your own schedule, from anywhere in the world
Freelance autonomy with consistent, intellectually engaging task-based work
Collaborate with a global network of experts contributing to the frontier of AI research
Potential for ongoing work and contract renewals as new projects launch