Cantina

Inference Engineer, Video AI Job at Cantina in San Francisco

Cantina, San Francisco, CA, US, 94199


A bit about Cantina:

Cantina, founded by Sean Parker, is a new social platform with the most advanced AI character creator. Build, share, and interact with AI bots and your friends directly in the Cantina or across the internet.

Cantina bots are lifelike, social creatures, capable of interacting wherever humans go on the internet. Recreate yourself using powerful AI, imagine someone new, or choose from thousands of existing characters. Bots are a new media type that offers creators a way to share infinitely scalable, personalized content experiences, combined with seamless group chat across voice, video, and text.

If you're excited about the potential AI has to shape human creativity and social interactions, join us in building the future!

A bit about the role: We're looking for an Inference Engineer who specializes in productionizing and hosting video AI models at scale. You'll be responsible for taking cutting-edge neural networks from research to production, building robust inference infrastructure, and optimizing model performance for real-time applications. This role focuses on the deployment and serving of large video models.

As an Inference Engineer, you will:
  • Deploy video AI models to production - Take research models and build production-ready inference endpoints with APIs, ensuring efficient operation across cloud infrastructure.
  • Maintain and optimize inference systems - Debug complex model-serving issues, optimize latency, monitor system health, and ensure 99.9% uptime for AI-powered features.
  • Implement model optimizations - Work with neural network architectures including diffusion networks, VAEs, and transformers. Apply streaming optimizations and understand video model architectures to implement effective performance improvements.
  • Manage inference infrastructure - Leverage containerization with Docker, cloud storage solutions like S3, and cluster computing to build scalable model serving infrastructure.
  • Collaborate with research teams - Work closely with AI researchers to understand model requirements, architectural constraints, and optimization opportunities for new video generation models.
A bit about you:
  • 2+ years of ML engineering experience with focus on model inference and deployment
  • Strong understanding of neural network architectures, particularly diffusion networks, VAEs, and transformer models
  • Experience with video and image models - Understanding of how video/image generation models work, their architectures, and optimization strategies specific to video processing
  • Multi-GPU inference expertise - Experience running model components across multiple GPUs, implementing parallel processing strategies for large models
  • Production model hosting experience - Track record of deploying and maintaining ML models in production environments, including streaming and real-time inference
  • Experience with containerization (Docker), AWS, and cluster computing environments
  • Familiarity with machine learning frameworks (PyTorch, TensorFlow)
  • Experience with inference platforms and model serving solutions
Technical Stack You'll Work With:
  • Cloud: AWS (S3, DynamoDB), Kubernetes clusters
  • ML Infrastructure: Model serving platforms, Docker
  • Languages: Python
  • Frameworks: PyTorch, TensorFlow
  • Models: Video generation models, diffusion networks, VAEs, transformers
  • Optimization: Multi-GPU inference, real-time processing techniques

Pay Equity:

In compliance with Pay Transparency Laws, the base salary range for this role is $175,000-$225,000 for those located in the San Francisco Bay Area, New York City, and Seattle, WA. When determining compensation, a number of factors will be considered, including skills, experience, job scope, location, and competitive compensation market data.

Benefits:
  • Health Care - 99% of medical, vision, and dental premiums are paid by Cantina, plus a One Medical membership.
  • Monthly Wellness Stipend - $500/month to use on whatever you'd like!
  • Rest and Recharge - 15 PTO days per year, 10 sick days, all Federal holidays, and 2 floating holidays.
  • 401(K) - Eligible to participate on day one of employment.
  • Parental Leave & Fertility Support
  • Competitive Salary & Equity
  • Lunch and snacks provided for in-office employees.
  • WFH equipment provided for full-time hybrid/remote employees.