
Technical Sourcer - Research (Contract)
Luma AI, Seattle, WA, United States
As a Research Sourcer, you are a talent scout for the world's most brilliant minds. You will support the hyper-growth of our Research team, covering the full spectrum of the talent ecosystem: experienced industry hiring, university/PhD hiring, and conference-based sourcing.

Your responsibilities will include:
Strategic Pipeline Building: Identify and engage elite researchers in areas like Diffusion Models, Transformers, Multimodality, and RL.
Hyper-Growth Execution: Own the top of the funnel for senior industry experts, PhD interns, and new-grad researchers.
Conference Strategy: Support sourcing and mapping for top AI conferences to build long-term relationships with the global research community.
Collaborative Partnership: Work closely with Hiring Managers to calibrate on specific needs and build the sourcing strategy alongside the team.

You should have:
0.5+ years of experience in technical sourcing or recruiting. We are open to "rising stars" with a high ceiling for growth.
A solid understanding of the Foundation Model landscape (or a willingness to learn it faster than anyone else).
The ability to use data to track your progress, calibrate with Hiring Managers, and pivot your sourcing strategy based on real-time funnel health.
The ability to thrive in a "startup-speed" environment where you have to be proactive and build processes as you go.

Bonus points if you:
Have the ability (or a fierce desire) to navigate arXiv, Google Scholar, and OpenReview.
Can translate Luma's research goals into a compelling pitch, engaging experts in deep conversations about the future of Foundation Models.

Luma's mission is to build unified general intelligence that can generate, understand, and operate in the physical world. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision.
We are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.