
Senior Machine Learning Engineer - AI Enabler Team Job at Jobgether in Italy
In this role, you will work at the forefront of AI and cloud infrastructure, tackling complex challenges related to large‑scale machine learning systems and intelligent automation. You will design and optimise advanced ML pipelines, enabling applications to dynamically select and deploy the most efficient models while balancing cost and performance. As part of a highly technical and innovation‑driven team, you’ll contribute to building next‑generation AI infrastructure in a fast‑paced, R&D‑focused environment. This position offers significant ownership, allowing you to shape architecture decisions and influence product direction. You will collaborate with cross‑functional experts while exploring cutting‑edge technologies in distributed systems, LLM optimisation, and cloud‑native environments. It’s an ideal opportunity for engineers passionate about solving real‑world AI scalability challenges.
Accountabilities
Design, build, and optimise machine learning pipelines for training and inference in large‑scale environments
Evaluate and improve large language model (LLM) performance, focusing on cost‑efficiency and output quality
Develop and implement advanced inference optimisation techniques, including quantisation, reduced‑precision inference, and performance tuning
Architect scalable solutions for distributed ML training and inference across cloud‑native infrastructures
Collaborate with cross‑functional teams to define and deliver innovative AI‑driven features
Contribute to system design decisions and continuously improve infrastructure automation and performance
Stay up to date with the latest advancements in AI, ML frameworks, and cloud technologies
Requirements
5+ years of hands‑on experience in machine learning, data science, or related fields with a strong project portfolio
Advanced programming skills in Python and solid software engineering fundamentals
Expertise in ML inference optimisation techniques and tools (e.g., vLLM, TensorRT, or similar)
Strong understanding of distributed systems, training pipelines, and checkpointing strategies
Experience with cloud platforms and containerised environments (e.g., Kubernetes)
Familiarity with real‑time data processing, APIs, and scalable system design
Ability to thrive in fast‑paced, ambiguous environments with competing priorities
Strong communication skills and fluency in English
Must be based in Europe (UTC+0 to UTC+3)
Benefits
Competitive salary package (€6,500 – €9,000 gross depending on experience)
Remote‑first work environment with flexible working conditions
Equity options and long‑term incentive opportunities
Private health insurance coverage
Dedicated learning budget for courses, certifications, and conferences
10% of work time allocated to personal projects or skill development
Annual hackathons and innovation‑driven initiatives
Team‑building activities and global company events
Equipment budget to support your productivity
Additional paid time off to support work‑life balance