
Senior Specialist - Architecture

LTM, Charlotte, North Carolina, United States, 28245


Role description

Role:

- Own MVP scope, milestones, and stakeholder alignment
- Own the overall GenAI solution design for the MVP
- Align the business context (FRTB/MAR/GIRR) with the RAG architecture
- Ensure traceability, explainability, and MVP scope adherence

Key Outputs:

- End-to-end GenAI MVP architecture aligned to the FRTB subsection scope
- Clear traceability and explainability embedded in the MVP design

Job Summary:

Senior Specialist with 10 to 15 years of experience in AI and Generative AI within the Blueverse Generative AI cluster, focusing on developing, fine-tuning, and deploying advanced generative AI models.

Job Description:

- Develop and implement innovative AI and Generative AI solutions leveraging large language models (LLMs) such as GPT, LLaMA, and Claude
- Design and customize prompts and workflows to efficiently extract and process data from diverse unstructured sources, including emails, PDFs, and meeting notes
- Fine-tune AI models using advanced techniques such as LoRA to improve model accuracy and performance
- Collaborate with business stakeholders to translate complex requirements into scalable AI-driven solutions
- Utilize AI frameworks and libraries such as LangChain, Hugging Face Transformers, and the OpenAI APIs to build robust AI applications
- Ensure adherence to organizational policies, security standards, and compliance requirements during AI model development and deployment
- Continuously research and integrate emerging AI technologies and methodologies to enhance existing systems and workflows

Roles and Responsibilities:

- Build and optimize document extraction and generation workflows using generative AI models
- Extract structured information from unstructured inputs, including emails, PDFs, and meeting notes
- Create and maintain a comprehensive library of prompts and generalized extraction templates
- Evaluate model performance using task-specific metrics such as accuracy and completeness
- Collaborate closely with subject matter experts (SMEs) on iterative model improvements
- Deploy AI models on approved infrastructure, ensuring model isolation, encryption, and compliance with security policies
- Monitor, scale, and optimize AI models to meet performance, regulatory, and operational requirements
- Utilize enterprise integration tools and cloud platforms (Azure, AWS, GCP) to support AI model lifecycle management
- Implement containerization and orchestration technologies such as Docker and Kubernetes to enable scalable deployments
- Employ MLOps practices, including CI/CD pipelines, model versioning, monitoring with tools such as Prometheus and Grafana, and logging frameworks