
Snowflake Developer
Tata Consultancy Services, Atlanta, GA, United States
Responsibilities include:
Guide the team in migrating from an on-premises Cloudera environment to the Azure cloud.
Design and implement scalable data lake solutions using Snowflake and Databricks, and develop and optimize data pipelines for ingestion, transformation, and storage.
Manage data governance, quality, and security across cloud environments; implement performance tuning, automation, and CI/CD for data workflows.
Collaborate with cross‑functional teams to support cloud migration activities.
Cloudera Cluster Management
Install, configure, manage, and monitor Cloudera Hadoop clusters, ensuring high availability, performance, and security, including management of HDFS, YARN, and other ecosystem components.
Performance Optimization
Tune Hadoop, Hive, and Spark jobs and configurations for optimal performance, efficiency, and resource utilization; optimize queries, manage partitions, and leverage in-memory capabilities.
Troubleshooting and Support
Diagnose and resolve issues related to Linux servers, networks, cluster health, job failures, and performance bottlenecks; provide on‑call support and collaborate with other teams to ensure smooth operations.
Security, Governance, and Secrets Management
Implement and manage security measures within the Cloudera environment, including Kerberos, Apache Ranger, and Atlas, to ensure data governance and compliance.
Set up and manage HashiCorp Vault for secure keys and secrets management.
Utilize CyberArk for privileged access management and secure administrative tasks on the cluster.
Data and Application Migration
Migrate Hadoop, Hive, and Spark data and applications to Azure cloud services such as Azure Synapse Analytics, Azure Databricks, or Snowflake, ensuring data integrity, performance tuning, and validation.
Automation and Scripting
Develop scripts (shell, Ansible, Python) for automating administrative tasks, deployments, and monitoring; work with users to develop, debug, and optimize Hive, Spark, and Python programs that connect to the Cloudera environment.
Documentation
Create and maintain documentation for system configurations, operational procedures, and troubleshooting knowledge bases.
Vendor Collaboration
Work closely with the Cloudera vendor to stay current with the latest releases, perform upgrades, and address vulnerabilities.
Salary Range: $110,000–$120,000 per year.