
Senior Python Developer
360 Technology, Irving, TX, United States
We are seeking a highly experienced Senior Python Developer with deep expertise in PySpark and distributed data processing to lead and execute the migration of complex legacy data processing systems to scalable Python- and PySpark-based services. The role requires strong banking domain knowledge and the ability to work onsite with business and technical stakeholders to modernize legacy data and processing platforms.
Key Responsibilities
Analyze and understand complex legacy data processing logic used in banking systems.
Design and implement scalable Python- and PySpark-based data processing solutions using industry best practices.
Migrate business-critical logic related to banking operations such as payments, accounts, transactions, risk, or reporting into distributed data pipelines.
Develop batch and large-scale data processing jobs using PySpark.
Ensure functional parity, performance optimization, and data integrity during migration.
Optimize data transformations and refactor procedural logic into modular, scalable PySpark jobs.
Collaborate onsite with data engineers, architects, business analysts, and QA teams.
Perform unit testing, integration testing, and data validation post-migration.
Document migration approaches, code logic, and technical designs.
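To illustrate the "functional parity and data validation post-migration" responsibility above, here is a minimal stdlib-only sketch of comparing a legacy extract against its migrated counterpart; the record layout and function names are hypothetical, and in practice this check would run over PySpark DataFrames rather than Python lists.

```python
# Illustrative sketch only: post-migration parity check between a
# hypothetical legacy extract and its migrated counterpart.

def row_checksum(row):
    """Order-insensitive checksum of one record (dict of column -> value)."""
    return hash(tuple(sorted((k, str(v)) for k, v in row.items())))

def validate_parity(legacy_rows, migrated_rows):
    """Check that row counts and per-row checksums match between extracts."""
    if len(legacy_rows) != len(migrated_rows):
        return False, "row count mismatch"
    legacy = sorted(row_checksum(r) for r in legacy_rows)
    migrated = sorted(row_checksum(r) for r in migrated_rows)
    return legacy == migrated, "ok" if legacy == migrated else "checksum mismatch"

legacy = [{"acct": "A1", "amount": 100}, {"acct": "A2", "amount": 250}]
migrated = [{"acct": "A2", "amount": 250}, {"acct": "A1", "amount": 100}]
ok, reason = validate_parity(legacy, migrated)
```

Sorting the checksums makes the comparison order-independent, which matters because distributed jobs rarely preserve row order.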
Required Skills & Qualifications
10+ years of overall software development experience.
Strong hands-on experience with Python (3.x).
Extensive experience with PySpark and distributed data processing frameworks.
Strong understanding of Spark architecture (RDD, DataFrames, Spark SQL).
Experience building and optimizing large-scale ETL/data pipelines.
Strong SQL knowledge and query optimization skills.
Experience migrating legacy data processing systems to Python/PySpark-based pipelines.
Experience working with:
REST APIs
Object-oriented and functional programming in Python
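As a small illustration of the "object-oriented and functional programming in Python" skill listed above, here is a sketch combining an immutable dataclass with pure functions and a fold; the Transaction type and fee rule are hypothetical examples, not part of the role's actual codebase.

```python
from dataclasses import dataclass
from functools import reduce

# Illustrative only: combining OOP (an immutable dataclass) with
# functional style (pure functions, reduce). Names are hypothetical.

@dataclass(frozen=True)
class Transaction:
    account: str
    amount: float

def apply_fee(txn: Transaction, rate: float = 0.01) -> Transaction:
    """Pure function: return a new Transaction with a fee deducted."""
    return Transaction(txn.account, round(txn.amount * (1 - rate), 2))

def total(txns) -> float:
    """Fold a sequence of transactions into a single balance."""
    return reduce(lambda acc, t: acc + t.amount, txns, 0.0)

txns = [Transaction("A1", 100.0), Transaction("A2", 250.0)]
after_fees = [apply_fee(t) for t in txns]
```

Frozen dataclasses and pure transformation functions mirror the side-effect-free style that ports cleanly into PySpark transformations.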