
Resident Solutions Architect-Public Sector
Databricks Inc., Boston, MA, United States
Overview
As a Big Data Solutions Architect (Resident Solutions Architect) on our Professional Services team, you will work with clients on short- to medium-term engagements addressing big data challenges on the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, as well as training and other technical work that helps customers get the most value from their data. RSAs are billable and complete projects to specification with excellent customer service. You will report to the regional Manager/Lead. U.S. citizenship is required for this position to comply with U.S. federal government requirements.
Responsibilities
- Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionizing customer use cases
- Work with engagement managers to scope a variety of professional services work with input from the customer
- Guide strategic customers as they implement transformational big data projects, including end-to-end design, build and deployment of industry-leading big data and AI applications
- Consult on architecture and design; bootstrap or implement customer projects which lead to a customer’s successful understanding, evaluation and adoption of Databricks
- Provide an escalated level of support for customer operational issues
- Collaborate with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer needs
- Work with Engineering and Databricks Customer Support to provide product and implementation feedback and guide rapid resolution for engagement-specific product and support issues
Qualifications
- 6+ years of experience in data engineering, data platforms, and analytics
- Comfortable writing code in either Python or Scala
- Working knowledge of two or more cloud ecosystems (AWS, Azure, GCP) with expertise in at least one
- Deep experience with distributed computing using Apache Spark™ and knowledge of Spark runtime internals
- Familiarity with CI/CD for production deployments
- Working knowledge of MLOps
- Experience designing and deploying performant end-to-end data architectures
- Experience with technical project delivery - managing scope and timelines
- Documentation and whiteboarding skills
- Experience working with clients and managing conflicts
- Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects
- Travel to customers 20% of the time
- Databricks Certification
Pay Range Transparency
Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role are listed below and represent the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on factors such as job-related skills, depth of experience, relevant certifications and training, and specific work location. The total compensation package may also include eligibility for annual performance bonus, equity, and benefits. For more information regarding which range your location is in, please visit the company page.
- Zone 1 Pay Range: $180,656–$248,360 USD
- Zone 2 Pay Range: $180,656–$248,360 USD
- Zone 3 Pay Range: $180,656–$248,360 USD
- Zone 4 Pay Range: $180,656–$248,360 USD
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all employees. For specific details on benefits offered in your region, please consult the company’s benefits resources.
Our Commitment to Diversity and Inclusion
We are committed to fostering a diverse and inclusive culture where everyone can excel. Our hiring practices are inclusive and adhere to equal employment opportunity standards. We do not discriminate on the basis of protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within the Employer’s discretion whether to apply for a U.S. government license for such positions, and the Employer may decline to proceed with an applicant on this basis alone.