Pay: $120-125/hr
Hybrid; 2 days per week onsite
Top Skills: Databricks, Python, SQL, Spark, AWS, Trino
Must be a US Citizen
Responsibilities:
• Design, develop, monitor, and maintain data pipelines in an AWS ecosystem with Databricks, Delta Lake, Python, SQL, and Starburst as the technology stack.
• Collaborate with cross-functional teams to understand data needs and translate them into effective data pipeline solutions.
• Establish data quality checks and ensure data integrity and accuracy throughout the data lifecycle.
• Automate testing of the data pipelines and configure as part of CI/CD.
• Optimize data processing and query performance for large-scale datasets within AWS and Databricks environments.
• Document data engineering processes, architecture, and configurations.
• Troubleshoot and debug data-related issues on the AWS Databricks platform.
• Integrate Databricks with other AWS services such as SNS, SQS, and MSK.
Qualifications:
• 5 years of experience in data engineering roles, with a focus on AWS and Databricks.
• Highly proficient with Databricks, Spark, Starburst/Trino, Python, PySpark, and SQL.
• Hands-on experience with GitLab CI/CD.
• Hands-on experience with AWS services such as S3, RDS, Lambda, SQS, SNS, and MSK.
• Strong SQL skills for performing data analysis and understanding source data.
• Experience with data pipeline orchestration tools.
24-04715