Job Description
Be an integral part of an agile team that's constantly pushing the envelope to enhance, build, and deliver top-notch technology products.
As a Senior Lead Software Engineer at JPMorgan Chase within Corporate Technology, you will play a crucial role in an agile team dedicated to developing, enhancing, and delivering top-tier technology products in a secure, stable, and scalable manner. Your technical expertise and problem-solving skills will be instrumental in driving significant business impact and addressing a wide range of challenges across various technologies and applications. This role involves leading the development of a data pipeline application, a key application for migrating data from on-premises systems to the cloud.
Job Summary:
We are seeking a skilled and motivated Data Engineer to join our team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support our data-driven decision-making processes. This role requires a strong understanding of data architecture, data modeling, and ETL processes.
Job responsibilities:
- Design, develop, and maintain robust data pipelines and ETL processes to ingest, process, and store large volumes of data from various sources (a minimal pipeline sketch follows this list).
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
- Optimize and improve existing data systems for performance, scalability, and reliability.
- Implement data quality checks and validation processes to ensure data accuracy and integrity.
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption.
- Stay up-to-date with industry trends and best practices in data engineering and incorporate them into our processes.
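For illustration only, here is a minimal PySpark sketch of the kind of pipeline step described above; the source path, table, and column names (transactions, transaction_id, amount) are hypothetical and not part of this posting:

```python
# Minimal ETL sketch with a simple data-quality gate; all names/paths are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-ingest").getOrCreate()

# Extract: read raw CSV files landed from an upstream source (hypothetical path).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/transactions/")

# Transform: deduplicate, normalize types, and drop rows that fail basic validation.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Data-quality check: fail fast if cleansing removed every row.
if cleaned.count() == 0:
    raise ValueError("Data quality check failed: no valid rows after cleansing")

# Load: write partitioned Parquet for downstream consumers (hypothetical location).
cleaned.write.mode("overwrite").partitionBy("ingest_date").parquet(
    "s3://example-bucket/curated/transactions/"
)
```

In practice the quality gates are richer (null rates, schema drift, reconciliation counts) and the write target follows the team's data platform standards.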
Required qualifications, capabilities and skills:
- Formal training or certification in software engineering concepts and 3+ years of applied experience.
- 5+ years in application development using Java, Scala, or Python.
- Proven experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and relational databases (e.g., MySQL, PostgreSQL).
- Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms, especially AWS.
- Experience with infrastructure as code tools, particularly Terraform.
- Experience with Airflow or AWS MWAA (a minimal DAG sketch follows this list).
- Experience with containerization and orchestration tools, especially Kubernetes.
- Proficiency with AWS services like EKS, EMR, Lambda, DynamoDB, and ECS.
- Excellent problem-solving skills, attention to detail, and strong communication skills for team collaboration.
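As a rough illustration of the orchestration experience listed above, a minimal Airflow DAG sketch; the dag_id, schedule, and task callables are hypothetical placeholders, not a JPMorgan Chase pipeline:

```python
# Minimal Airflow DAG sketch: ingest then validate, run daily. Names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_ingest(**_):
    # Placeholder for an ingest step, e.g. submitting a Spark job on EMR or EKS.
    print("ingest step")


def run_quality_checks(**_):
    # Placeholder for row-count and null-rate validations on the ingested data.
    print("quality checks")


with DAG(
    dag_id="example_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=run_ingest)
    validate = PythonOperator(task_id="validate", python_callable=run_quality_checks)

    ingest >> validate
```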
Preferred qualifications, capabilities and skills:
- Knowledge of Hadoop, AWS, and Terraform concepts and frameworks.
- Familiarity with data warehousing solutions, especially Snowflake, and ETL tools (a brief loading sketch follows).
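As a brief illustration of the Snowflake familiarity listed above, a minimal sketch of loading staged files with the Snowflake Python connector; the account details, stage, and table names are hypothetical:

```python
# Minimal Snowflake load sketch: COPY INTO from a pre-defined external stage.
# Connection parameters, stage, and table names are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="EXAMPLE_WH",
    database="EXAMPLE_DB",
    schema="CURATED",
)

try:
    cur = conn.cursor()
    # Copy Parquet files from the stage into the curated table (hypothetical names).
    cur.execute(
        """
        COPY INTO curated.transactions
        FROM @example_stage/curated/transactions/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """
    )
finally:
    conn.close()
```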