
Synechron

Senior Data Engineer (PySpark)

Posted 7 Days Ago
2 Locations
Senior level
Summary (AI-generated): Develop and optimize data processing workflows using PySpark, manage scalable data pipelines, and ensure data quality in big data environments.

Software Requirements:

  • PySpark
  • Hadoop
  • Spark
  • Python
  • Unix
  • HDFS
  • PyArrow
  • SQL

Overall Responsibilities:

  • Develop and optimize data processing workflows using PySpark.
  • Implement and manage scalable data pipelines within Hadoop and Spark ecosystems.
  • Collaborate with data scientists, analysts, and stakeholders to accurately capture data requirements and deliver effective solutions.
  • Ensure data quality and integrity across all processes and workflows.
  • Monitor and resolve performance issues in big data applications.
  • Stay updated with advancements in big data technologies.

Technical Skills (by category):

PySpark and Spark:

  • Proficiency in PySpark for sophisticated data processing tasks.
  • Experience in Spark development, including performance optimization techniques.

Hadoop and HDFS:

  • Advanced proficiency with Hadoop ecosystems.
  • Skilled in leveraging HDFS for efficient data storage and retrieval.

Programming and Libraries:

  • Strong programming foundation in Python.
  • Familiarity with supporting libraries such as PyArrow.

Unix:

  • Basic experience with Unix systems for executing data processing tasks.

Databases (SQL):

  • Solid working knowledge of SQL for database management and querying.
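The SQL querying skills listed above can be illustrated with the standard-library sqlite3 module; the `events` table and its columns are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("u1", 10.0), ("u1", 5.0), ("u2", 7.0)])

# Per-user totals: the same GROUP BY aggregation a warehouse query
# (or Spark SQL) would express.
rows = conn.execute(
    "SELECT user_id, SUM(amount) FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [('u1', 15.0), ('u2', 7.0)]
conn.close()
```

The identical SQL runs unchanged via `spark.sql(...)` against a registered DataFrame, which is why SQL fluency pairs naturally with the PySpark requirements above.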

Experience:

  • Minimum 7 years of experience in big data environments.
  • Extensive exposure to PySpark and Spark development.
  • Significant experience in data engineering and managing large datasets.

Day-to-Day Activities:

  • Develop, test, and deploy scalable data processing pipelines using PySpark.
  • Execute data extraction, transformation, and loading (ETL) activities.
  • Collaborate with cross-functional teams to gather requirements and deliver efficient solutions.
  • Optimize and troubleshoot existing data workflows and pipelines.
  • Document processes, workflows, and data models comprehensively.
  • Continuously monitor and enhance data processing performance.

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • Relevant certifications in big data technologies are advantageous.

Soft Skills:

  • Exceptional problem-solving and analytical skills.
  • Strong communication abilities to collaborate effectively with team members and stakeholders.
  • Demonstrated ability to work independently as well as in a team setting.
  • High attention to detail with a proactive approach to identifying and solving issues.
  • Flexibility and adaptability to evolving technologies and methodologies.

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT

Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, ‘Same Difference’, is committed to fostering an inclusive culture that promotes equality, diversity, and respect for all. We strongly believe that, as a global company, a diverse workforce helps us build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
