The role involves overseeing the development of data engineering solutions, managing data pipelines, optimizing data systems and infrastructure, and leading a team to achieve operational goals.
Job Purpose and Impact
- The Supervisor II, Data Engineering job sets goals and objectives for the achievement of operational results for the team responsible for designing, building and maintaining robust data systems that enable data analysis and reporting. This job leads implementation of the end-to-end process to ensure that large sets of data are efficiently processed and made accessible for decision making.
Key Accountabilities
- DATA & ANALYTICAL SOLUTIONS: Oversees the development of data products and solutions using big data and cloud-based technologies, ensuring they are designed and built to be scalable, sustainable and robust.
- DATA PIPELINES: Develops and monitors streaming and batch data pipelines that facilitate the seamless ingestion of data from various sources, transform the data into information and move it to data stores such as data lakes and data warehouses.
- DATA SYSTEMS: Reviews existing data systems and architectures to lead identification of areas for improvement and optimization.
- DATA INFRASTRUCTURE: Oversees the preparation of data infrastructure to drive the efficient storage and retrieval of data.
- DATA FORMATS: Reviews and selects appropriate data formats to improve data usability and accessibility across the organization.
- STAKEHOLDER MANAGEMENT: Partners collaboratively with multi-functional data and advanced analytic teams to capture requirements and ensure that data solutions meet the functional and non-functional needs of various partners.
- DATA FRAMEWORKS: Builds complex prototypes to test new concepts and provides guidance to implement data engineering frameworks and architectures that improve data processing capabilities and support advanced analytics initiatives.
- AUTOMATED DEPLOYMENT PIPELINES: Oversees the development of automated deployment pipelines improving efficiency of code deployments with fit for purpose governance.
- DATA MODELING: Guides the team to perform data modeling in accordance with the data store technology to ensure sustainable performance and accessibility.
- TEAM MANAGEMENT: Manages team members to achieve the organization's goals, by ensuring productivity, communicating performance expectations, creating goal alignment, giving and seeking feedback, providing coaching, measuring progress and holding people accountable, supporting employee development, recognizing achievement and lessons learned, and developing enabling conditions for talent to thrive in an inclusive team culture.
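The pipeline responsibilities above (ingest from sources, transform raw data into information, move it to a data store) can be sketched as a minimal batch ETL flow. This is an illustrative sketch only: the CSV source, derived field and JSON sink are hypothetical stand-ins, not systems named in the role.

```python
import csv
import io
import json

# Hypothetical raw source: CSV records as they might arrive from an upstream system.
RAW_CSV = """order_id,amount,currency
1001,250.00,USD
1002,99.50,EUR
"""

def ingest(raw: str) -> list[dict]:
    """Ingest: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types and derive an example reporting field."""
    return [
        {
            "order_id": int(r["order_id"]),
            "amount": float(r["amount"]),
            "currency": r["currency"],
            "is_large_order": float(r["amount"]) >= 100.0,
        }
        for r in rows
    ]

def load(rows: list[dict]) -> str:
    """Load: serialize to newline-delimited JSON, standing in for a lake/warehouse write."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in rows)

if __name__ == "__main__":
    print(load(transform(ingest(RAW_CSV))))
```

In a production pipeline the same ingest/transform/load stages would typically be expressed as orchestrated tasks (for example, Airflow operators or Glue jobs) rather than in-process function calls.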
Qualifications
- Minimum requirement of 4 years of relevant work experience. Typically reflects 5 years or more of relevant experience.
- DATA ENGINEERING: Experience with data engineering on corporate finance data is strongly preferred.
- CLOUD ENVIRONMENTS: Familiarity with major cloud platforms (AWS, GCP, Azure).
- DATA ARCHITECTURE: Experience with modern data architectures, including data lakes, data lakehouses, and data hubs, along with related capabilities such as ingestion, governance, modeling, and observability.
- DATA INGESTION: Proficiency in data collection, ingestion tools (Kafka, AWS Glue), and storage formats (Iceberg, Parquet).
- DATA STREAMING: Knowledge of streaming architectures and tools (Kafka, Flink).
- DATA MODELING: Strong background in data transformation and modeling using SQL-based frameworks and orchestration tools (dbt, AWS Glue, Airflow). Experience with modeling concepts such as slowly changing dimensions (SCD) and schema evolution.
- DATA TRANSFORMATION: Familiarity with using Spark for data transformation, including streaming, performance tuning, and debugging with Spark UI.
- PROGRAMMING: Proficiency in Python, Java, Scala, or similar languages. Expert-level proficiency in SQL for data manipulation and optimization.
- DEVOPS: Demonstrated experience in DevOps practices, including code management, CI/CD, and deployment strategies.
- DATA GOVERNANCE: Understanding of data governance principles, including data quality, privacy, and security considerations for data product development and consumption.
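One of the modeling concepts named above, the Type 2 slowly changing dimension, keeps history by closing the current row's validity window and appending a new version when an attribute changes. A minimal in-memory sketch (the `customer` dimension, its fields, and the `scd2_upsert` helper are hypothetical; real implementations usually live in dbt snapshots or warehouse MERGE statements):

```python
from datetime import date

def scd2_upsert(dim: list[dict], key: str, new_row: dict, as_of: date) -> list[dict]:
    """Apply a Type 2 SCD change: expire the current version, append the new one.

    Each row in `dim` carries `valid_from`, `valid_to` and `is_current` columns.
    """
    for row in dim:
        if row[key] == new_row[key] and row["is_current"]:
            # No attribute changed: the dimension is already up to date.
            if all(row.get(k) == v for k, v in new_row.items()):
                return dim
            # Expire the current version by closing its validity window.
            row["valid_to"] = as_of
            row["is_current"] = False
    dim.append({**new_row, "valid_from": as_of, "valid_to": None, "is_current": True})
    return dim

# Usage: a customer moves city; the old row is retained with a closed window.
dim = [{"customer_id": 7, "city": "Pune",
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
dim = scd2_upsert(dim, "customer_id",
                  {"customer_id": 7, "city": "Kolkata"}, date(2024, 6, 1))
```

The design choice here is history preservation: point-in-time reporting can join facts to the dimension version that was current on the fact date, which a Type 1 overwrite would make impossible.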
Top Skills
Airflow
AWS
AWS Glue
Azure
dbt
Flink
GCP
Iceberg
Java
Kafka
Parquet
Python
Scala
Spark
SQL
Similar Jobs at Cargill
Food • Greentech • Logistics • Sharing Economy • Transportation • Agriculture • Industrial
The Senior Software Engineer designs and develops software applications, collaborates with cross-functional teams, leads automation efforts, and ensures high-quality code through testing and documentation.
Top Skills:
AWS Cloud Services, CI/CD Pipelines, Java Spring Boot, Postgres, Python, React
The Consultant for Data & Analytics Reporting analyzes complex datasets, creates reports and dashboards, and collaborates with teams to ensure data accuracy.
Top Skills:
DAX, Power BI, SQL
This role involves collecting, analyzing datasets, creating reports and dashboards, conducting statistical analysis, and ensuring data accuracy in a collaborative environment.
Top Skills:
Data Visualization Tools, Python, SQL, Statistical Software