The Data Engineer will build and support GCP-based data pipelines, focusing on data governance and optimizing data architecture to improve operational excellence and data delivery for various stakeholders.
Job Description
Are You Ready to Make It Happen at Mondelēz International?
Join our Mission to Lead the Future of Snacking. Make It Uniquely Yours.
In this role, you will support the day-to-day operations of our GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that an optimal data delivery architecture is applied consistently across ongoing projects.
How you will contribute
A key aspect of the MDLZ Data Hub platform on Google BigQuery is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will help ensure the robust operation of the pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for the different data-driver horizons (0-6 months vs. more than 6 months), ensuring consistent input to o9.
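As a purely illustrative example, the PySpark sketch below shows one way such a harmonization step might look. The bucket path, column mappings, target tables, and the six-month horizon rule are all hypothetical assumptions for illustration, not the actual MDLZ Data Hub or o9 design.

```python
# Hypothetical sketch: harmonize a market-specific inbound feed into a
# standardized layout and split it by planning horizon before loading to
# BigQuery. All names, columns, and the horizon rule are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("harmonization_sketch").getOrCreate()

# Inbound demand-driver feed whose layout varies by market (assumed example).
raw = spark.read.parquet("gs://example-bucket/inbound/demand_drivers/")

# Map the market-specific columns onto one standardized layout.
standardized = raw.select(
    F.col("cust_id").alias("customer_id"),
    F.col("sku").alias("product_id"),
    F.to_date(F.col("plan_dt")).alias("plan_date"),
    F.col("qty").cast("double").alias("quantity"),
)

# Split by planning horizon: 0-6 months vs. more than 6 months out.
cutoff = F.add_months(F.current_date(), 6)
near_term = standardized.filter(F.col("plan_date") <= cutoff)
long_term = standardized.filter(F.col("plan_date") > cutoff)

# Land each slice in its own BigQuery table for downstream o9 consumption
# (requires the spark-bigquery connector on the cluster).
for frame, table in [(near_term, "drivers_0_6m"), (long_term, "drivers_6m_plus")]:
    (frame.write.format("bigquery")
        .option("table", f"example_project.planning.{table}")
        .option("writeMethod", "direct")
        .mode("overwrite")
        .save())
```

Keeping the two horizons in separate tables is one simple way to guarantee that each feed into o9 stays consistent even when the inbound layouts differ by market.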
What you will bring
A desire to drive your future and accelerate your career. You will bring experience and knowledge in:
- 6+ years of overall industry experience, with a minimum of 6-8 years building and deploying large-scale data processing pipelines in a production environment
- Focus on excellence: practical experience with data-driven approaches, familiarity with applying a data security strategy, and familiarity with well-known data engineering tools and platforms
- Technical depth and breadth: able to build and operate data pipelines and data storage; has worked on big data architectures within distributed systems; is familiar with infrastructure definition and automation in this context; is aware of technologies adjacent to those they have used and can speak to the alternative technology choices to the ones made on their projects
- Implementation and automation of internal data extraction from SAP BW/HANA
- Implementation and automation of external data extraction from openly available internet data sources via APIs
- Data cleaning, curation, and enrichment using Alteryx, SQL, Python, R, PySpark, and SparkR
- Preparing consolidated data marts for use by data scientists and managing SQL databases
- Exposing data via Alteryx and SQL databases for consumption in Tableau
- Maintaining and updating data documentation
- Collaboration and workflow using a version control system (e.g., GitHub)
- Learning ability: self-reflective, hungry to improve, keen to drive their own learning, and able to apply theoretical knowledge in practice
- Building analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Working with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
- Data engineering concepts: experience working with data lakes, data warehouses, and data marts, and implementing ETL/ELT and slowly changing dimension (SCD) patterns
- ETL or data integration tools: experience with Talend is highly desirable
- Analytics: fluency in SQL and PL/SQL, and experience using analytics tools such as BigQuery
- Cloud experience: experience with GCP services such as Cloud Functions, Cloud Run, Dataflow, Dataproc, and BigQuery
- Data sources: experience working with structured sources such as SAP BW, flat files, and relational databases, as well as semi-structured sources such as PDF, JSON, and XML
- Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week.
- Data processing: experience with data processing platforms such as Dataflow or Databricks
- Orchestration: experience orchestrating and scheduling data pipelines with tools such as Airflow or Alteryx (a minimal Airflow sketch follows this list)
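To make the orchestration expectation concrete, here is a minimal, hypothetical Airflow sketch of a daily BigQuery load. The DAG id, schedule, project, dataset, and MERGE statement are illustrative assumptions, not the actual MDLZ pipeline.

```python
# Minimal, hypothetical Airflow sketch of a daily BigQuery load.
# DAG id, schedule, project, dataset, and SQL are illustrative assumptions.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="example_pos_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # daily at 02:00; older Airflow uses schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    # Incrementally merge newly staged rows into the curated table.
    merge_customer_pos = BigQueryInsertJobOperator(
        task_id="merge_customer_pos",
        configuration={
            "query": {
                "query": """
                    MERGE `example_project.curated.customer_pos` AS t
                    USING `example_project.staging.customer_pos` AS s
                    ON t.customer_id = s.customer_id
                      AND t.product_id = s.product_id
                      AND t.transaction_date = s.transaction_date
                    WHEN MATCHED THEN UPDATE SET t.quantity = s.quantity
                    WHEN NOT MATCHED THEN INSERT ROW
                """,
                "useLegacySql": False,
            }
        },
    )
```

Automatic retries with a delay, as configured above, are the kind of safeguard that keeps off-hours interventions for critical job runs to a minimum.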
More about this role
You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even redesigning our company's data architecture to support our next generation of products and data initiatives.
What you need to know about this position:
This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week.
What extra ingredients you will bring:
No Relocation support available
Business Unit Summary
Headquartered in Singapore, Mondelēz International's Asia, Middle East and Africa (AMEA) region is comprised of six business units, has more than 21,000 employees and operates in more than 27 countries including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and in offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.
Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Job Type
Regular
Digital Strategy & Innovation
Technology & Digital
Top Skills
Airflow
Alteryx
BigQuery
Dataflow
Dataproc
GCP
Pub/Sub
PySpark
Python
R
SparkR
SQL
Talend