*What will you do?*
Join us in the Procurement Execution Center (PEC) as a *Data Engineer Associate*, part of a diverse team of data and procurement professionals. In this role, you will support the end-to-end (E2E) management of our data, including ETL/ELT, DW/DL, data staging, data governance, and the different layers of data required to enable successful BI, reporting, and analytics for the PEC. You will work with multiple types of data spanning several functional areas of expertise, including fleet, MRO & energy, travel, and professional services, among others.
*How will you do it?*
- Deploy data ingestion processes through Azure Data Factory to load data models as required into Azure Synapse.
- Build, design, *and support* ETL/ELT processes with Azure Data Factory (ADF) and/or Python, which, once deployed, will run on daily and weekly schedules.
- Assemble large data sets that meet functional and non-functional business requirements.
- Develop data models that enable data visualization, reporting, and advanced analytics, striving for optimal performance across all data models.
- Maintain conceptual, logical, and physical data models along with their corresponding metadata.
- Use the DevOps pipeline deployment model, including automated testing procedures.
- Execute data stewardship and data quality across our data marts to cleanse, enrich, and correct issues in our data, using knowledge bases and business rules.
- Perform the data ingestion, cleansing, transformation, and coding of business rules needed to support annual procurement bidding activities.
- Perform other data engineering duties as assigned.
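The day-to-day work described above follows the classic extract-transform-load pattern. The sketch below is purely illustrative: the real pipelines would run in Azure Data Factory against Synapse, so the CSV payload, table name, and business rule here are hypothetical stand-ins using only the Python standard library.

```python
import csv
import io
import sqlite3

# Hypothetical raw procurement feed; a real source would be a file in a
# data lake or a Synapse staging area.
RAW_CSV = """supplier,category,spend
Acme,fleet,1200.50
Globex,travel,
Initech,mro,300.00
"""

def extract(text):
    # Extract: parse the raw feed into dictionaries, one per row.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: an assumed data-quality rule drops rows with missing
    # spend and casts the remainder to typed tuples.
    return [(r["supplier"], r["category"], float(r["spend"]))
            for r in rows if r["spend"]]

def load(rows, conn):
    # Load: land the cleansed rows in a staging table and report counts.
    conn.execute("CREATE TABLE IF NOT EXISTS stg_spend "
                 "(supplier TEXT, category TEXT, spend REAL)")
    conn.executemany("INSERT INTO stg_spend VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM stg_spend").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # 2 rows survive the quality rule
```

In production the same three stages map onto ADF activities (copy, data flow, stored procedure), with the quality rules maintained as reusable business-rule sets rather than inline code.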
*What are we looking for?*
- Bachelor’s degree in a related field (engineering, computer science, data science, or similar).
- 1+ years of relevant experience in data analytics, data engineering, software engineering, or other relevant data roles.
- SQL knowledge and experience working with relational databases.
- Knowledge of DW/DL concepts, data marts, data modeling, ETL/ELT, data quality/stewardship, distributed systems, and metadata management.
- Data manipulation with any programming language (Python + pandas, SQL, etc.) and/or ETL/ELT development experience (1+ years; SSIS or ADF preferred).
- Ability to resolve ETL/ELT problems by proposing and implementing tactical and strategic solutions.
- Experience with object-oriented/functional scripting languages: Python, Scala, C#, etc.
- Experience with NoSQL databases is a plus, to support the transition from on-premises to cloud.
- Excellent problem-solving, critical-thinking, and communication skills.
- Experience with Git/repo management *is a plus*.
- Due to the global nature of the role, proficiency in English is a must.
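To make the SQL and data-mart expectations above concrete, here is a minimal sketch of a dimensional query against a toy data mart: a fact table joined to a dimension and aggregated by category. All table and column names are invented for illustration; they do not reflect the PEC's actual schema.

```python
import sqlite3

# Hypothetical mini data mart: one fact table and one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_spend (category_id INTEGER, amount REAL);
INSERT INTO dim_category VALUES (1, 'fleet'), (2, 'travel');
INSERT INTO fact_spend VALUES (1, 100.0), (1, 250.0), (2, 80.0);
""")

# Join the fact to its dimension and roll spend up by category name,
# the kind of query that feeds BI dashboards and reports.
totals = dict(conn.execute("""
    SELECT d.name, SUM(f.amount)
    FROM fact_spend f JOIN dim_category d USING (category_id)
    GROUP BY d.name
"""))
print(totals)  # {'fleet': 350.0, 'travel': 80.0}
```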