Data Engineer
Job description:
We are seeking an experienced Data Engineer with expertise in ETL (extract, transform, load) processes, data modeling, and Databricks to join our dynamic team. The ideal candidate must possess excellent knowledge of SQL and Python, as they will play a crucial role in managing, optimizing, and enhancing our data infrastructure.
Responsibilities:
* Develop and maintain robust, efficient, and scalable ETL processes for extracting, transforming, and loading data.
* Design, implement, and optimize data models to meet business requirements and ensure data integrity and accuracy.
* Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to understand data needs and provide data-related solutions.
* Perform data analysis to identify and resolve data quality issues, inconsistencies, and performance bottlenecks.
* Build and maintain the data pipeline architecture to enable data ingestion from various sources into the data warehouse or data lake.
* Work closely with stakeholders to understand business objectives, identify data-related opportunities, and provide actionable insights.
* Develop and maintain documentation covering data processes, data models, and system architecture.
* Collaborate with the data governance team to ensure compliance with data privacy regulations and to implement appropriate security measures.
Skills and qualifications:
* Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
* Proven work experience as a Data Engineer or in a similar role.
* Fluent in English.
* Strong knowledge of and experience with ETL processes, data modeling, and data warehousing concepts.
* Proficient in SQL, with the ability to write complex queries and optimize their performance.
* Expertise in the Python programming language and its associated data libraries (e.g., pandas, NumPy) for data manipulation and analysis.
* In-depth understanding of relational database systems (e.g., PostgreSQL, MySQL) and experience with query optimization techniques.
* Hands-on experience with Databricks for data engineering, including building and managing data pipelines and optimizing data processing workflows.
* Strong problem-solving and analytical skills, with the ability to troubleshoot and resolve data-related issues.
* Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
* Knowledge of data governance and data security principles.
* Attention to detail and the ability to adhere to project timelines and deadlines.
Nice-to-have skills:
* Experience with Informatica Intelligent Cloud Services (IICS) or similar cloud-based integration platforms.
* Experience designing and developing data integration workflows using IICS or similar tools.
* Experience with data integration performance tuning and optimization.
* Familiarity with the Azure cloud platform and experience with its data services (e.g., Azure Data Factory).
* Experience with big data technologies (e.g., Hadoop, Spark) and distributed computing frameworks.