You will work with our data team (mostly data engineers and analytics engineers, but also data scientists and data analysts), alongside backend engineers and SREs, to help us evolve our data platform and take it to the next level.
What will you do
* Work in a multi-cloud environment (GCP and AWS) with cutting-edge technologies such as Apache Airflow, Pub/Sub, Snowflake, and Kubernetes, using Python as your main programming language.
* Build and maintain high-quality, reliable, and robust data pipelines, written in Python and deployed on Kubernetes and Airflow, that turn data streams into valuable information.
* Contribute to the development of new features and tools.
* Support our data architecture, which is based on real-time streaming and batch-processing solutions, monitoring and maintaining the platform and its pipelines to guarantee optimal performance. Help implement strategies to optimize the performance and scalability of our data systems.
* Ensure that the data architecture complies with security requirements.
* Contribute to maintaining our continuous integration and deployment (CI/CD) pipelines.
Who we are looking for
* Bachelor's degree in Computer Science or a similar field
* 3-5 years of experience in data or software engineering environments
* Experience with NoSQL databases such as MongoDB (highly desirable)
* Experience with Bash scripting, Docker, Kubernetes, and Linux environments (highly desirable)
* Coding proficiency in Python (required)
* Experience with Google Cloud Platform (required)
* Hybrid scheme: 1-2 days per week on-site at our office near Reforma (required)
Seniority level
Entry level
Employment type
Full-time
Job function
IT Services and IT Consulting