At least 1 year of overall industry experience (working experience in a relevant field is preferred).
Experience building ETL pipelines; familiarity with extraction, transformation, loading, filtering, cleaning, joining, scheduling, monitoring, and data streaming.
Experience with data-processing tools (e.g., Spark, Hadoop).
Familiarity with data-warehousing tools and processes (e.g., Snowflake, Redshift, S3, BigQuery).
Familiarity with analytics and visualization tools is a plus.
Certifications in big-data tools are a plus.
Experience in a programming language such as Java, Python, or Scala.
Experience with relational SQL and NoSQL databases.
Familiarity with project-management processes (Scrum, Kanban) and tools (Jira, Asana).
Ability to work independently or collaboratively, with a proactive attitude.
A responsible, accountable work ethic.
Bachelor’s degree in Computer Science or equivalent.
Please attach your latest resume in PDF format while applying.