Job Description:
- Looking for a Spark developer who knows how to fully exploit the potential of our Spark cluster. You will clean, transform, and analyze vast amounts of raw data from various systems using Spark to provide ready-to-use data to our feature developers and business analysts.
- This involves both ad-hoc requests as well as data pipelines that are embedded in our production environment.
- Experience level: Intermediate (5-8 years of experience)
- Looking for a candidate strong in Spark, Scala, Java, and PL/SQL development.
- Experience in Big Data Hadoop, Hive, and Spark with hands-on expertise in the design and implementation of high data volume solutions.
- Strong in Spark Scala pipelines (both ETL and streaming)
- Proficient in Spark architecture
- Strong coding experience in Java and Python
- Experience in high-volume data environments
- Experience in data lake and data hub implementations
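To illustrate the kind of work described above, here is a minimal sketch of a Spark Scala batch ETL job: it reads raw CSV data, cleans and deduplicates it, and writes partitioned Parquet for downstream consumers. The paths, column names (`order_id`, `order_ts`), and object name are hypothetical placeholders, not a reference to any actual pipeline; production jobs would also need schema enforcement, error handling, and configuration management.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date, to_timestamp}

// Hypothetical batch ETL job: all paths and column names are illustrative.
object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Extract: read raw CSV, dropping rows that fail to parse.
    val raw = spark.read
      .option("header", "true")
      .option("mode", "DROPMALFORMED")
      .csv("/data/raw/orders")

    // Transform: normalize the timestamp, derive a partition column,
    // and remove duplicate orders.
    val clean = raw
      .withColumn("order_ts", to_timestamp(col("order_ts")))
      .withColumn("order_date", to_date(col("order_ts")))
      .dropDuplicates("order_id")

    // Load: write partitioned Parquet for feature developers and analysts.
    clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("/data/clean/orders")

    spark.stop()
  }
}
```

A streaming variant of the same shape would swap `spark.read` for `spark.readStream` and the final write for `writeStream` with a checkpoint location.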
Job Category: developer
Job Type: Remote