Job Description
Experience: 4 - 9 Years
Position: 1 Year Contract (Extendable)
Skills: Spark, Scala, Python, PySpark, HDFS, MapReduce.
Job Description:
- Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading it into target destinations (a brief sketch follows this list)
- Experience with Big Data ecosystem components (Hive, ZooKeeper, Oozie, HDFS, etc.)
- Produce unit tests and documentation for Spark transformations and helper methods
- Ab Initio/ETL background preferred
- Good knowledge of Unix shell scripting
- CI/CD experience, with knowledge of GitHub and Jenkins
- Ability to learn new concepts, technologies and tools quickly
- Solid understanding of Agile Sprint methodologies
- Ability to communicate effectively in both a technical and non-technical manner
Primary Skill Set: Python, Scala, PySpark, Spark, Hive, Unix, SQL
Salary: Not Disclosed by Recruiter
Role Category: Programming & Design
Role: Software Developer