Selected intern's day-to-day responsibilities include:
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet functional/non-functional business requirements
3. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies
4. Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
5. Perform data analytics and present meaningful insights and recommendations to all other teams, both onshore and offshore
Skills Required:
1. Basic understanding of Big Data tools: Hadoop, Spark, Kafka, etc.
2. Excellent understanding of T-SQL programming and Microsoft SQL Server, or similar tools
3. Demonstrated knowledge of large, relational databases and data structures
4. Analytical and problem-solving skills to identify defects or inconsistencies
5. Understanding of ETL concepts
6. Communication and presentation skills with demonstrated ability to converse with all levels of staff and management
Skill(s) required
Python, SQL
Who can apply
Only candidates who:
1. are available for a full-time (in-office) internship
2. have relevant skills and interests
Perks
1. Certificate
2. Letter of recommendation
3. 5 days a week
Number of openings
5
Certificate: Will be provided at the end of the internship