Apply your skill set to fetch data from multiple online sources, cleanse it, and build APIs on top of it
Develop a deep understanding of our vast data sources on the web and know exactly how, when, and which data
to scrape, parse, and store
Work closely with Database Administrators to store data in SQL and NoSQL databases
Develop frameworks for automating and maintaining a constant flow of data from multiple sources
Work independently with little supervision to research and test innovative solutions
What You Need To Work With Us
Experience with SQL and NoSQL databases
Experience with multi-processing, multi-threading, and AWS/Azure
Strong knowledge of scraping tools and frameworks such as Python libraries (Requests, Beautiful Soup), Web-Harvest, and others
In-depth knowledge of algorithms and data structures; previous experience with web crawling is a must
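To give a flavor of the multi-processing/multi-threading requirement above, here is a minimal sketch of fetching several sources concurrently with Python's standard library. The `fetch` function and the URLs are hypothetical placeholders; in a real crawler the fetch step would issue HTTP requests (e.g. via the Requests library named in the requirements).

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Hypothetical fetch step: a real implementation would perform an
    # HTTP GET here; this stub just echoes the URL it was given.
    return f"payload from {url}"

# Placeholder source list standing in for the "multiple online sources".
urls = [
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/c",
]

# Threads suit I/O-bound fetching; multiprocessing would suit
# CPU-bound parsing of the downloaded pages.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fetch, urls))

print(results)
```

Whether threads, processes, or an async event loop fit best depends on where the workload is bound, which is exactly the judgment this role calls for.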
Qualifications and Experience Required: 1-5 years of relevant experience
Bachelor's or Master's degree in Computer Science, Computer Engineering, or Information Technology
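As an illustration of the scraping-and-parsing work described above, here is a minimal sketch that extracts links from an HTML page using only Python's standard library, so it runs anywhere; in practice the Beautiful Soup library named in the requirements would replace the hand-rolled parser. The sample HTML string is an assumption for demonstration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags -- a stand-in for the
    parsing step a library like Beautiful Soup would normally handle."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page content; a real crawler would download this first.
page = '<html><body><a href="/a">A</a> <a href="/b">B</a></body></html>'

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # -> ['/a', '/b']
```

The extracted links would then feed the storage layer (SQL/NoSQL) and the crawl frontier mentioned in the responsibilities.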