Company Overview
GEP is a diverse, creative team of people passionate about procurement. We invest ourselves entirely in our clients' success, creating strong collaborative relationships that deliver extraordinary value year after year. Our clients include global market leaders with far-flung international operations, Fortune 500 and Global 2000 enterprises, and leading government and public institutions.
We deliver practical, effective services and software that enable procurement leaders to maximise their impact on business operations, strategy and financial performance. Those are just some of the things we do in our quest to build a beautiful company, enjoy the journey and make a difference. GEP is a place where individuality is prized and talent is respected. We're focused on what is real and effective. GEP is where good ideas and great people are recognised, results matter, and ability and hard work drive achievement. We're a learning organisation, actively looking for people to help shape, grow and continually improve us.
What You Will Do
Role Description (Roles & Responsibilities):
- Research, design and implement state-of-the-art ML systems using predictive modelling, deep learning, natural language processing and other ML techniques to help meet business objectives.
- Work closely with the product development/engineering team to develop solutions for complex business problems and product features.
- Handle Big Data-scale training and deployment of ML/NLP-based business modules and chatbots.
Primary Skills
What you should bring
- B.Tech/MS/PhD degree in Computer Science, Computer Engineering or a related technical discipline, with 3-4 years of industry experience in Data Science.
- Proven experience working with unstructured and textual data. Deep understanding of, and expertise in, NLP techniques (POS tagging, NER, semantic role labelling, etc.).
- Experience working with supervised and unsupervised ML models such as linear/logistic regression, clustering, support vector machines (SVM), neural networks, random forests, conditional random fields (CRF), Bayesian models, etc. The ideal candidate will have broad coverage of the different methods and models, and in-depth knowledge of some.
- Strong coding experience in Python, R and Apache Spark; Python skills are mandatory.
- Experience with NoSQL databases such as MongoDB, Cassandra, HBase, etc.
- Experience working with Elasticsearch is a plus.
- Experience working with Microsoft Azure is a plus, though not mandatory.
- Basic knowledge of Linux and shell scripting (e.g., Bash).