Our Technology team is the backbone of our company: constantly creating, testing, learning and iterating to better meet the needs of our customers. If you thrive in a fast-paced, ideas-led environment, you’re in the right place.
Why This Job’s a Big Deal
Enjoy working with big data? We do too! In fact, data lies at the very core of our business. As a data engineer, you will work on systems that generate and serve billions of events each day. With the most advanced big data technologies at your fingertips, you will leverage data to understand, meet, and anticipate the needs of our customers.
In This Role You Will Get To
- Work with state-of-the-art data frameworks and technologies such as Kubernetes, Kafka, Terraform, Dataflow, and Cloud Dataproc
- Collaborate cross-functionally with various teams to create solutions that handle large volumes of data
- Work with the team to set and maintain standards and development practices
- Advocate for quality and continuous improvement
- Modernize current data systems into cloud-enabled data and analytics solutions
- Drive the development of cloud-based data lakes, hybrid data warehouses, and business intelligence platforms
- Improve data ingestion models, ETL jobs, and alerting to maintain data integrity and availability
- Build data pipelines to ingest structured and unstructured data
- Gain hands-on experience with new data platforms and programming languages
- Analyze product performance and customer acquisition data, and provide data-supported recommendations for improvement
- Design, build, and support resilient, production-grade applications and web services
Who You Are
- Bachelor’s degree in Computer Engineering/Computer Science.
- 5+ years of work experience in Software Engineering and development.
- Experience with data transformation, ETL, and data analytics using cloud-native technologies.
- Strong experience with big data technologies such as Hadoop, Spark, Hive, and Presto.
- Strong analytical and communication skills.
- Experience working with large, disconnected and/or unstructured datasets.
- Experience building and optimizing data pipelines, architectures, and data sets using cloud-native technologies.
- Strong experience with containers and container orchestration (Kubernetes, Docker).
- Experience with high-throughput batch processing and/or streaming systems (e.g., Kafka, Spark).
- Strong experience with relational SQL and NoSQL databases.
- Experience designing, building and maintaining RESTful APIs
- Experience with code and design reviews; familiarity with Git, Jenkins, and common CI/CD tools.
- Proficiency in object-oriented and/or functional scripting languages such as Python, Java, or Scala.
- Demonstrated history of living the values important to Priceline: Customer, Innovation, Team, Accountability, and Trust. Unquestionable integrity and ethics.
- Experience with strategies such as indexing, machine-learned ranking, clustering, and distributed computing concepts.