Business Unit:
Cubic Corporation
Job Details:
Job Summary: Incumbents of this position will be responsible for designing, building, and optimizing data pipelines to expand the capabilities and products delivered by our analytics platform. Must be comfortable interfacing with and supporting multiple teams, and working with product managers, data engineers, data scientists, business analysts, and project managers at different regional offices across the globe. The ideal candidate has a passion for data, significant experience building production-level data pipelines, and enjoys working in a self-directed, semi-structured environment. Incumbents of this position will regularly exercise discretionary and substantial decision-making authority.
Essential Job Duties And Responsibilities
- Evaluating technology stacks, including data storage and databases
- Database maintenance, performance tuning, and query optimization
- Analyzing, profiling, and documenting data warehouses
- Writing technical documentation and business documents
- Implementing best practices, e.g., using code repositories, documenting procedures, and refactoring code to be more robust and componentized
- Implementing complex big data projects with a focus on collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into insights using multiple platforms
- Building configurable, parametrized, scalable, robust pipelines/ETL workflows
- Scraping, ingesting, and integrating data from multiple sources and databases, including web APIs, Protobuf feeds, and SQL databases
- Data exploration with Tableau and Power BI
- End-to-end pipeline QA
- Creating Azure cloud infrastructure and services
- Deploying applications and libraries to cloud platforms
- Feature extraction and generation
- Using Python to clean data, extract features, and build models using packages such as scikit-learn, NumPy, and pandas
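To illustrate the last duty, here is a minimal sketch of the kind of data cleaning, feature extraction, and modeling work involved; the trip dataset and column names are hypothetical, chosen only for illustration.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical raw trip data with a missing value, as might arrive from an API feed.
raw = pd.DataFrame({
    "trip_id": [1, 2, 3, 4],
    "distance_km": [5.0, None, 12.5, 3.2],
    "duration_min": [12.0, 8.0, 30.0, 7.5],
})

# Clean: drop rows missing the input we need.
clean = raw.dropna(subset=["distance_km"])

# Feature extraction: derive average speed in km/h.
clean = clean.assign(speed_kmh=clean["distance_km"] / (clean["duration_min"] / 60))

# Model: fit a simple regression predicting duration from distance.
model = LinearRegression()
model.fit(clean[["distance_km"]], clean["duration_min"])
pred = model.predict(pd.DataFrame({"distance_km": [10.0]}))
```

The same clean/derive/fit pattern scales up inside a pipeline stage, with the DataFrame replaced by a Spark or Databricks table.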
Technologies We Use
- Azure Cloud
- Data Factory
- Synapse Analytics
- Databricks
- Analysis Services
- Azure Functions
- Azure DevOps
- Power BI
- Oracle
- Kafka
- Angular
- JIRA/Confluence
Education
Bachelor’s degree in computer science or equivalent
Experience
- 2+ years of hands-on cloud data engineering experience
- Working Python and SQL knowledge
- Experience bringing data pipelines into production
- Experience building processes supporting data transformation, data structures, metadata, dependencies, and workload management
- Database design, schema definition, and database optimization
- Experience building elastic, scalable APIs
- Experience developing in an Agile environment
- Familiarity with cloud computing (Azure and/or AWS), PaaS, and IaaS
- Experience developing Python scripts
- Experience using BitBucket or other Git tools for version control
- Experience with Linux, developing shell scripts and basic system administration functions
Knowledge, Skills And Abilities
- Strong analytical and problem-solving skills, attention to detail, critical thinking, and creativity
- Excellent written and verbal communication skills; able to present complex information clearly and in a manner appropriate to the audience
- Prior experience working in the transportation or logistics industries is a plus
The description provided above is not intended to be an exhaustive list of all job duties, responsibilities and requirements. Duties, responsibilities and requirements may change over time and according to business need.
Worker Type: Employee