Position: Workload Analytics Engineer
Job Responsibilities: The candidate will work on a subset of the tasks specified below. These include, but are not limited to, the following:
- Analysis and characterization of data center workloads in areas such as compute-intensive applications, HPC, big data, AI, databases, and virtualization
- Performance benchmarking of DRAM, LPDRAM, HBM, GDDR, 3DXP, NVMe, and SSD using both microbenchmarks and the aforementioned data center applications and benchmarks
- Creating workload traces from data center applications and replaying those traces against the deep-memory/storage product portfolio
- Creating emulation platforms to test and evaluate new and emerging technologies, such as CXL, to study CXL-connected DRAM and other products
- Studying DDR4/5 SKUs in terms of densities, data rates, and configurations for different CPUs to understand TCO
- Creating reference system architectures and compute node configurations based on combinations of Micron products for a given workload to understand TCO
- Studying system balance ratios for DRAM/HBM/GDDR/3DXP/SSD in terms of capacity and bandwidth, memory expansion/replacement analysis vis-à-vis DRAM/HBM/3DXP/NVMe, and the interplay between these products to understand TCO
- Studying memory/core, byte/FLOP, and memory bandwidth/core/FLOP requirements for a variety of workloads to influence future products
- Studying data movement between CPUs, GPUs, and the associated memory subsystems (DDR, GDDR, HBM) in heterogeneous system architectures over interconnects such as PCIe, NVLink, and Infinity Fabric to understand data movement bottlenecks for different workloads (particularly AI)
- Analysis of system power consumption and thermal properties for the various workloads and memory/storage products
- Developing an automated testing framework through scripting
- Studying reliability properties, including single-bit and double-bit errors (SBEs and DBEs), on the various memory modules
- Customer engagements to obtain reliability logs and perform detailed analysis of failure properties
- Developing model-driven provisioning software tools to devise optimal system configurations
- Developing software systems for the concerted use of deep-memory and storage tiers (e.g., file systems, memory object systems)
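For a flavor of the synthetic memory bandwidth testing this role involves, the sketch below is a deliberately simplified, hypothetical single-threaded copy-bandwidth probe; it is not a Micron tool, and production microbenchmarks are multi-threaded, NUMA-aware, and far more careful about caching effects:

```python
import time

def measure_copy_bandwidth(size_mb: int = 256, iterations: int = 5) -> float:
    """Time a buffer-to-buffer copy and return the best observed
    bandwidth in GB/s. A rough, single-threaded stand-in for a
    synthetic memory bandwidth microbenchmark."""
    src = bytearray(size_mb * 1024 * 1024)
    best = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        dst = bytes(src)            # forces a full read of src and write of dst
        elapsed = time.perf_counter() - start
        moved = 2 * len(src)        # bytes read plus bytes written
        best = max(best, moved / elapsed / 1e9)
        del dst                     # release the copy before the next iteration
    return best

if __name__ == "__main__":
    print(f"best copy bandwidth: {measure_copy_bandwidth():.2f} GB/s")
```

Real tools in this space additionally pin threads to cores, control NUMA placement, and use non-temporal stores to isolate the memory subsystem from cache behavior.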
Education Requirements/Preferences:
- Bachelor's or Master's in Computer Science or a related field
- Strong computer systems foundations
- More than three years of software development and performance analysis/engineering experience
- Familiarity with synthetic memory bandwidth testing applications
- Familiarity with and knowledge of server system memory (DRAM) and processors
- Experience with performance analysis
- Understanding of the memory and storage hierarchy, including HBM and NVM
- Experience with two or more hypervisors and SDS stacks: VMware ESX (including vSAN), Microsoft Hyper-V (including Storage Spaces), QEMU/KVM with Ceph, Docker, Linux Containers
- Application-level benchmark tools for server and client virtualization (VDI), such as VMmark, Login VSI, RAWC, and HCIBench, and database benchmarks such as HammerDB and SysBench
- Ability to deploy and test using TPC (tpc.org) transaction processing and database benchmarks
- Knowledge of AI and ML/DL frameworks and MLPerf, and experience running them on CPU-GPU heterogeneous architectures
- Configuration, deployment, and operation of at least two operating systems and their associated internal storage stacks: Microsoft, VMware, Linux, and UNIX
- Knowledge of big data and data-intensive analysis tools (e.g., Apache Hadoop, HDFS)
- Strong software development skills using leading scripting and programming languages and technologies (Python, C, C++)
- Strong systems software development skills, demonstrated by developing solutions for memory, storage, CPU, and big data subsystems
- Familiarity with PCIe and NVLink connectivity
- Familiarity with system-level automation tools and processes
- Excellent oral communication skills
- Excellent written and presentation skills to document findings
We are an equal opportunity employer and value diversity at our company.