
Responsibilities
We are looking for talented individuals to join our team in 2027. As a graduate, you will have opportunities to pursue bold ideas, tackle complex challenges, and unlock limitless growth. Launch your career where inspiration is infinite at our Company. Successful candidates must be able to commit to an onboarding date by the end of 2027. Please state your availability and graduation date clearly in your resume.

Team Introduction
We are the business owners of the AI Compute layer and DPU at ByteDance, and the creator and open-source maintainer of AIBrix, a Kubernetes-native control plane for large-scale LLM inference. We are building the next-generation AI-native computing stack, a vertically integrated system spanning hardware acceleration, distributed orchestration, and model serving, to power some of the largest AI workloads in the world.

Our organization, AI Compute + DPU, unifies three foundational layers:
- AI Compute & Orchestration: Planet-scale, cloud-native scheduling and serverless systems built on Kubernetes, operating across hundreds of clusters with hundreds of millions of containers and jobs daily.
- Inference Infrastructure: We support Seed business workloads (including SeedDance and other ByteDance models) as well as a wide range of open-source models. Through the open-sourcing of AIBrix, we are re-architecting LLM inference around KV cache systems, multimodal serving, advanced scheduling, and disaggregated execution, pushing the frontier of performance, cost efficiency, and latency.
- DPU & Hardware-Accelerated Systems: Next-generation software-hardware co-design across compute, networking, and storage, covering GPU virtualization, RDMA/DPDK-based networking, high-speed interconnects, and distributed storage acceleration.

Together, we operate a globally distributed, hyperscale AI infrastructure powering products such as Doubao/Dola and CapCut, with a rapidly growing fleet of accelerators and increasingly deep integration with open-source ecosystems.
What Makes This Different
This is not an incremental infrastructure team. We are redefining how AI systems are built and operated:
- Re-architecting LLM inference for heterogeneous, disaggregated environments
- Designing next-generation scheduling and caching primitives for AI workloads
- Bridging research breakthroughs with production systems at scale
- Building infrastructure that can self-optimize via AI (AI for systems)
- Driving cost efficiency at extreme scale (GPU, CPU, network, power)

This is a system-defining opportunity to work on problems that do not yet have established solutions.

Who We Are Looking For
Through our Global Frontier Recruitment Program, we are selectively hiring a very small number of PhD candidates (graduating 2026-2027) who demonstrate the potential to define technical directions, lead complex systems end-to-end, and create lasting impact in AI infrastructure. We are not simply looking for strong candidates; we are looking for candidates whose absence would be felt.

Topic Content
With the large-scale adoption of LLMs and AI agents, traditional cloud-native infrastructure can no longer meet the ultra-high performance and elasticity requirements of AI workloads. This topic conducts systematic research across the entire AI infrastructure stack:
1. Network and Observability: Research intelligent fault localization and root cause analysis for large-scale AI clusters, combined with intelligent tuning of time-series databases, to improve cluster stability.
2. Storage Systems: Develop serverless, high-performance elastic file systems and storage acceleration architectures specifically for AI scenarios, explore hardware-software co-optimization for DPUs, and overcome AI storage performance bottlenecks.
3. Data Center Power Scheduling: Research GPU/CPU/memory heterogeneous collaborative scheduling technologies, build a heterogeneous power orchestration system for AI agents, and address scheduling challenges including heterogeneous workloads and state dependencies.
4. Vector Retrieval: Optimize core vector retrieval technologies for LLM-powered applications, building a cloud-native distributed vector index engine to meet ultra-large-scale vector retrieval demands with low latency and low cost.
5. Intelligence and Agent Architecture: Explore automatic infrastructure optimization based on AI agent workflows, build a self-evolving business agent framework, and enable full-stack intelligent optimization through AI for Infra.

This topic aims to build a next-generation AI-native infrastructure that supports the deployment of LLMs and AI agents, improves resource utilization, reduces costs, supports elastic scaling, and drives the technological evolution of AI infrastructure.
Qualifications
Minimum Qualifications:
- Individuals who are completing or have recently completed a PhD in Software Development, Computer Science, Computer Engineering, or a related technical discipline, with a focus on distributed and ML systems.
- First-author publications in top venues (e.g., OSDI, SOSP, NSDI, NeurIPS, MLSys) with clear technical contributions.
- Strong system-building ability, with hands-on experience implementing or optimizing real systems beyond prototypes, and a solid understanding of compute, network architecture, and operating systems.
- Deep expertise in at least one of the following: LLM inference / AI-ML systems; system optimization for AI (e.g., scheduling, observability, resource management, high-performance networking); or software-hardware co-design.
- Demonstrated ownership of significant technical work (research or systems), with the ability to clearly articulate the problem defined, the decisions made, and why the outcome depended on their contribution.
- Strong programming skills and the ability to reason about complex system design.

Preferred Qualifications:
- A sustained track record of high-impact research, with evidence of influence (citations, follow-up work, or adoption in real systems).
- Experience taking ideas from research to production, including deployment, evaluation, and iteration in real environments.
- Deep expertise in frontier areas such as large-scale model training and inference; heterogeneous computing optimization across GPUs/DPUs/accelerators; software-hardware co-design (e.g., FPGA/ASIC) across networking, storage, and distributed compute; or high-performance networking (e.g., RDMA, NCCL, DPDK/SPDK) with hands-on experience in network virtualization (OVS, SR-IOV, eBPF).
- Proven ability to lead technical direction, not just contribute, e.g., defining a research agenda, owning a system component, or driving cross-functional decisions.
- Contributions to widely used open-source systems or infrastructure projects.
- Strong industry research experience or collaboration with leading labs.
- Ability to reason rigorously about system-level trade-offs (latency, throughput, cost, scalability) and translate them into practical designs.
Job Information
【For Pay Transparency】Compensation Description (Annually)
The base salary range for this position in the selected city is $202,160 - $368,220 annually.
Compensation may vary outside of this range depending on a number of factors, including a candidate’s qualifications, skills, competencies and experience, and location. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives, and restricted stock units.
Benefits may vary depending on the nature of employment and the country work location. Employees have day one access to medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, wellbeing benefits, among others. Employees also receive 10 paid holidays per year, 10 paid sick days per year and 17 days of Paid Personal Time (prorated upon hire with increasing accruals by tenure).
The Company reserves the right to modify or change these benefits programs at any time, with or without notice.
For Los Angeles County (unincorporated) Candidates:
Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state, and local laws, including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Our company believes that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of the conditional offer of employment:
1. Interacting and occasionally having unsupervised contact with internal/external clients and/or colleagues;
2. Appropriately handling and managing confidential information including proprietary and trade secret information and access to information technology systems; and
3. Exercising sound judgment.
About Us
Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut and Pico as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.
Why Join ByteDance
Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect – and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.
As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Reasonable Accommodation
ByteDance is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://tinyurl.com/RA-request