
Responsibilities
About the Team: Data AML is ByteDance's Machine Learning mid-platform, providing training and inference systems for recommendation, advertising, CV, speech, and NLP workloads across businesses such as Douyin, Jinri Toutiao, and Xigua Video. It supplies large-scale Machine Learning computing power to internal business units and conducts research on general and innovative algorithms for problems arising in these businesses. It also provides core Machine Learning and Recommender system capabilities to external enterprise customers through Volcano Engine, and pursues cutting-edge research in fields such as AI for Science and scientific computing.
Responsibilities:
1) Optimize resource efficiency in distributed orchestration and scheduling, increasing the scale of business/models supported per unit of computing power through engineering means:
a) Use and extend distributed scheduling frameworks in the Kubernetes/Godel ecosystem, make sound framework selections for different business scenarios, and optimize scheduling strategies for cluster utilization and uniformity based on each scenario's characteristics;
b) Integrate and extend AutoScaling for various models and business workloads, as well as automatic parallelization tasks; through load modeling and analysis of different models, automatically optimize models' resource requests, improve resource utilization efficiency at scale, and achieve global optimality;
c) Own the preemption/eviction function for services of different priorities; own resource borrowing and mixed (co-located) deployment across different resource types and clusters; own scheduling and load adaptation across multiple data centers, regions, and clouds.
2) Build the training system architecture for next-generation ultra-large, ultra-deep recommendation models:
a) Build a flexible and robust distributed training runtime around ultra-large-scale embeddings and ultra-large-scale synchronous GPU training;
b) Design and optimize distributed computing APIs and runtimes for future research paradigms of recommendation/advertising models (e.g., RL, fine-tuning, distillation);
c) Work with the platform to improve the diagnosability and usability of distributed training.
3) Build the online orchestration architecture for the next-generation Recommender system:
a) Build a robust and stable distributed model inference architecture for the online training scenario of ultra-large-scale embeddings;
b) Improve the usability of the recommendation/advertising model's online architecture and the MLOps process by integrating the business's research and experimentation models.
Qualifications
Minimum Qualifications:
- Bachelor's degree or above in Computer Science or a similar field of study;
- At least 5 years of experience and proficiency in at least one of Go/Python in a Linux environment, with excellent hands-on coding skills;
- Familiarity with open-source distributed scheduling frameworks such as Kubernetes (K8s), YARN (as well as Big Data frameworks in the Hadoop ecosystem such as Flink and MapReduce), Mesos, or Celery, plus rich practical and development experience in Machine Learning systems;
- Solid grasp of the principles of distributed systems, with hands-on experience in the design, development, and maintenance of large-scale distributed systems;
- Excellent logical analysis skills, able to reasonably abstract and decompose business logic;
- A strong sense of responsibility, good learning ability, communication skills, and self-motivation, with the ability to respond and act quickly;
- Good documentation habits, writing and updating work processes and technical documentation in a timely manner as required.
Preferred Qualifications:
- Familiarity with at least one mainstream Machine Learning framework (PyTorch / TensorFlow);
- Experience in one of the following areas: AI Infrastructure, HW/SW Co-Design, High Performance Computing, or ML Hardware Architecture (GPUs, accelerators, networking);
- Experience using or designing open-source training orchestration systems such as veRL, vLLM, Ray, or TFX; development experience with at least one of them is preferred.
Job Information
【For Pay Transparency】Compensation Description (Annually)
The base salary range for this position in the selected city is $212,800 - $450,000 annually.
Compensation may vary outside of this range depending on a number of factors, including a candidate’s qualifications, skills, competencies and experience, and location. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives, and restricted stock units.
Benefits may vary depending on the nature of employment and the country work location. Employees have day one access to medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, wellbeing benefits, among others. Employees also receive 10 paid holidays per year, 10 paid sick days per year and 17 days of Paid Personal Time (prorated upon hire with increasing accruals by tenure).
The Company reserves the right to modify or change these benefits programs at any time, with or without notice.
For Los Angeles County (unincorporated) Candidates:
Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state, and local laws including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Our company believes that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of the conditional offer of employment:
1. Interacting and occasionally having unsupervised contact with internal/external clients and/or colleagues;
2. Appropriately handling and managing confidential information including proprietary and trade secret information and access to information technology systems; and
3. Exercising sound judgment.
About Us
Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut and Pico as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.
Why Join ByteDance
Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect – and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.
As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Reasonable Accommodation
ByteDance is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://tinyurl.com/RA-request