Call for Papers

Important Dates

• Submissions due: February 8th, 2026
• Notification of acceptance: February 22nd, 2026
• Camera-ready deadline: March 16th, 2026
• Workshop: March 22nd, 2026

All deadlines are at 11:59pm Anywhere on Earth (AoE).

Overview

Digital transformation is underway across all industries, and running businesses on top of cloud services (e.g., SaaS, PaaS, IaaS) is at the core of this transformation. However, the large scale and high complexity of cloud services pose great challenges to the industry: operating cloud services at scale requires significant compute resources, domain knowledge, and human effort. Artificial intelligence and machine learning (AI/ML) play an important role in building and operating cloud services efficiently and effectively. We envision that, with advances in AI/ML and related technologies such as Large Language Models (LLMs), the cloud industry can make significant progress in the following aspects while sustaining the exponential growth of the cloud:

• Cloud Efficiency: We have an opportunity to leverage service characteristics for optimal scaling, scheduling, and packing to reduce the overall cost and carbon footprint.
• Resilient Cloud Services: Cloud services will have built-in capabilities for self-monitoring, self-diagnosis, and self-healing – all with minimal human intervention.
• Intelligent Ops: Users can easily use, maintain, and troubleshoot their workloads, or get efficient support, on top of the underlying cloud service offerings.
• AI Efficiency: In the last two years, we have seen massive adoption of LLMs. Given the intense resource demands of LLM training and inference, cloud infrastructures (both hardware and software) are undergoing a massive transformation. Efficient training and inference will be key to increased adoption and long-term sustainability.

We are still at an early stage of realizing this vision. We believe it is urgent to drive and accelerate AI/ML for efficient and manageable cloud services through collaborative efforts across multiple areas, including but not limited to artificial intelligence, machine learning, software engineering, data analytics, and systems.

This workshop provides a forum for researchers and practitioners to present the state of research and practice in AI/ML for efficient and manageable cloud services, and to network with colleagues. Key topics of interest include:

• Resource scheduling and optimization
• Predictive capacity management
• Resource allocation and packing
• Service quality monitoring and anomaly detection
• Deployment and integration testing
• System configuration
• Hardware/software failure prediction
• Auto-diagnosis and problem localization
• Efficient ML training and inference
• Using LLMs for Cloud Ops
• Incident management
• Auto service healing
• Data center management
• Customer support
• Security and privacy in cloud operations

Attendance

For each accepted paper, at least one author must attend the workshop and present the paper.

Submission Instructions

The workshop invites submission of manuscripts with original research results and contributions that have not been previously published and that are not currently under review by another conference or journal. Submissions will be assessed based on their novelty, technical quality, potential impact, interest, clarity, relevance, and reproducibility. Submitted papers will be peer-reviewed and selected for oral or poster presentation. Accepted papers will be listed on the workshop's website. We invite the following types of contributions:

Technical Papers – Describing original research contributions, no more than six pages long.
Abstracts – A one-page abstract describing early-stage ideas and results, which will be presented as a lightning talk at the workshop.
Project Showcase – Describing innovative solutions, tools, and deployed systems, no more than two pages long.
Dataset Showcase – Describing relevant datasets that are publicly available and can be used by the research community, no more than two pages long.

Please indicate the submission category at the beginning of your paper's title (e.g., Technical Papers: Your Paper Title).

Submissions must be double-blind and in PDF format. Papers should be formatted according to the two-column ACM proceedings style. There is no page limit for references, but appendices are not allowed. Submissions must be written in English, render without error when viewed using standard tools (e.g., Acrobat Reader), and print on US Letter paper. Citations should be in numeric style (e.g., [1]). Please ensure that figures and tables are legible in grayscale. Papers that exceed the length requirement or deviate from the expected format will be rejected.

This archive contains a LaTeX class file that follows the prescribed submission format. The aiops26-template.tex file in the archive has the correct defaults for AIOps 2026 submissions. Specifically, the first two lines should be:

        \documentclass[sigconf, 10pt, anonymous, nonacm]{acmart}
        \settopmatter{printfolios=true, printccs=false, printacmref=false}

Please do not modify the acmart.cls file or settings to try to sneak in additional space.
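For reference, a minimal submission skeleton built on these settings might look like the sketch below. All placeholder text (author, institution, title, abstract), the sample.bib file name, and the samplekey citation key are purely illustrative and not prescribed by the workshop:

        \documentclass[sigconf, 10pt, anonymous, nonacm]{acmart}
        \settopmatter{printfolios=true, printccs=false, printacmref=false}

        \begin{document}

        \title{Technical Papers: Your Paper Title}

        % Real author details go here; the `anonymous' class option hides
        % them in the generated PDF for double-blind review.
        \author{Jane Doe}
        \affiliation{%
          \institution{Example University}
          \country{Country}}

        % In acmart, the abstract is given before \maketitle.
        \begin{abstract}
          One-paragraph summary of the contribution.
        \end{abstract}

        \maketitle

        \section{Introduction}
        acmart produces numeric citations by default, e.g.,~\cite{samplekey}.

        % `sample.bib' is a placeholder bibliography file.
        \bibliographystyle{ACM-Reference-Format}
        \bibliography{sample}

        \end{document}

Compiling this skeleton with pdflatex or latexmk against the unmodified acmart.cls should reproduce the required anonymous, two-column, 10pt layout.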

Submit your paper through the Cloud Intelligence/AIOps workshop 2026 Submission Site: https://aiops26.hotcrp.com/

Contact Us

Any questions may be directed to the PC chairs: cloudintelligenceworkshop@gmail.com.