Call for Papers

Important Dates

• Submissions due: November 18th (AoE), 2024
• Notification of acceptance: December 1st, 2024
• Camera-ready deadline: February 5th, 2025
• Workshop: TBD (during April 26th – 29th or May 3rd – 4th, 2025)

All deadlines are at 11:59pm Anywhere on Earth (AoE).

Overview

Digital transformation is happening across all industries, and running businesses on top of cloud services (e.g., SaaS, PaaS, IaaS) is becoming the core of this transformation. However, the large scale and high complexity of cloud services pose great challenges to the industry: operating cloud services at scale requires significant compute resources, domain knowledge, and human effort. Artificial intelligence and machine learning (AI/ML) play an important role in building and operating cloud services efficiently and effectively. We envision that, with advances in AI/ML and related technologies such as Large Language Models (LLMs), the cloud industry can make significant progress in the following areas while sustaining the exponential growth of the cloud:

• Cloud Efficiency: We have an opportunity to leverage service characteristics for optimal scaling, scheduling and packing to reduce the overall cost and carbon footprint.
• Resilient cloud services: Cloud services will have built-in capabilities of self-monitoring, self-diagnosis, and self-healing – all with minimal human intervention.
• Intelligent Ops: Users can easily use, maintain, and troubleshoot their workloads or get efficient support on top of the underlying cloud service offerings.
• AI Efficiency: The last two years have seen massive adoption of LLMs. Given the intense resource demands of LLM training and inference, cloud infrastructures (both hardware and software) are undergoing a massive transformation. Efficient training and inference will be key to increased adoption and long-term sustainability.

We are still at an early stage of realizing this vision. We advocate urgently driving and accelerating AI/ML for efficient and manageable cloud services through collaborative efforts across multiple areas, including but not limited to artificial intelligence, machine learning, software engineering, data analytics, and systems.

This workshop provides a forum for researchers and practitioners to present the state of research and practice in AI/ML for efficient and manageable cloud services, and to network with colleagues. Key topics of interest include:

• Resource scheduling and optimization
• Predictive capacity management
• Resource allocation and packing
• Service quality monitoring and anomaly detection
• Deployment and integration testing
• System configuration
• Hardware/software failure prediction
• Auto-diagnosis and problem localization
• Efficient ML training and inference
• Using LLMs for Cloud Ops
• Incident management
• Auto service healing
• Data center management
• Customer support
• Security and privacy in cloud operations

Attendance

For each accepted paper, at least one author must attend the workshop and present the paper.

Submission Instructions

The workshop invites submissions of manuscripts with original research results and contributions that have not been previously published and are not currently under review by another conference or journal. Submissions will be assessed on novelty, technical quality, potential impact, interest, clarity, relevance, and reproducibility. Submitted papers will be peer-reviewed and selected for oral or poster presentation. Accepted papers will be listed on the workshop’s website. We invite the following types of contributions:

• Technical Papers – Describing original research contributions; no more than six pages long.
• Abstracts – One-page abstracts describing early-stage ideas and results, which will be presented as lightning talks at the workshop.
• Project Showcase – Describing innovative solutions, tools, and deployed systems; no more than two pages long.
• Dataset Showcase – Describing relevant datasets that are publicly available and can be used by the research community; no more than two pages long.

Submissions must be double-blind and in PDF format. The page limit includes all content and references. Please state the type of submission (“Technical Papers”, “Abstracts”, “Project Showcase”, “Dataset Showcase”) on the first page, below the title.

Submissions must conform to the “ACM Primary Article Template”, which can be obtained from the ACM Proceedings Template page. LaTeX users should use the following option: <code>\documentclass[sigconf,review]{acmart}</code>
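As an illustration, a minimal document skeleton using that class option might look like the following sketch. The title, subtitle, author placeholder, and bibliography file name are illustrative assumptions, not requirements of the workshop:

```latex
\documentclass[sigconf,review]{acmart}

% Placeholder metadata; replace with your own.
\title{Your Paper Title}
% Per the instructions above, state the submission type below the title:
\subtitle{Technical Papers}
\author{Anonymous Author(s)} % keep the submission double-blind

\begin{document}
\maketitle

\section{Introduction}
Your content here.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % assumes a references.bib file

\end{document}
```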

Submit your paper through the Cloud Intelligence/AIOps workshop 2025 Submission Site: https://easychair.org/conferences/?conf=aiops2025

Contact Us

Any questions may be directed to the PC chairs: cloudintelligenceworkshop@gmail.com.