
Publication details

  • G. Lanciano, M. Stein, V. Hilt, T. Cucinotta. "Analyzing Declarative Deployment Code with Large Language Models," in Proceedings of the 13th International Conference on Cloud Computing and Services Science (CLOSER 2023), April 26-28, 2023, Prague, Czech Republic.

    Abstract

    In the cloud-native era, developers have at their disposal an unprecedented landscape of services for building scalable distributed systems. The DevOps paradigm emerged in response to the growing need for better automation, capable of dealing with the complexity of modern cloud systems. For instance, Infrastructure-as-Code tools provide a declarative way to define, track, and automate changes to the infrastructure underlying a cloud application. Assuring the quality of this part of a code base is of utmost importance. However, learning to produce robust deployment specifications is no easy feat, and it is time-consuming for domain experts to conduct code reviews and transfer the appropriate knowledge to novice team members. Given the abundance of data generated throughout the DevOps cycle, machine learning (ML) techniques seem a promising way to tackle this problem. In this work, we propose an approach based on Large Language Models to analyze declarative deployment code and automatically provide QA-related recommendations to developers, so that they can benefit from established best practices and design patterns. We developed a prototype of our proposed ML pipeline and empirically evaluated our approach on a collection of Kubernetes manifests exported from a repository of internal projects at Nokia Bell Labs.
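    To make the idea concrete, below is a minimal Python sketch, not the authors' pipeline, of the kind of QA recommendation such a tool could emit for a declarative Kubernetes manifest. The hand-written checks and the llm_recommend hook are hypothetical stand-ins for the LLM-based component described in the paper.

    # Minimal sketch (not the paper's implementation): illustrates QA-style
    # recommendations on a Kubernetes manifest. The paper's approach uses
    # Large Language Models; `llm_recommend` below is a hypothetical
    # placeholder for that component.
    import yaml  # PyYAML

    MANIFEST = """
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      template:
        spec:
          containers:
          - name: web
            image: nginx:latest
    """

    def heuristic_checks(doc):
        """Hand-written best-practice checks, standing in for learned ones."""
        issues = []
        containers = (doc.get("spec", {})
                         .get("template", {})
                         .get("spec", {})
                         .get("containers", []))
        for c in containers:
            if c.get("image", "").endswith(":latest"):
                issues.append(f"{c['name']}: pin the image tag instead of ':latest'")
            if "resources" not in c:
                issues.append(f"{c['name']}: set resource requests/limits")
        return issues

    def llm_recommend(manifest_text):
        # Hypothetical hook: in the paper, an LLM-based model analyzes the
        # manifest and produces the recommendations; a real call would go here.
        raise NotImplementedError

    if __name__ == "__main__":
        doc = yaml.safe_load(MANIFEST)
        for issue in heuristic_checks(doc):
            print("recommendation:", issue)

    Running the sketch prints two recommendations for the example Deployment (an unpinned image tag and missing resource limits), the kind of review feedback the paper aims to automate.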

    Copyright by SCITEPRESS.

    Download paper

