MCIS

Lab on Maintenance, Construction and Intelligence of Software


Software Maintenance

Releasing a software product is often seen as the end point of the software development process: everything's implemented, time to earn some cash! Of course, quite the opposite is true: 40 to 80% of software development costs are incurred during maintenance activities after release, i.e., fixing bugs, improving the user experience and adding new functionality. In the age of AI, maintenance also involves monitoring ML models for performance degradation (drift) and managing technical debt in complex engineering pipelines.

MCIS' mission is to help practitioners maintain their AI-powered software systems. For example, how can we detect and mitigate performance degradation (drift) in ML models? How do we manage technical debt in machine learning pipelines? How can we coordinate vulnerability fixes across large-scale software ecosystems? Which AI-product release-readiness checklists should be followed before deployment?
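
As a taste of what such a drift check can look like, here is a minimal sketch that compares a reference window of model confidence scores against a recent production window using a two-sample Kolmogorov-Smirnov test. The synthetic data, window sizes and significance threshold are illustrative assumptions, not a prescription:

    # Minimal drift check: compare model confidence scores at release time
    # against scores observed later in production (synthetic data below).
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(seed=42)
    reference = rng.normal(loc=0.80, scale=0.05, size=1000)   # release-time scores
    production = rng.normal(loc=0.72, scale=0.08, size=1000)  # recent scores

    # A small p-value suggests the two score distributions differ,
    # i.e., the model may be drifting and deserves a closer look.
    statistic, p_value = ks_2samp(reference, production)
    if p_value < 0.01:
        print(f"Possible drift (KS={statistic:.3f}, p={p_value:.2g})")
    else:
        print("No significant distribution shift detected")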

MCIS addresses these challenges through empirical research on software development process data stored in revision control systems (Git, ...), mailing list archives, bug repositories (Jira, GitHub Issues, ...), and modern model registries like Hugging Face. Typically, the outcomes of our research are models built using data mining, statistical analysis, and advanced machine learning techniques.
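
To illustrate the mining side, the sketch below counts bug-fixing commits per file as a rough proxy for maintenance hot-spots, assuming the PyDriller library (2.x). The keyword heuristic is a deliberate simplification of real defect-linking techniques such as SZZ, and the repository path is a placeholder:

    # Count bug-fixing commits per file as a rough hot-spot signal
    # (pip install pydriller; the repository path is a placeholder).
    from collections import Counter
    from pydriller import Repository

    BUG_KEYWORDS = ("fix", "bug", "defect")  # simplistic commit-message heuristic
    fix_counts = Counter()

    for commit in Repository("path/to/repo").traverse_commits():
        if any(kw in commit.msg.lower() for kw in BUG_KEYWORDS):
            for mod in commit.modified_files:
                fix_counts[mod.filename] += 1

    # Files touched by the most bug fixes are candidate maintenance hot-spots.
    for filename, count in fix_counts.most_common(10):
        print(f"{count:4d}  {filename}")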


Software Construction

Turning textual source code into an installable software product, ready to execute on the user's hardware platform, is at the heart of software construction. Practitioners need help releasing their traditional and AI-powered products faster through optimized CI/CD pipelines and Infrastructure-as-Code, significantly shortening the turn-around time of each commit.

MCIS' mission is to help practitioners release their products faster, but without sacrificing quality. To achieve this, one needs insight into the modern release engineering process. For example, how can we reduce redundant continuous integration activity through commit grouping and skip prediction? How do we optimize build batching algorithms at scale? How healthy is our software supply chain, and why do some builds fail to be reproducible across different ecosystems? How can we prepare our development process for seamless continuous delivery?
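
As one example of the commit-grouping ideas we study, the sketch below shows batch bisection: several commits are verified with a single build, and only a failing batch is split recursively to locate the culprits. Here, run_build is a hypothetical stand-in for invoking the actual CI system, and the sketch assumes failing commits do not mask each other:

    # Batch bisection sketch: verify several commits with one build and
    # bisect only when the combined build fails. When failures are rare,
    # this needs far fewer builds than building every commit separately.

    def run_build(commits):
        """Hypothetical hook: build/test the snapshot covering `commits`."""
        raise NotImplementedError("plug in your CI system here")

    def find_culprits(batch):
        """Return the commits responsible for build failures in `batch`."""
        if not batch or run_build(batch):
            return []       # the whole (sub-)batch is clean: one build sufficed
        if len(batch) == 1:
            return batch    # narrowed down to a single failing commit
        mid = len(batch) // 2
        return find_culprits(batch[:mid]) + find_culprits(batch[mid:])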

Using the same empirical techniques as for software maintenance, we address the questions above. In addition, our lab has unique expertise and data sets on release engineering and build systems (Bazel, GitHub Actions, ...), and close contacts with industry for access to "real-life" data.


Software Intelligence

Software Intelligence forms the heart of MCIS' research. We leverage Large Language Models (LLMs) and software analytics to facilitate program comprehension and development. From boosting LLM-based code translation using transitive intermediate translations to predicting code review hot-spots, we turn software data into actionable developer intelligence.

MCIS leverages AI to help practitioners understand and develop their software systems and infrastructure to facilitate maintenance, construction and other development activities. For example, how can we enhance LLM-based code translation using transitive intermediate translations? Which files in a pull request are most likely to need comments from code reviewers? How do we effectively manage ML assets and navigate foundation model leaderboards?
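
The sketch below illustrates the transitive translation idea: besides a direct translation, the model first translates to one or more pivot languages it handles well, then onward to the target, producing extra candidates to rank. The llm_translate helper and the pivot choices are hypothetical; swap in whichever LLM client and languages fit your setting:

    # Transitive code translation sketch: generate a direct candidate plus
    # pivot-based candidates, then rank them (e.g., by compiling and running
    # the original test suite). `llm_translate` is a hypothetical helper.

    def llm_translate(code: str, source_lang: str, target_lang: str) -> str:
        """Hypothetical wrapper around an LLM translation prompt."""
        raise NotImplementedError("call your LLM of choice here")

    def translation_candidates(code, source, target, pivots=("Python", "Go")):
        """Yield a direct translation plus one candidate per pivot language."""
        yield llm_translate(code, source, target)          # direct baseline
        for pivot in pivots:
            intermediate = llm_translate(code, source, pivot)
            yield llm_translate(intermediate, pivot, target)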

Our intelligence research covers advanced model versioning (e.g., semantic versioning of Hugging Face models), large language models for code, and human-AI collaboration. Given that knowledge is crucial in software development, software intelligence is central to MCIS' mission.
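
To make the model-versioning angle concrete, here is a toy sketch of how a semantic-version bump could be suggested for a new model revision. The rules and the flat metadata format are illustrative assumptions, not a published scheme:

    # Toy semantic-versioning rules for ML model revisions (illustrative).
    def suggest_bump(old, new):
        """Suggest a version bump by comparing two model metadata dicts."""
        if (old["architecture"] != new["architecture"]
                or old["tokenizer"] != new["tokenizer"]):
            return "major"  # interface change: downstream code likely breaks
        if old["training_data"] != new["training_data"]:
            return "minor"  # behavior changes, interface stays compatible
        return "patch"      # e.g., metadata or model-card fixes only

    old = {"architecture": "bert-base", "tokenizer": "wordpiece", "training_data": "v1"}
    new = {"architecture": "bert-base", "tokenizer": "wordpiece", "training_data": "v2"}
    print(suggest_bump(old, new))  # -> minor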

Latest Work

Doriane Olewicki, Leuson Da Silva, Oussama Ben Sghaier, Suhaib Mujahid, Arezou Amini, Benjamin Mah, Marco Castelluccio, Sarra Habchi, Foutse Khomh and Bram Adams (2026). Impact of LLM-based Review Assistant in Practice: A Mixed Open-/Closed-source Case Study, Transactions on Software Engineering (TSE), IEEE, to appear.


Hao Li, Hicham Masri, Filipe Roseiro Côgo, Abdul Ali Bangash, Bram Adams and Ahmed E. Hassan (2026). Understanding Prompt Management in GitHub Repositories: A Call for Best Practices, IEEE Software, IEEE, to appear.