Intelligent Collaboration Tools Lab

Software engineers spend as much time exchanging information and collaborating with colleagues as they do writing code. Collaborative work in software engineering increasingly relies on specialized tools, such as communication platforms, issue trackers, and code review platforms.

Our Intelligent Collaboration Tools Lab is committed to gaining a deeper understanding of collaborative processes in software engineering and other creative industries, and devising novel approaches to tool support for collaborative work.

Research Directions

Analysis of Development Traces

Traces of collective development activity are a rich source of data that can be used to support the development process. To extract valuable information from development history, we work on approaches that include recommender systems, expertise modeling, analytics engines, and question-answering techniques.

Code Review Research

We work on improving code review tools with smart features based on the analysis of code changes and developers' communication history.

Messaging Support

We build techniques and tools to support knowledge acquisition and search functionality in large messenger workspaces. This includes recommender systems and question-answering techniques.

Problems in Collective Development

We systematically investigate issues in the collective work of software engineers and explore ways to address these challenges with tools. These findings inform our other research directions.

Test Generation and Crash Reproduction

We apply established techniques for test generation and crash reproduction in real-world software engineering tools and environments, and we investigate the use of large language models for these tasks.

Seminars

We host open seminars and reading club meetings where we present interesting results, both our own and others'.

Please join our meetup group to stay informed about upcoming sessions.

Would you like to present? Just shoot Vladimir an email.

Publications

September 2023
Egor Klimov, Muhammad Umair Ahmed, Nikolai Sviridov, Pouria Derakhshanfar, Eray Tuzun, Vladimir Kovalenko

Bus Factor Explorer

ASE 2023, Luxembourg

Bus factor (BF) is a metric that tracks knowledge distribution in a project: the minimal number of engineers that have to leave for the project to stall. Although several algorithms for calculating the bus factor exist, only a few tools allow easy calculation of the metric and convenient analysis of the results for projects hosted on Git-based providers.

We introduce Bus Factor Explorer, a web application that provides an interface and an API to compute, export, and explore the bus factor metric via a treemap visualization, a simulation mode, and a chart editor. It supports repositories hosted on GitHub, with the ability to search for repositories in the interface and process many repositories at the same time. By analyzing the VCS history, our tool allows users to identify the files and subsystems at risk of stalling in the event of developer turnover.

The application and its source code are publicly available on GitHub at https://github.com/JetBrains-Research/bus-factor-explorer. The demonstration video can be found on YouTube: https://youtu.be/uIoV79N14z8
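To illustrate the metric itself, here is a minimal sketch of a greedy bus factor estimate. This is not the algorithm used by Bus Factor Explorer; the authorship data shape, the "major contributor" threshold, and the 50% stall criterion are simplifying assumptions for the example.

```python
from collections import Counter

def bus_factor(file_authors):
    """Greedy bus factor estimate.

    file_authors maps each file path to {author: contribution weight},
    e.g. the number of commits touching that file. A file counts as
    covered while at least one contributor with a significant share
    (>= half of the file's top weight) is still present; the project
    is considered stalled once more than half of the files are
    uncovered. Returns the number of departures needed to stall.
    """
    def covered(gone):
        count = 0
        for authors in file_authors.values():
            present = {a: w for a, w in authors.items() if a not in gone}
            if present and max(present.values()) >= max(authors.values()) / 2:
                count += 1
        return count

    gone = set()
    total = len(file_authors)
    while covered(gone) * 2 > total:
        # Greedily remove the remaining author who is a major
        # contributor to the largest number of files.
        ownership = Counter()
        for authors in file_authors.values():
            top = max(authors.values())
            for a, w in authors.items():
                if a not in gone and w >= top / 2:
                    ownership[a] += 1
        if not ownership:
            break
        gone.add(ownership.most_common(1)[0][0])
    return len(gone)

# Hypothetical per-file authorship extracted from a VCS history:
history = {
    "a.py": {"alice": 10, "bob": 1},
    "b.py": {"alice": 8},
    "c.py": {"bob": 5, "alice": 4},
}
```

Here alice is the sole major contributor to two of the three files, so her departure alone stalls the project under these assumptions.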

June 2023
Farid Bagirov, Pouria Derakhshanfar, Alexey Kalina, Elena Kartysheva, Vladimir Kovalenko

Assessing the Impact of File Ordering Strategies on Code Review Process

EASE 2023, Oulu, Finland

Popular modern code review tools (e.g., Gerrit and GitHub) sort the files in a code review alphabetically. A prior study of open-source projects showed that a changed file's position in the code review affects the review process: files placed lower in the order have a lower chance of receiving reviewing effort than the other files, and hence a higher chance of containing missed defects. This paper explores the impact of file order in code reviews of the well-known industrial project IntelliJ IDEA. First, we verify the results of the prior study on a large proprietary software project. Then, we explore an alternative to the default alphabetical order: ordering the changed files according to their code diff. Our results confirm the observations of the previous study: reviewers leave more comments on the files shown higher in the code review. Moreover, even with the data skewed toward alphabetical order, ordering changed files according to their code diff outperforms the standard alphabetical order at placing problematic files, which need more reviewing effort, higher in the code review. These results suggest that alternative file ordering strategies for code review merit further investigation.
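A rough sketch of the alternative ordering follows. Ordering by total diff size (lines added plus deleted, largest first) is an assumption made for illustration, not necessarily the paper's exact implementation.

```python
def order_by_diff(changed_files):
    """Order changed files by the size of their diff, largest first,
    so that files likely to need the most reviewing effort appear
    at the top of the code review instead of alphabetically.

    changed_files: list of (path, lines_added, lines_deleted) tuples.
    """
    return sorted(changed_files,
                  key=lambda f: f[1] + f[2],
                  reverse=True)

# A hypothetical changeset:
review = [
    ("README.md", 2, 0),
    ("src/core/engine.py", 120, 45),
    ("tests/test_engine.py", 60, 10),
]
ordered = order_by_diff(review)
```

Under the default alphabetical order, README.md would be shown first; here the largest change, src/core/engine.py, comes first instead.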

Group Members

Vladimir Kovalenko
Head of Research Lab
Pouria Derakhshanfar
Senior Researcher
Farid Bagirov
Researcher
Kirill Bochkarev
Researcher
Mikhail Evtikhiev
Researcher
Egor Klimov
Researcher
Ekaterina Koshchenko
Researcher
Elena Lyulina
Researcher
Sergey Titov
Researcher
Nikolai Sviridov
Software Developer
Arkadii Sapozhnikov
Software Developer
Vahid Haratian
Intern
Ekaterina Itsenko
Intern
Ekaterina Braun
Intern
Evgeniia Kirillova
Intern

Past Members

Elgun Jabrayilzade
Erdem Tuna
Bilkent University and Picus Security
Alexander Agroskin
Weizmann Institute of Science
Muhammad Umair Ahmed
Ruslan Salkaev