Paper Reading: Refactoring vs Refuctoring: Advancing the state of AI-automated code improvements

Event description

"Refactoring vs Refuctoring" is the conclusion of a benchmark study of the most popular Large Language Models (LLMs) and their ability to generate code for refactoring tasks. This paper aims to illustrate the current standards and limitations, and seek to show new methodologies with higher confidence results.

What's in store?

In this paper reading session, we will be examining:

  • The performance of the most popular Large Language Models (LLMs) on code refactoring
  • How and where AI-generated refactorings go wrong
  • A novel innovation by CodeScene for fact-checking AI output and augmenting the proposed refactorings by measuring confidence levels

Companies build substantial codebases over time (sometimes decades). As they grow, development teams invest significant, ongoing time and energy in code refactoring (also covered in this paper).

But how well do existing AI tools and models actually perform this task? Are we yet in a position to confidently say "AI will replace software developers"? Join this session as we dive deep into the paper to find answers.

Link to the paper: Refactoring vs Refuctoring: Advancing the state of AI automated code improvements

Event Recording: https://www.jointaro.com/lesson/OqiSLswzswIp9GHqkFpS/paper-reading-refactoring-vs-refuctoring-advancing-the-state-of-ai-automated-code-improvements/