Automated Unit Test Improvement using Large Language Models at Meta

Event description

This paper describes Meta’s TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests. TestGen-LLM verifies that its generated test classes successfully clear a set of filters that assure measurable improvement over the original test suite, thereby eliminating problems due to LLM hallucination.
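
To make the filtering idea concrete, here is a minimal sketch in Python of what such a filter cascade could look like. All names here (clears_filters, builds, passes, coverage_with, reruns) are illustrative assumptions for this event page, not Meta's internal APIs; the paper itself applies the filters to generated Kotlin test classes inside Meta's build and CI infrastructure.

```python
from typing import Callable

def clears_filters(
    candidate_test_class: str,
    builds: Callable[[str], bool],        # hypothetical: does the generated class build?
    passes: Callable[[str], bool],        # hypothetical: does the class pass when executed?
    coverage_with: Callable[[str], float],# hypothetical: coverage with the class added
    baseline_coverage: float,             # coverage of the original test suite
    reruns: int = 5,                      # assumption: repeat runs to reject flaky tests
) -> bool:
    """Accept a generated test class only if it measurably improves the suite."""
    # Filter 1: reject candidates that do not build.
    if not builds(candidate_test_class):
        return False
    # Filter 2: reject candidates that do not pass reliably across repeated runs.
    if not all(passes(candidate_test_class) for _ in range(reruns)):
        return False
    # Filter 3: keep only candidates that increase coverage over the original suite.
    return coverage_with(candidate_test_class) > baseline_coverage

# Illustrative use with stubbed checks (not real build/coverage tooling):
accepted = clears_filters(
    "class ExampleTestExtended { /* generated test code */ }",
    builds=lambda src: True,
    passes=lambda src: True,
    coverage_with=lambda src: 0.62,
    baseline_coverage=0.55,
)
```

Only candidates that clear every filter are surfaced to engineers for review, which is how the tool avoids recommending hallucinated or non-improving tests.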

We describe the deployment of TestGen-LLM at Meta test-a-thons for the Instagram and Facebook platforms. In an evaluation on Reels and Stories products for Instagram, 75% of TestGen-LLM’s test cases built correctly, 57% passed reliably, and 25% increased coverage. During Meta’s Instagram and Facebook test-a-thons, it improved 11.5% of all classes to which it was applied, with 73% of its recommendations being accepted for production deployment by Meta software engineers.

Here is the link to the paper: https://arxiv.org/abs/2402.09171

Authors:

  • Nadia Alshahwan
  • Jubin Chheda
  • Anastasia Finegenova