LEGOBENCH: Scientific Leaderboard Generation Benchmark

Source
Findings of EMNLP 2024 (2024 Conference on Empirical Methods in Natural Language Processing)
Date Issued
2024-01-01
Author(s)
Singh, Shruti
Alam, Shoaib
Malwat, Husain
Singh, Mayank
DOI
10.18653/v1/2024.findings-emnlp.855
Abstract
The ever-increasing volume of paper submissions makes it difficult to stay informed about the latest state-of-the-art research. To address this challenge, we introduce LEGOBENCH, a benchmark for evaluating systems that generate scientific leaderboards. LEGOBENCH is curated from 22 years of preprint submission data on arXiv and more than 11k machine learning leaderboards on the PapersWithCode portal. We present one language model-based and four graph-based leaderboard generation task configurations. We evaluate popular encoder-only scientific language models as well as decoder-only large language models across these task configurations. State-of-the-art models show significant performance gaps in automatic leaderboard generation on LEGOBENCH. The code is available on GitHub and the dataset is hosted on OSF.
URI
https://d8.irins.org/handle/IITG2025/28463
IITGN Knowledge Repository Developed and Managed by Library