LLaMEA, or Large Language Model Evolutionary Algorithm, is an innovative framework developed by the XAI research group at NACO, led by Niki van Stein. It leverages large language models (LLMs), such as GPT-4, to automate the generation and refinement of algorithms such as metaheuristic optimizers. By iteratively evolving algorithms based on performance metrics and runtime evaluations, LLaMEA streamlines the optimization process without requiring extensive prior algorithmic knowledge.
See also the introductory YouTube video.
Key features of LLaMEA include:
Automated Algorithm Generation: Utilizes GPT models to create and enhance algorithms automatically.
Performance Evaluation: Integrates with IOHexperimenter and other evaluators for real-time feedback, guiding the evolutionary process.
Customizable Evolution Strategies: Allows configuration of strategies to effectively explore algorithmic design spaces.
Extensible and Modular Design: Offers flexibility for users to incorporate other models and evaluation tools.
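The evolutionary loop behind these features can be illustrated with a minimal, self-contained sketch. Note that this is not the actual LLaMEA API: the `mock_llm`, `evaluate`, and `evolve` functions below are hypothetical stand-ins that mimic the idea of an LLM proposing candidates, an evaluator scoring them, and performance feedback guiding the next proposal.

```python
import random

def mock_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. GPT-4). A real LLM would return
    full algorithm code; here it just proposes a step size for a toy
    random-search optimizer."""
    return str(random.uniform(0.01, 1.0))

def evaluate(step_size: float, budget: int = 200) -> float:
    """Toy evaluator: run random search with the given step size on the
    sphere function and return the best value found (lower is better).
    In LLaMEA this role is played by IOHexperimenter or another evaluator."""
    x = [5.0, 5.0]
    best = sum(v * v for v in x)
    for _ in range(budget):
        cand = [v + random.gauss(0, step_size) for v in x]
        f = sum(v * v for v in cand)
        if f < best:
            best, x = f, cand
    return best

def evolve(iterations: int = 10) -> tuple[float, float]:
    """LLaMEA-style (1+1) loop: keep the best candidate so far and ask
    the 'LLM' for a new one, feeding the current score back via the prompt."""
    best_param, best_score = None, float("inf")
    for i in range(iterations):
        prompt = f"Iteration {i}: best score so far is {best_score:.4f}. Propose a step size."
        param = float(mock_llm(prompt))
        score = evaluate(param)
        if score < best_score:
            best_param, best_score = param, score
    return best_param, best_score

best_param, best_score = evolve()
print(f"best step size {best_param:.3f} -> score {best_score:.4f}")
```

The real framework evolves entire algorithm implementations (code, not a single parameter) and uses richer feedback such as runtime errors and anytime-performance metrics, but the select-propose-evaluate cycle is the same.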
This framework is particularly beneficial for both research and practical applications in fields where optimization is crucial. For more details, including installation instructions and usage guidelines, please visit the project's GitHub Repository. In addition, an accompanying benchmarking framework with additional real-world problems and baselines is available in the BLADE GitHub Repository.
The research on LLaMEA and the algorithms it has generated have won the following prestigious awards:
🥈 Silver Award at the GECCO 2025 Humies competition
🏅 Winner of the GECCO 2025 Any-Time Performance for Affine BBOB competition
🏅 Winner of the GECCO 2024 Any-Time Performance for Affine BBOB competition