Large Language Models (LLMs) have shown notable potential in code generation for optimization algorithms, unlocking exciting new opportunities. This paper examines how LLMs, rather than creating algorithms from scratch, can improve existing ones without the need for specialized expertise. To explore this potential, we selected 10 baseline optimization algorithms from various domains (metaheuristics, reinforcement learning, deterministic, and exact methods) to solve the classic Travelling Salesman Problem. The results show that our simple methodology often produces LLM-generated algorithm variants that improve over the baselines in terms of solution quality, computational time, and code complexity, all without requiring specialized optimization knowledge or advanced algorithmic implementation skills.
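For context, the sketch below shows what a simple constructive TSP baseline of this kind might look like; nearest neighbour is used here purely as an illustration and is not claimed to be one of the ten algorithms studied in the paper.

```python
import math

def nearest_neighbour_tour(coords, start=0):
    """Greedy TSP construction: always move to the closest unvisited city.
    (Illustrative baseline only; not necessarily one of the paper's ten algorithms.)"""
    n = len(coords)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(coords[last], coords[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(coords, tour):
    """Total length of the closed tour (returning to the start city)."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

if __name__ == "__main__":
    cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]  # toy instance
    t = nearest_neighbour_tour(cities)
    print(t, round(tour_length(cities, t), 2))
```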
Based on all evaluations presented in this paper, we can state that, among the five LLMs tested, DeepSeek-R1 generally produced the best results, followed by GPT-O1. Gemini-exp-1206 performed well in certain cases, such as ACO and ALNS, but underperformed in others, such as Christofides. Among all tested models, Claude-3.5-Sonnet showed the lowest performance.
In summary, the LLM-enhanced code versions clearly outperformed the original implementations for nine of the ten algorithms. Q_Learning was the only case in which none of the models was able to improve the original code; here, Llama-3.3-70b merely matched the performance of the original implementation.
Our research demonstrates that LLMs can significantly enhance the performance of 10 baseline optimization algorithms for a classic combinatorial problem: the Travelling Salesman Problem. These improvements not only resulted in higher solution quality but, in some cases, also in reduced computation times. By utilizing in-context prompting techniques, we were able to optimize existing code through better data structures, the incorporation of modern heuristics, and a reduction in code complexity. This approach is fully reproducible via a chatbot that is available on our project website.
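To make the in-context prompting step concrete, the sketch below shows the kind of improvement prompt one might send to an LLM together with the baseline source code. The prompt wording, the `query_llm` helper, and the file name are hypothetical illustrations, not the exact prompt or chatbot interface used in the paper.

```python
# Minimal sketch of an in-context improvement prompt. The wording, the
# query_llm helper, and the file name are hypothetical illustrations,
# not the exact prompt or chatbot interface used in the paper.

IMPROVEMENT_PROMPT = """You are given a working Python implementation of an
optimization algorithm for the Travelling Salesman Problem.

Task: produce an improved variant of this code. You may use more efficient
data structures, incorporate modern heuristics, and simplify the code, but
keep the function signatures and the problem being solved unchanged.
Return only the complete, runnable Python code.

Baseline code:
{baseline_code}
"""

def build_prompt(baseline_code: str) -> str:
    """Embed the baseline source code into the improvement prompt."""
    return IMPROVEMENT_PROMPT.format(baseline_code=baseline_code)

def query_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to an LLM (API or chat interface)
    and return the generated code. Deliberately left unimplemented here."""
    raise NotImplementedError

if __name__ == "__main__":
    with open("baseline_tsp_algorithm.py") as f:  # assumed file name
        baseline = f.read()
    improved_code = query_llm(build_prompt(baseline))
```

The generated variant would then be run on the same TSP instances as the baseline so that solution quality and runtime can be compared directly.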
Building on this foundation, future work will extend these advancements to less common optimization problems. Moreover, we plan to explore additional enhancements, such as leveraging LLMs to migrate to more efficient programming languages or integrating LLM-based agents to automate and continuously improve the enhancement process for existing algorithms.
@misc{sartori2025combinatorialoptimizationallusing,
title={Combinatorial Optimization for All: Using LLMs to Aid Non-Experts in Improving Optimization Algorithms},
author={Camilo Chacón Sartori and Christian Blum},
year={2025},
eprint={2503.10968},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2503.10968},
}