Combinatorial Optimization for All:
Using LLMs to Aid Non-Experts in Improving Optimization Algorithms

Artificial Intelligence Research Institute (IIIA-CSIC)
*Corresponding author: cchacon@iiia.csic.es

A non-expert user’s interaction with an LLM can enhance an existing genetic algorithm by incorporating modern techniques.

Abstract

Large Language Models (LLMs) have shown notable potential in code generation for optimization algorithms, unlocking exciting new opportunities. This paper examines how LLMs, rather than creating algorithms from scratch, can improve existing ones without the need for specialized expertise. To explore this potential, we selected 10 baseline optimization algorithms from various domains (metaheuristics, reinforcement learning, deterministic, and exact methods) to solve the classic Travelling Salesman Problem. The results show that our simple methodology frequently yields LLM-generated algorithm variants that improve over the baseline algorithms in terms of solution quality, computational time, and code complexity, all without requiring specialized optimization knowledge or advanced algorithmic implementation skills.


Results

Comparison of ten human-implemented algorithms with five AI-generated versions of each, applied to the Travelling Salesman Problem (TSP). Since this is a minimization problem, lower values indicate better performance.

The analyzed algorithms are categorized into four distinct groups, as detailed below.


Metaheuristics

Comparison of the metaheuristic codes generated by the five LLMs with the original codes. Remember that the TSP is a minimization problem, that is, the lower the values, the better. The y-axes are shown in a logarithmic scale.
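
To make the metaheuristic category concrete, below is a minimal 2-opt local-search sketch for the TSP in Python. It is not the code evaluated in the paper; the coordinate-based distance matrix and the tour representation are assumptions chosen for illustration only.

    import random

    def tour_length(tour, dist):
        """Total length of a closed tour given a distance matrix."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt(tour, dist):
        """Repeatedly reverse tour segments while this shortens the tour (2-opt)."""
        improved = True
        while improved:
            improved = False
            n = len(tour)
            for i in range(1, n - 1):
                for j in range(i + 1, n):
                    a, b = tour[i - 1], tour[i]
                    c, d = tour[j], tour[(j + 1) % n]
                    # Gain of replacing edges (a,b) and (c,d) by (a,c) and (b,d)
                    if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                        tour[i:j + 1] = reversed(tour[i:j + 1])
                        improved = True
        return tour

    # Example usage on a small random instance
    random.seed(0)
    n = 8
    coords = [(random.random(), random.random()) for _ in range(n)]
    dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in coords]
            for (x1, y1) in coords]
    tour = two_opt(list(range(n)), dist)
    print(tour, tour_length(tour, dist))

Local-search steps of this kind are the typical building blocks that the evaluated metaheuristics (e.g., ACO, ALNS) combine with higher-level strategies.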

Reinforcement Learning

Comparison of the reinforcement learning (RL) codes generated by the five LLMs with the original codes. Remember that the TSP is a minimization problem, that is, the lower the values, the better. The y-axes are shown in a logarithmic scale.
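
For context, the RL baseline is based on Q-learning. A minimal tabular Q-learning sketch for a tiny TSP instance is shown below; the state encoding (current city plus the set of visited cities), the reward of minus the travelled distance, and all hyperparameters are illustrative assumptions, not the paper's implementation.

    import random
    from collections import defaultdict

    def q_learning_tsp(dist, episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
        """Tabular Q-learning for a small TSP: state = (current city, visited set)."""
        rng = random.Random(seed)
        n = len(dist)
        Q = defaultdict(float)  # Q[(state, action)] -> estimated value

        for _ in range(episodes):
            current, visited = 0, frozenset([0])
            while len(visited) < n:
                state = (current, visited)
                actions = [c for c in range(n) if c not in visited]
                # Epsilon-greedy action selection
                if rng.random() < eps:
                    nxt = rng.choice(actions)
                else:
                    nxt = max(actions, key=lambda a: Q[(state, a)])
                reward = -dist[current][nxt]
                new_visited = visited | {nxt}
                next_state = (nxt, new_visited)
                next_actions = [c for c in range(n) if c not in new_visited]
                # At the terminal state the only remaining cost is returning to the depot
                best_next = max((Q[(next_state, a)] for a in next_actions),
                                default=-dist[nxt][0])
                # Q-learning update
                Q[(state, nxt)] += alpha * (reward + gamma * best_next - Q[(state, nxt)])
                current, visited = nxt, new_visited

        # Greedy rollout of the learned policy
        tour, visited, current = [0], {0}, 0
        while len(visited) < n:
            state = (current, frozenset(visited))
            nxt = max((c for c in range(n) if c not in visited), key=lambda a: Q[(state, a)])
            tour.append(nxt); visited.add(nxt); current = nxt
        return tour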

Deterministic/Heuristics

Comparison of the deterministic heuristic codes generated by the five LLMs with the original codes. The bar plots show the performance gaps (in percent) relative to the original codes. Note that a positive value indicates that the LLM-generated code produces a better solution.
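
As a simple point of reference for this category, a deterministic construction heuristic such as nearest neighbour can be written in a few lines. The sketch below is illustrative only and is not necessarily one of the heuristics evaluated in the paper (e.g., Christofides).

    def nearest_neighbour_tour(dist, start=0):
        """Greedy nearest-neighbour construction heuristic for the TSP."""
        n = len(dist)
        tour, visited = [start], {start}
        while len(tour) < n:
            last = tour[-1]
            # Pick the closest city that has not been visited yet
            nxt = min((c for c in range(n) if c not in visited),
                      key=lambda c: dist[last][c])
            tour.append(nxt)
            visited.add(nxt)
        return tour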

Exact

Comparison of the Branch & Bound (BB) codes generated by the five chosen LLMs with the original BB code (in terms of computation time). Each code was applied 30 times, and the y-axis is shown in a logarithmic scale.
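
To make the exact category concrete, a bare-bones branch-and-bound for the TSP can be sketched as follows. The simple lower bound (cost so far plus the cheapest outgoing edge of each city still to be left) and the depth-first recursion scheme are illustrative assumptions and are much weaker than a tuned BB implementation such as the one evaluated here.

    import math

    def branch_and_bound_tsp(dist):
        """Depth-first branch and bound for the TSP with a simple edge-based lower bound."""
        n = len(dist)
        cheapest = [min(dist[i][j] for j in range(n) if j != i) for i in range(n)]
        best_cost, best_tour = math.inf, None

        def bound(cost, current, remaining):
            # Lower bound: cost so far + cheapest edge leaving each city still to be left
            return cost + cheapest[current] + sum(cheapest[c] for c in remaining)

        def search(current, remaining, cost, path):
            nonlocal best_cost, best_tour
            if not remaining:
                total = cost + dist[current][0]  # close the tour
                if total < best_cost:
                    best_cost, best_tour = total, path + [0]
                return
            if bound(cost, current, remaining) >= best_cost:
                return  # prune this branch
            for nxt in sorted(remaining, key=lambda c: dist[current][c]):
                search(nxt, remaining - {nxt}, cost + dist[current][nxt], path + [nxt])

        search(0, frozenset(range(1, n)), 0.0, [0])
        return best_tour, best_cost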

Cyclomatic Complexity

Cyclomatic complexity is a metric that quantifies the number of independent paths through the source code of a program. Lower values are better.
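
If you want to reproduce this kind of measurement yourself, one common option in Python is the third-party radon package; using radon here is an assumption for illustration, not necessarily the tool used in the paper. Its cc_visit function reports the cyclomatic complexity of each function in a source string.

    # Requires "pip install radon" (assumption: radon is used only for illustration)
    from radon.complexity import cc_visit

    source = """
    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        else:
            return "positive"
    """

    for block in cc_visit(source):
        # Each decision point (if/elif) adds an independent path, so complexity here is 3
        print(block.name, block.complexity)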

Summary

Based on all evaluations presented in this paper, we can state that among the five LLMs tested, DeepSeek-R1 generally produces the best results, followed by GPT-O1. Gemini-exp-1206 performed well in certain cases, such as ACO and ALNS; however, it underperformed in others, such as Christofides. Among all tested models, Claude-3.5-Sonnet showed the lowest performance.

In summary, LLM-enhanced code versions clearly outperformed the original implementations in nine out of ten algorithms. Only for Q_Learning was no model able to improve on the original code; in this case, Llama-3.3-70b matched the performance of the original.

Conclusion

Our research demonstrates that Large Language Models (LLMs) can significantly enhance the performance of 10 baseline optimization algorithms for a classic combinatorial problem: the Travelling Salesman Problem. These improvements not only resulted in higher solution quality but sometimes also in reduced computation times. By utilizing in-context prompting techniques, we were able to optimize existing code through better data structures, the incorporation of modern heuristics, and a reduction in code complexity. This approach is fully reproducible via a chatbot that is available on our project website.
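
As a rough illustration of such in-context prompting, a request to an LLM could be structured as in the sketch below. The wording is a hypothetical example, not the exact prompt used in the paper (which is reproducible via the project chatbot).

    # Hypothetical prompt skeleton for asking an LLM to improve an existing algorithm;
    # the wording is illustrative and not the exact prompt used in the paper.
    baseline_code = "<paste the original algorithm's source code here>"

    prompt = (
        "You are an expert in combinatorial optimization.\n"
        "Below is a working implementation of an optimization algorithm for the TSP.\n"
        "Improve it without changing its input/output interface: use better data "
        "structures, incorporate modern heuristic components where appropriate, and "
        "reduce code complexity. Return only the complete revised Python code.\n\n"
        + baseline_code
    )
    # The prompt would then be sent to the chosen LLM's chat interface or API.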

Building on this foundation, future work will extend these advancements to less common optimization problems. Moreover, we plan to explore additional enhancements, such as leveraging LLMs to migrate to more efficient programming languages or integrating LLM-based agents to automate and continuously improve the enhancement process for existing algorithms.

LLMs can assist not only researchers in enhancing their optimization algorithms but also non-experts seeking quick and efficient solutions, as well as those using them for educational purposes.

BibTeX


        @misc{sartori2025combinatorialoptimizationallusing,
          title={Combinatorial Optimization for All: Using LLMs to Aid Non-Experts in Improving Optimization Algorithms}, 
          author={Camilo Chacón Sartori and Christian Blum},
          year={2025},
          eprint={2503.10968},
          archivePrefix={arXiv},
          primaryClass={cs.AI},
          url={https://arxiv.org/abs/2503.10968}, 
        }