Frontier-CS is an unsolved, open-ended, verifiable, and diverse benchmark for evaluating AI on challenging computer science problems.
Think of it as an "exam" for AI, but instead of easy textbook questions, we give problems that are genuinely difficult: ones that researchers struggle with, that have no known optimal solutions, or that require deep expertise to even attempt.
Current benchmarks are becoming too easy. Models score 90%+ on many existing coding benchmarks, but that doesn't mean they can actually do useful research or solve real-world engineering challenges.
Frontier-CS is different:
| | Traditional Benchmarks | Frontier-CS |
|---|---|---|
| Difficulty | Often saturated as intelligence evolves | Unsolved: no solution has achieved a perfect score |
| Problems | Textbook-style, known solutions | Open-ended research & optimization challenges |
| Evaluation | Binary pass-or-fail | Verifiable continuous scoring, always room to improve |
| Scope | Usually one domain | Diverse: systems, ML, algorithms, security, and more |
Browse the leaderboard and example problems at frontier-cs.org.
```bash
git clone https://github.com/FrontierCS/Frontier-CS.git
cd Frontier-CS

# Install dependencies (using uv, recommended)
uv sync

# Or with pip:
pip install -e .
```

Here's Algorithmic Problem 0 - try to beat GPT-5!
```bash
# Start the judge server
cd algorithmic && docker compose up -d

# Run the example solution (Human Expert Solution)
frontier-eval --algorithmic 0 problems/0/examples/reference.cpp

# Run the example solution (GPT-5 Thinking Solution)
frontier-eval --algorithmic 0 problems/0/examples/gpt5.cpp

# Try your own solution!
frontier-eval --algorithmic 0 <your_solution.cpp>
```

For research problems:

```bash
# List all problems
frontier-eval --list

# Evaluate a generated solution locally for the flash_attn problem (requires Docker)
frontier-eval flash_attn <your_solution.py>

# Evaluate on cloud (requires SkyPilot)
frontier-eval flash_attn <your_solution.py> --skypilot
```

See research/README.md for full documentation.
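The same evaluation can also be scripted through the Python SDK documented later in this README. A minimal sketch, assuming your flash_attn candidate lives at solution.py (a placeholder path):

```python
from pathlib import Path

from frontier_cs import FrontierCSEvaluator

# Read your candidate solution from disk (the path is a placeholder).
code = Path("solution.py").read_text()

# Score it against the flash_attn research problem, mirroring the CLI
# invocation above, and print the continuous score.
evaluator = FrontierCSEvaluator()
result = evaluator.evaluate("research", problem_id="flash_attn", code=code)
print(f"flash_attn score: {result.score}")
```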
```bash
# Start the judge server
cd algorithmic && docker compose up -d

# Evaluate a solution
frontier-eval --algorithmic 1 <your_solution.cpp>
```

Frontier-CS supports unbounded scoring for algorithmic problems, enabling open-ended evaluation compatible with algorithm-evolution frameworks such as OpenEvolve (a minimal sketch follows the notes below).

```bash
# Get the unbounded score (without clipping to 100)
frontier-eval --algorithmic --unbounded 1 <your_solution.cpp>
```

Notes:

- We currently support C++17 only for algorithmic problem solutions.
- Reference solutions and hidden tests are withheld; full evaluation and leaderboard inclusion require submission.

See algorithmic/README.md for full documentation.
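To make the evolution workflow concrete, here is a minimal hill-climbing sketch built on the Python SDK documented below. The `mutate` function and the seed path are hypothetical stand-ins for whatever your framework (e.g. OpenEvolve) provides; the unbounded score serves as the fitness signal:

```python
from pathlib import Path

from frontier_cs import FrontierCSEvaluator

def mutate(source: str) -> str:
    # Hypothetical placeholder: an evolution framework such as OpenEvolve
    # would rewrite the candidate program here. The identity function keeps
    # this sketch runnable but makes no progress on its own.
    return source

evaluator = FrontierCSEvaluator()
best_code = Path("seed_solution.cpp").read_text()  # placeholder seed program
best = evaluator.evaluate("algorithmic", problem_id=1, code=best_code, unbounded=True)

for _ in range(100):
    candidate = mutate(best_code)
    result = evaluator.evaluate("algorithmic", problem_id=1, code=candidate, unbounded=True)
    # Unbounded scores are not clipped to 100, so the fitness signal keeps
    # discriminating even after a candidate would saturate the bounded score.
    if result.score_unbounded > best.score_unbounded:
        best_code, best = candidate, result

print(f"Best unbounded score: {best.score_unbounded}")
```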
```python
from frontier_cs import FrontierCSEvaluator

evaluator = FrontierCSEvaluator()

# Evaluate a research problem
result = evaluator.evaluate("research", problem_id="flash_attn", code=my_code)
print(f"Score: {result.score}")

# Evaluate an algorithmic problem
result = evaluator.evaluate("algorithmic", problem_id=1, code=cpp_code)
print(f"Score: {result.score}")

# Get unbounded score for algorithmic problems
result = evaluator.evaluate("algorithmic", problem_id=1, code=cpp_code, unbounded=True)
print(f"Score (bounded): {result.score}")
print(f"Score (unbounded): {result.score_unbounded}")
```

We release partial test cases so you can develop and debug locally. For full evaluation and leaderboard inclusion, please follow the instructions in SUBMIT.md and submit your solutions to [email protected], [email protected], [email protected], or [email protected].
Questions? Join our Discord
Some problems are adapted from ALE-bench and AI-Driven Research for Systems (ADRS).
If you use Frontier-CS in your research, please cite:
```bibtex
@misc{mang2025frontiercsevolvingchallengesevolving,
  title={FrontierCS: Evolving Challenges for Evolving Intelligence},
  author={Qiuyang Mang and Wenhao Chai and Zhifei Li and Huanzhi Mao and
          Shang Zhou and Alexander Du and Hanchen Li and Shu Liu and
          Edwin Chen and Yichuan Wang and Xieting Chu and Zerui Cheng and
          Yuan Xu and Tian Xia and Zirui Wang and Tianneng Shi and
          Jianzhu Yao and Yilong Zhao and Qizheng Zhang and Charlie Ruan and
          Zeyu Shen and Kaiyuan Liu and Runyuan He and Dong Xing and
          Zerui Li and Zirong Zeng and Yige Jiang and Lufeng Cheng and
          Ziyi Zhao and Youran Sun and Wesley Zheng and Meiyuwang Zhang and
          Ruyi Ji and Xuechang Tu and Zihan Zheng and Zexing Chen and
          Kangyang Zhou and Zhaozi Wang and Jingbang Chen and
          Aleksandra Korolova and Peter Henderson and Pramod Viswanath and
          Vijay Ganesh and Saining Xie and Zhuang Liu and Dawn Song and
          Sewon Min and Ion Stoica and Joseph E. Gonzalez and
          Jingbo Shang and Alvin Cheung},
  year={2025},
  eprint={2512.15699},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2512.15699},
}
```
