GameTheory
A school-friendly autonomous agent competition built around repeated steal-or-share strategy games.
Overview
GameTheory is an in-development competition framework for students to program autonomous agents that play a school-appropriate version of the prisoner's dilemma. Instead of using the usual framing, the game is presented more like a steal-or-share game show. The plan is for each student to write an agent that makes decisions over 50 rounds against an opponent, then enter the full set of agents into a round-robin competition to see who has designed the strongest strategy.
The original spark for the project came from a Veritasium video on game theory and strategy. It immediately felt like something that could work well in a Computer Science competition club, not just because the underlying ideas are interesting, but because it gives students a reason to write code that makes decisions under uncertainty.
Adapting the classic framing into a steal-or-share format keeps the strategic core of the dilemma while making the competition easier to present in a school setting.
The planned structure is for each student to program an autonomous agent to play the game against another agent for 50 rounds. The agent will not get unlimited information. Only specific information will be provided each round through an API, so students will need to think carefully about what signals they can use, what they should remember from previous rounds, and how they want their strategy to adapt.
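Since the API contract is still being designed, a hypothetical agent interface under these constraints might look like the following sketch. The class and method names, and the exact inputs exposed each round, are illustrative assumptions rather than the project's actual API:

```python
class Agent:
    """Base class for a student-written agent.

    This is a hypothetical sketch of the planned API, not the final
    contract: each round the agent sees only the round number and the
    two visible histories, and must return "share" or "steal".
    """

    def decide(self, round_number, opponent_history, my_history):
        raise NotImplementedError


class TitForTat(Agent):
    """Example strategy: share first, then copy the opponent's last move."""

    def decide(self, round_number, opponent_history, my_history):
        if not opponent_history:
            return "share"  # cooperate on the opening round
        return opponent_history[-1]  # mirror whatever the opponent just did
```

A strategy like this illustrates why per-round memory matters: the agent's whole behaviour is driven by state it accumulates across the 50 rounds.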
The first version of the competition is intended to focus on human-written strategies. Once that is working, the second phase would allow students to use AI to generate or assist with their code and then run a much larger combined competition with both human-written and AI-assisted bots. That creates a much more interesting question than just who can win. It lets us see how strategy changes when AI tools enter the loop, and whether any student-designed agent can still outperform the generated ones.
The broader aim is to use competition as a reason to teach APIs, agent design, state tracking, strategic thinking, and experimentation. It is not just about one game. It is about building a framework where student code has to observe, decide, and respond over time.
What inspired the project
Inspired by Veritasium's video on game theory and strategic behaviour, available on YouTube.
Features
Steal-or-Share Format
Reframes the prisoner's dilemma into a more school-appropriate game show style competition without losing the strategic depth.
Autonomous Agents
Students program bots that make decisions on their own each round rather than interacting manually.
50-Round Matches
Each head-to-head game runs across 50 rounds, which rewards strategies that can react, remember, and adapt over time.
Round-Robin Tournament
Every agent plays against every other agent so the final results reflect broader strategic strength rather than a single lucky matchup.
API-Driven Inputs
Only selected information is exposed to each agent each round, pushing students to work with constrained inputs and design better logic.
Human vs AI Phase
A planned second competition mode introduces AI-generated or AI-assisted bots so students can compare human strategy against AI-supported coding.
Tech Stack
- Core language for the competition platform, agent interfaces, and tournament logic
- Frontend for competition management, student submissions, and result visualisation
- Controlled interface that provides only the allowed round information to each autonomous agent
- Match runner for handling 50-round games, round-robin scheduling, and scoring
Architecture
At the centre of the planned project is a match engine that runs repeated games between two submitted agents. Each round, the engine would provide a limited set of inputs through an API contract, receive the agent's action, apply the scoring rules, and record the evolving state of the match.
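The round loop described above can be sketched in code. The payoff values below are assumptions borrowed from the classic dilemma (mutual sharing beats mutual stealing, but a lone thief does best), not the competition's official scoring, and the agent interface is likewise illustrative:

```python
# Illustrative payoff table for one round: (my_action, their_action) -> my points.
# These values are an assumption, not the competition's official scoring rules.
PAYOFF = {
    ("share", "share"): 3,
    ("share", "steal"): 0,
    ("steal", "share"): 5,
    ("steal", "steal"): 1,
}


class AlwaysShare:
    """Trivial example agent used to exercise the engine."""

    def decide(self, round_number, opponent_history, my_history):
        return "share"


def run_match(agent_a, agent_b, rounds=50):
    """Run a repeated steal-or-share match and return cumulative scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for round_number in range(rounds):
        # Each agent sees only the limited inputs the API chooses to expose.
        move_a = agent_a.decide(round_number, history_b, history_a)
        move_b = agent_b.decide(round_number, history_a, history_b)
        # Apply the scoring rules and record the evolving match state.
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b
```

Keeping the engine in control of the loop means agents never see more than the engine passes in, which is what makes the constrained-information design enforceable.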
On top of single matches would sit the tournament layer. This would run a full round robin so every submitted bot plays against every other bot, allowing the platform to produce rankings, comparative results, and strategy patterns across the whole field.
The later AI-enabled phase adds an extra layer of interest without changing the core framework. The same tournament system should be able to evaluate both human-coded and AI-assisted agents, making it possible to compare approaches fairly inside one competition structure.
Data Model
- •Agent submissions with code, metadata, and strategy identity
- •Per-round inputs and decisions for each 50-round head-to-head match
- •Match outcomes, cumulative scores, and tournament rankings
- •Competition modes separating human-written and AI-assisted entries
Challenges
- •Designing the API so students have enough information to build interesting strategies without making the task trivial
- •Keeping the game framing school-appropriate while preserving the strategic tension that makes the original dilemma interesting
- •Ensuring the tournament engine is fair, repeatable, and easy to explain to students
- •Making the second AI-assisted phase educational rather than just a shortcut to generated code
Outcomes
- •Planned as a strong launch activity for a Computer Science competition club
- •Intended to give students a practical reason to work with APIs, autonomous logic, and repeated-game strategy
- •Designed to support a compelling human-vs-AI comparison once the second competition phase is introduced
- •Aims to turn an abstract game theory idea into a concrete programming challenge students can test and iterate on