Advanced Topics in Machine Learning and Game Theory (Fall 2022)

Basic Information

Course Name: Advanced Topics in Machine Learning and Game Theory
Meeting Days, Times: MW at 10:10 a.m. – 11:30 a.m.
Location: A18A Porter Hall
Semester: Fall, Year: 2022
Units: 12, Section(s): 17599 (Undergrad), 17759 (Graduate)

Instructor Information

Name: Dr. Fei Fang
Contact Info: Email:
Office hours: Tue 3:00 p.m. – 4:00 p.m.; Thu 3:30 p.m. – 4:30 p.m.
Office hour location: TCS 321 or Zoom. Make an appointment through Calendly to secure a slot (see announcement).

TA Information

Name: Steven Jecmen
Contact Info: Email:
Office hours: TBD
Office hour location: TBD

Course Description

This course is a graduate-level course covering topics at the intersection of machine learning and game theory. Recent years have witnessed significant advances in machine learning and its successes in detection, prediction, and decision-making problems. However, in many application domains, ranging from auctions and ad bidding, to entertainment games such as Go and Poker, to autonomous driving and traffic routing, to intelligent warehouses, to home assistants and the Internet of Things, multiple agents interact with one another. Game theory provides a framework for analyzing the strategic interaction between multiple agents and can complement machine learning when dealing with challenges in these domains. Therefore, in this course, we will introduce how to integrate machine learning and game theory to tackle challenges in multi-agent systems. The course will cover the topics listed below:

  • Basics of Machine Learning and Game Theory
    • Introduction to convex optimization, game theory, online learning, reinforcement learning
  • Learning in Games
    • Learning rules in games
    • Learning game parameters
  • Multiagent Reinforcement Learning (MARL)
    • Classical algorithms in MARL
    • Recent advances in MARL
  • Strategic Behavior in Learning
    • Adversarial Machine Learning (AML)
    • Learning from strategic data sources
  • Applications of Machine Learning and Game Theory
    • Security and sustainability, Transportation

The course will be a combination of lectures, class discussions, and student presentations. Students will be evaluated based on their class participation, paper reading assignments, paper presentations, programming assignments, and course projects. We will focus on mathematical foundations with rigorous derivations in class, and students will need to write code in their programming assignments and/or course projects. The course content is designed to have little overlap with other AI courses offered at CMU.


Prerequisites include linear algebra, probability, algorithms, and at least one course in artificial intelligence. Familiarity with optimization is a plus but not necessary. Please see the instructor if you are unsure whether your background is suitable for the course.

Learning Objectives

By the end of the course, students should be able to:

  • Describe fundamental theoretical results in learning in games, strategic classification, and multi-agent reinforcement learning
  • Describe and implement classical and recent algorithms at the intersection of machine learning and game theory
  • Describe the applications of techniques integrating machine learning and game theory
  • Deliver a report on the course project and present the work in an oral presentation

Course Schedule (Subject to Change)

    Last update: 8/27/22

# | Date | Topic | Topics Covered | Slides and References
1 | 8/29 | Intro to Convex Optimization | Convex Optimization, Linear Programming | Applied Mathematical Programming, Chp 2,4
2 | 8/31 | Intro to Game Theory | Normal-form and extensive-form games, Equilibrium Concepts (NE, SSE, CE), LP for Equilibrium Computation | Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Chp 3,4,5,6
3 | 9/7 | Incremental Strategy Generation for Computing Equilibrium | Security Games, Column Generation, Constraint Generation, Double Oracle | A Double Oracle Algorithm for Zero-Sum Security Games on Graphs; An Exact Double-Oracle Algorithm for Zero-Sum Extensive-Form Games with Imperfect Information; Double-oracle sampling method for Stackelberg Equilibrium approximation in general-sum extensive-form games; Security games with arbitrary schedules: A branch and price approach
4 | 9/12 | Learning Game Parameters | Subjective utility quantal response, Inverse Game Theory, Learning payoffs in games, Quantal Response Equilibrium | Improving resource allocation strategies against human adversaries in security games: An extended study; Analyzing the effectiveness of adversary modeling in security games; Learning Payoff Functions in Infinite Games; What Game Are We Playing? End-to-end Learning in Normal and Extensive Form Games
5 | 9/14 | Intro to Reinforcement Learning (RL) | MDP, Q-Learning, Policy Gradient | RL Course by David Silver
6 | 9/19 | Classical Algorithms for Multi-Agent Reinforcement Learning (MARL) | Minimax-Q, Nash-Q, Team-Q, WoLF | Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Chp 7; An Analysis of Stochastic Game Theory for Multiagent Reinforcement Learning; Value-function reinforcement learning in Markov Games; Multiagent learning using a variable learning rate
7 | 9/21 | Learning to Play in a Multiagent Environment with Individual RL | Markov Games, PPO, OpenAI Five for Dota 2 | Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Chp 7; Proximal Policy Optimization; OpenAI Five
8 | 9/26 | Multi-Agent Policy Gradient | MADDPG, COMA, LOLA | Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments; Counterfactual Multi-Agent Policy Gradients; Learning with Opponent-Learning Awareness
9 | 9/28 | Value Function Factorization in MARL | VDN, QMIX, QTRAN | Value-Decomposition Networks For Cooperative Multi-Agent Learning; QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning; QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning
10 | 10/3 | Curriculum Learning and Population-Based Training in MARL | Curriculum learning, Population-based training | Evolutionary population curriculum for scaling multi-agent reinforcement learning; Emergent Tool Use From Multi-Agent Autocurricula; Human-level performance in 3D multiplayer games with population-based reinforcement learning
11 | 10/5 | Intro to Online Learning | Online Convex Optimization, Online Classification, Regret Analysis, Follow-the-Leader, Follow-the-Regularized-Leader, Online Mirror Descent | Online Learning and Online Convex Optimization, Chp 1-3
12 | 10/10 | No-Regret Learning Rules in Games | (Smooth) Fictitious Play, Regret Matching, Counterfactual Regret Minimization | Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Chp 7; Regret Minimization in Games with Incomplete Information
13 | 10/12 | Fictitious Play in Complex Games | XFP, NFSP, DeepFP | Fictitious Self-Play in Extensive-Form Games; Deep Reinforcement Learning from Self-Play in Imperfect-Information Games; DeepFP for Finding Nash Equilibrium in Continuous Action Spaces
14 | 10/24 | Learning to Play Large Games with Imperfect Information | Poker AI, DeepStack, Libratus | DeepStack: Expert-level artificial intelligence in heads-up no-limit poker; Superhuman AI for heads-up no-limit poker: Libratus beats top professionals; Superhuman AI for multiplayer poker; Monte Carlo sampling for regret minimization in extensive games
15 | 10/26 | Double Oracle and League Training in MARL | PSRO, DeDOL, League Training | A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning; Deep Reinforcement Learning for Green Security Games with Real-Time Information; Grandmaster level in StarCraft II using multi-agent reinforcement learning
16 | 10/31 | Learning to Play Large Zero-Sum Games with Perfect Information | MCTS, AlphaGo, AlphaZero | Mastering the game of Go without human knowledge; Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm
17 | 11/2 | Applications of MARL | Fleet management, Traffic signal control | Efficient large-scale fleet management via multi-agent deep reinforcement learning; Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control
18 | 11/7 | Introduction to Adversarial Machine Learning | Fast gradient sign method | Explaining and harnessing adversarial examples
19 | 11/9 | Effective White-Box Evasion Attack | White-box attacks | Towards evaluating the robustness of neural networks; Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images; Adversarial examples in the physical world
20 | 11/14 | Effective Black-Box Evasion Attack | Black-box attacks | Practical black-box attacks against machine learning; One pixel attack for fooling deep neural networks; Transferability in machine learning: from phenomena to black-box attacks using adversarial samples
21 | 11/16 | Defense Against Evasion Attack | Defensive distillation, Ensemble adversarial training | Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks; Ensemble adversarial training: Attacks and defenses; Towards Deep Learning Models Resistant to Adversarial Attacks
22 | 11/21 | Defense with Theoretical Guarantees | TRADES, Convex outer approximation, Randomized smoothing | Provable defenses against adversarial examples via the convex outer adversarial polytope; Theoretically Principled Trade-off between Robustness and Accuracy; Certified Adversarial Robustness via Randomized Smoothing
23 | 11/28 | Strategic Classification | Strategic classification | Strategic Classification; Strategic Classification from Revealed Preferences; Actionable Recourse in Linear Classification
24 | 11/30 | Learning with Strategic Agents | Inducing agents' effort, Social cost of countering strategic behavior | The Social Cost of Strategic Classification; How Do Classifiers Induce Agents to Invest Effort Strategically?
25 | 12/5 | Course Project Presentation – 1
26 | 12/7 | Course Project Presentation – 2

Learning Resources

No formal textbook. References and additional resources will be provided in slides and on Canvas.


Assessments

The final course grade will be calculated using the following categories:

Assessment | Points (out of 100)
Class Participation | 10
Paper Reading Assignments | 10
Paper Presentation | 10
Programming Assignments | 30
Course Project | 40
  • Class Participation. Class participation will be graded mostly on attendance, checked by in-class polls, and on asking and answering questions in class. Other factors include asking and answering questions on Piazza.
  • Paper Reading Assignment. The course will require all students to complete 5 paper reading assignments individually and provide reading summaries.
  • Paper Presentation. Each student will be asked to present 1–2 papers in class.
  • Programming Assignments. The course will have 3 programming assignments: two on multi-agent reinforcement learning and one on adversarial machine learning.
  • Course Project. Students will work in small groups (1–3 students per group) on a course project related to machine learning and game theory. Students are required to submit a project report through Canvas and deliver an oral or poster presentation. Progress will be checked through the Project Proposal, Project Progress Report, Project Presentation, and Final Project Report. The proposal and progress report will be peer-reviewed; the presentation and final report will be evaluated directly by the instructor and TA. The final report will receive a full score if it is at the level of accepted papers at top AI conferences. All reports should use the AAAI format.

Students will be assigned final letter grades according to the following table.

Grade | Range of Points
A | [90,100] (A-: [90,93), A: [93,97), A+: [97,100])
B | [80,90) (B-: [80,83), B: [83,87), B+: [87,90))
C | [70,80) (C-: [70,73), C: [73,77), C+: [77,80))
D | [60,70) (D: [60,67), D+: [67,70))
R (F) | [0,60)
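For concreteness, the table above can be read as a simple threshold lookup. The sketch below is a hypothetical illustration (the helper `letter_grade` is not part of any course infrastructure), assuming the final total is a number of points in [0, 100]:

```python
def letter_grade(points: float) -> str:
    """Map a final point total to a letter grade per the table above."""
    # Cutoffs follow the syllabus bands, e.g. A-: [90,93), A: [93,97), A+: [97,100].
    bands = [
        (97, "A+"), (93, "A"), (90, "A-"),
        (87, "B+"), (83, "B"), (80, "B-"),
        (77, "C+"), (73, "C"), (70, "C-"),
        (67, "D+"), (60, "D"),
    ]
    for cutoff, grade in bands:
        if points >= cutoff:
            return grade
    return "R"  # below 60
```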

Grading Policies

  • Late-work policy and Make-up work policy: All late submissions within a week of the due date will be weighted by 0.7. Submissions after one week of the due date will not be considered.
  • Re-grade policy: To request a re-grade, the student needs to write an email to the instructor titled “Re-grade request from [Student’s Full Name]” within one week of receiving the graded assignment.
  • Attendance and participation policy: Attendance and participation will be a graded component of the course. The grading of the class participation will be mostly based on attendance, checked by in-class polls and asking and answering questions in class. Other factors include asking and answering questions on Canvas.
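The late-work weighting can be summarized as follows. This is a hypothetical sketch of the policy stated above (the function name and the day-based `days_late` parameter are illustrative assumptions, not official course tooling):

```python
def effective_score(raw_score: float, days_late: int) -> float:
    """Apply the late-work policy: full credit on time, a 0.7 weight
    within one week of the due date, and no credit after one week."""
    if days_late <= 0:
        return raw_score          # on-time submission
    if days_late <= 7:
        return 0.7 * raw_score    # late, but within a week
    return 0.0                    # more than a week late: not considered
```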

Course Policies

  • Academic Integrity & Collaboration: For paper reading assignments, students may discuss with other students, but they must list the names of the students they discussed with in the submission and complete the summary on their own. For the course project, students may discuss and collaborate with others (including students and faculty members), but they must give proper credit to everyone involved and report the contributions of each group member in the final report and presentations, which will be considered in the grading. Publicly available code packages may be used, but the source of each code package must be specified in the submission. Plagiarism is not allowed. This policy is motivated by the CMU policy on academic integrity, which can be found here.
  • Mobile Devices: Mobile devices are allowed in class, but cellphones should be in silent mode. Students who use a tablet in an upright position or a laptop will be asked to sit in the back rows of the classroom.
  • Accommodations for students with disabilities: If you have a disability and require accommodations, please contact Catherine Getchell, Director of Disability Resources, 412-268-6121. If you have an accommodations letter from the Disability Resources office, I encourage you to discuss your accommodations and needs with me as early in the semester as possible. I will work with you to ensure that accommodations are provided as appropriate.
  • Statement on student wellness: As a student, you may experience a range of challenges that can interfere with learning, such as strained relationships, increased anxiety, substance use, feeling down, difficulty concentrating and/or lack of motivation. These mental health concerns or stressful events may diminish your academic performance and/or reduce your ability to participate in daily activities. CMU services are available, and treatment does work. You can learn more about confidential mental health services available on campus here. Support is always available (24/7) from Counseling and Psychological Services: 412-268-2922.
  • Classroom Expectations related to COVID-19: In order to attend class meetings in person, all students are expected to abide by all behaviors indicated in A Tartan’s Responsibility, including any timely updates based on the current conditions. In terms of specific classroom expectations, whenever the requirement to wear a facial covering is in effect on campus, students are expected to wear a facial covering throughout class. Note: the requirement to wear a facial covering is in effect for the start of the Fall 2021 semester. If you do not wear a facial covering to class, I will ask you to put one on (and if you don’t have one with you, I will direct you to a distribution location on campus). If you do not comply, you will be referred to the Office of Community Standards and Integrity for follow-up, which could include student conduct action. Finally, please note that sanitizing wipes should be available in our classroom for those who wish to use them.