Advanced Topics in Machine Learning and Game Theory (Fall 2020)

Basic Information

Course Name: Advanced Topics in Machine Learning and Game Theory
Meeting Days, Times, Location: MW at 8:00 am – 9:20 am, Fully Remote
Semester: Fall, Year: 2020
Units: 12, Section(s): 17599 (Undergrad), 17759 (Graduate)

Instructor Information

Name: Dr. Fei Fang
Contact Info: Email:
Office Location: Working Remotely
Office Hours: TBD

Course Description

This course is a graduate-level course covering topics at the intersection of machine learning and game theory. Recent years have witnessed significant advances in machine learning and its successes in detection, prediction, and decision-making problems. However, in many application domains, ranging from auctions and ad bidding, to entertainment games such as Go and Poker, to autonomous driving and traffic routing, to intelligent warehouses, to home assistants and the Internet of Things, there is more than one agent, and the agents interact with each other. Game theory provides a framework for analyzing the strategic interaction between multiple agents and can complement machine learning when dealing with challenges in these domains. Therefore, in this course, we will introduce how to integrate machine learning and game theory to tackle challenges in multi-agent systems. The course will cover multiple topics, as listed below:

  • Basics of Machine Learning and Game Theory
    • Introduction to linear programming
    • Introduction to game theory
    • Introduction to online learning
    • Introduction to reinforcement learning
  • Learning in Games
    • Learning rules in games
    • Learning game parameters
  • Multiagent Reinforcement Learning (MARL)
    • Classical algorithms in MARL
    • Recent advances in MARL
  • Strategic Behavior in Learning
    • Adversarial Machine Learning (AML)
    • Learning from strategic data sources
  • Applications of Machine Learning and Game Theory
    • Learning to combat adversaries for security and sustainability
    • Learning optimal auctions from samples
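
As a small taste of the learning-in-games material above, here is a minimal sketch (not course-provided code) of fictitious play in a two-player zero-sum game: each player repeatedly best-responds to the opponent's empirical action frequencies, and in zero-sum games those frequencies converge to a Nash equilibrium. The payoff matrix (matching pennies) is an illustrative choice.

```python
def fictitious_play(A, T=2000):
    """Fictitious play in a two-player zero-sum game.

    A[i][j] is the payoff to the row player when the row player plays i
    and the column player plays j. Each round, both players best-respond
    to the opponent's empirical action frequencies so far.
    """
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    i, j = 0, 0  # arbitrary initial actions
    for _ in range(T):
        row_counts[i] += 1
        col_counts[j] += 1
        # Row player (maximizer) best-responds to the column's empirical mix.
        i = max(range(m), key=lambda a: sum(A[a][b] * col_counts[b] for b in range(n)))
        # Column player (minimizer) best-responds to the row's empirical mix.
        j = min(range(n), key=lambda b: sum(A[a][b] * row_counts[a] for a in range(m)))
    return [c / T for c in row_counts], [c / T for c in col_counts]

# Matching pennies: empirical frequencies approach the (0.5, 0.5) equilibrium.
x, y = fictitious_play([[1, -1], [-1, 1]])
```

The LP-based methods covered in the first weeks compute this equilibrium exactly; fictitious play recovers it only in the limit of the empirical frequencies.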

The course will be a combination of lectures, class discussions, and student presentations. Students will be evaluated based on their class participation, paper reading assignments, paper presentations, programming assignments, and course projects. We will focus on mathematical foundations with rigorous derivations in class, and students will need to write code for their programming assignments and/or course projects. The course content is designed to have minimal overlap with other AI courses offered at CMU.


Prerequisites include linear algebra, probability, algorithms, and at least one course in artificial intelligence. Familiarity with optimization is a plus but not necessary. Please see the instructor if you are unsure whether your background is suitable for the course.

Learning Objectives

At the end of the course, the students should be able to

  • Describe fundamental theoretical results in learning in games, strategic classification, and multi-agent reinforcement learning
  • Describe and implement classical and recent algorithms at the intersection of machine learning and game theory
  • Describe the applications of techniques integrating machine learning and game theory
  • Deliver a written report on the course project and present the work through an oral presentation

Course Schedule (Subject to Change)

    Last update: 2/27/20

# | Date | Topic | Cover | Slides and References
1 | 8/31 | Intro to Convex Optimization | LP, Convex Optimization, Duality | Applied Mathematical Programming, Chp 2, 4
2 | 9/2 | Intro to Game Theory – Normal Form Games | Equilibrium Concepts (NE, SSE, CE), LP for Equilibrium Computation | Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Chp 3, 4
3 | 9/9 | Incremental Strategy Generation for Computing Equilibrium | Security Games, Column Generation, Constraint Generation, Double Oracle | A Double Oracle Algorithm for Zero-Sum Security Games on Graphs; An Exact Double-Oracle Algorithm for Zero-Sum Extensive-Form Games with Imperfect Information; Double-Oracle Sampling Method for Stackelberg Equilibrium Approximation in General-Sum Extensive-Form Games
4 | 9/14 | Intro to Game Theory – Extensive Form Games and Beyond | Equilibrium Concepts and Refinement, Sequence Form Representation, LP for Equilibrium Computation | Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Chp 5, 6
5 | 9/16 | Intro to Online Learning | Online Convex Optimization, Online Classification, Regret Analysis, Follow-the-Leader, Follow-the-Regularized-Leader, Online Mirror Descent | Online Learning and Online Convex Optimization, Chp 1-3
6 | 9/21 | No-Regret Learning Rules in Games | (Smooth) Fictitious Play, Regret Matching, Counterfactual Regret Minimization | Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Chp 7; Regret Minimization in Games with Incomplete Information
7 | 9/23 | Learning to Play Large Games with Imperfect Information | Poker AI, DeepStack, Libratus | DeepStack: Expert-level artificial intelligence in heads-up no-limit poker; Superhuman AI for heads-up no-limit poker: Libratus beats top professionals
8 | 9/28 | Learning Game Parameters | Inverse Game Theory, Learning payoff in games, Quantal Response Equilibrium | Learning Payoff Functions in Infinite Games; What Game Are We Playing? End-to-end Learning in Normal and Extensive Form Games
9 | 9/30 | Intro to Reinforcement Learning (RL) | MDP, Q-Learning, Policy Gradient | RL Course by David Silver
10 | 10/5 | Classical Algorithms for Multi-Agent Reinforcement Learning (MARL) – 1 | Markov Game, Minimax-Q, Nash-Q | Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Chp 7; An Analysis of Stochastic Game Theory for Multiagent Reinforcement Learning
11 | 10/7 | Classical Algorithms for MARL – 2 | | Hierarchical Multi-Agent Reinforcement Learning; Value-function reinforcement learning in Markov Games; Multiagent learning using a variable learning rate
12 | 10/12 | Learning from Self-Play with Individual RL | PPO, OpenAI Five for Dota 2 | Proximal Policy Optimization; OpenAI Five
13 | 10/14 | Learning to Play Large Games with Perfect Information | Go AI, AlphaGo, AlphaZero | Mastering the game of go without human knowledge; Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm
14 | 10/19 | Deep Learning and Fictitious Play in MARL | NFSP, DeepFP | Deep Reinforcement Learning from Self-Play in Imperfect-Information Games; DeepFP for Finding Nash Equilibrium in Continuous Action Spaces
15 | 10/21 | Double Oracle in Complex Games | PSRO, DeDOL | A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning; Deep Reinforcement Learning for Green Security Games with Real-Time Information
16 | 10/26 | Multi-Agent Policy Gradient | MADDPG, M3DDPG, COMA | Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments; Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient; Counterfactual Multi-Agent Policy Gradients
17 | 10/28 | MARL and Social Dilemma | LOLA | Multi-agent Reinforcement Learning in Sequential Social Dilemmas; Learning with opponent-learning awareness
18 | 11/2 | Learning Parameters in MARL | Inverse RL in Multiagent Setting | Multi-Agent Adversarial Inverse Reinforcement Learning
19 | 11/4 | Adversarial Examples in Machine Learning | Fast gradient sign method | Explaining and harnessing adversarial examples
20 | 11/9 | Effective White-Box Evasion Attack | White-box Attack | Towards evaluating the robustness of neural networks; Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
21 | 11/11 | Effective Black-Box Evasion Attack | Black-box Attack | Practical black-box attacks against machine learning
22 | 11/16 | Defense Against Evasion Attack | Defensive distillation, Robust optimization | Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks; Towards Deep Learning Models Resistant to Adversarial Attacks
23 | 11/18 | Strategic Classification | Strategic classification | Strategic Classification
24 | 11/23 | Learning with Strategic Agents | Induce agents' efforts | How Do Classifiers Induce Agents to Invest Effort Strategically?
25 | 11/30 | Learning to Combat Adversaries for Security and Sustainability | PAWS | Deploying PAWS: Field Optimization of the Protection Assistant for Wildlife Security
26 | 12/2 | Learning Optimal Auctions from Samples | RL for Auction | Reinforcement mechanism design, with applications to dynamic pricing in sponsored search auctions
27 | 12/7 | Course Project Presentation – 1 | |
28 | 12/9 | Course Project Presentation – 2 | |

Learning Resources

No formal textbook. References and additional resources will be provided in slides and on Canvas.


Assessments

The final course grade will be calculated using the following categories:

Assessment | Points (out of 100)
Class Participation | 10
Paper Reading Assignment | 15
Paper Presentation | 15
Programming Assignments | 20
Course Project | 40
  • Class Participation. The grading of class participation will be based mostly on attendance, checked through in-class polls, and on asking and answering questions in class. Other factors include asking and answering questions on Piazza.
  • Paper Reading Assignment. The course will require all students to complete 5 paper reading assignments individually and provide reading summaries.
  • Paper Presentation. Each student will be asked to present 2 papers in class.
  • Programming Assignments. The course will have 2 programming assignments: one on designing an agent to play in a multi-agent particle environment, and one on defending and attacking an image classifier.
  • Course Project. The students will work in small groups (1-3 students in each group) on a course project related to machine learning and game theory. The students are required to submit a project report through Canvas and deliver an oral or poster presentation. The progress of projects will be checked through the Project Proposal, Project Progress Report, Project Presentation, and Final Project Report. The proposal and progress report will be peer-reviewed. The presentation and the final report will be evaluated by the instructor and TA directly. The final report will get a full score if it is at the same level as accepted papers at top AI conferences. For all the reports, students should use NeurIPS 2020 Style File. For graduate-level students, novelty is an important factor in the evaluation of the course project.

Students will be assigned final letter grades according to the following table.

Grade | Range of Points
A | [90, 100]: A- [90, 93), A [93, 97), A+ [97, 100]
B | [80, 90): B- [80, 83), B [83, 87), B+ [87, 90)
C | [70, 80): C- [70, 73), C [73, 77), C+ [77, 80)
D | [60, 70): D [60, 67), D+ [67, 70)
R (F) | [0, 60)
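
Read as a lookup from the highest cutoff down, the table translates directly into code. This is an illustrative sketch, not an official grading script:

```python
def letter_grade(points):
    """Map a final point total in [0, 100] to a letter grade per the table above."""
    cutoffs = [
        (97, "A+"), (93, "A"), (90, "A-"),
        (87, "B+"), (83, "B"), (80, "B-"),
        (77, "C+"), (73, "C"), (70, "C-"),
        (67, "D+"), (60, "D"),
    ]
    for cutoff, grade in cutoffs:
        if points >= cutoff:
            return grade
    return "R"  # below the D range

print(letter_grade(93))  # "A"
```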

Grading Policies

  • Late-work policy and Make-up work policy: All late submissions within a week of the due date will be weighted by 0.7. Submissions after one week of the due date will not be considered.
  • Re-grade policy: To request a re-grade, the student needs to write an email to the instructor titled “Re-grade request from [Student’s Full Name]” within one week of receiving the graded assignment.
  • Attendance and participation policy: Attendance and participation will be a graded component of the course. The grading of class participation will be based mostly on attendance, checked through in-class polls, and on asking and answering questions in class. Other factors include asking and answering questions on Canvas.
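
As a worked example of the late-work weighting above (an illustrative sketch, with lateness measured in days):

```python
def effective_score(raw_score, days_late):
    """Apply the late policy: full credit on time, weighted by 0.7 within
    one week of the due date, and no credit after one week."""
    if days_late <= 0:
        return raw_score
    if days_late <= 7:
        return 0.7 * raw_score
    return 0.0

# e.g., an assignment scored 18/20 but submitted two days late
# is worth 18 * 0.7 = 12.6 points.
```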

Course Policies

  • Academic Integrity & Collaboration: For paper reading assignments, students may discuss with other students, but they must list the names of the students they discussed with in the submission and complete the summary on their own. For the course project, students may discuss and collaborate with others (including students and faculty members), but they must give proper credit to everyone involved and report the contributions of each group member in the final report and presentations, which will be considered in the grading. Publicly available code packages may be used, but their sources must be specified in the submission. Plagiarism is not allowed. This policy is motivated by the CMU policy on academic integrity, which can be found here.
  • Mobile Devices: Mobile devices are allowed in class. Cellphones should be in silent mode. Students using laptops, or tablets in an upright position, will be asked to sit in the back rows of the classroom.
  • Accommodations for students with disabilities: If you have a disability and require accommodations, please contact Catherine Getchell, Director of Disability Resources, 412-268-6121. If you have an accommodations letter from the Disability Resources office, I encourage you to discuss your accommodations and needs with me as early in the semester as possible. I will work with you to ensure that accommodations are provided as appropriate.
  • Statement on student wellness: As a student, you may experience a range of challenges that can interfere with learning, such as strained relationships, increased anxiety, substance use, feeling down, difficulty concentrating and/or lack of motivation. These mental health concerns or stressful events may diminish your academic performance and/or reduce your ability to participate in daily activities. CMU services are available, and treatment does work. You can learn more about confidential mental health services available on campus here. Support is always available (24/7) from Counseling and Psychological Services: 412-268-2922.