## February 04, 2016

### Vince Knight

#### Playing against a mixed strategy in class

This post mirrors this post from last year in which I described how my students and I played against various mixed strategies in a modified version of matching pennies.

This is the game we played:

I wrote a sage interact that allows for a quick visualisation of a random sample from a mixed strategy.

I handed out sheets of paper on which students would input their preferred strategies (‘H’ or ‘T’) whilst I sampled randomly from 3 different mixed strategies:

1. $\sigma_1 = (.2, .8)$
2. $\sigma_1 = (.9, .1)$
3. $\sigma_1 = (1/3, 2/3)$

Based on the class notation, this implies that the computer was the row player and the students the column player. The sampled strategies were (we played 6 rounds for each mixed strategy):

1. TTHTTT
2. HHHTHT
3. TTTTTH
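
As a minimal sketch (not the sage interact itself), sampling such a sequence from a mixed strategy could look like this in Python; the function name and the seeding are my own choices:

```python
import random

def sample_plays(sigma, n, seed=None):
    """Sample n plays ('H' or 'T') from the mixed strategy sigma = (P(H), P(T))."""
    rng = random.Random(seed)
    return ''.join('H' if rng.random() < sigma[0] else 'T' for _ in range(n))

# e.g. six rounds against sigma_1 = (.2, .8)
plays = sample_plays((0.2, 0.8), 6, seed=0)
```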

### Round 1

This mixed strategy (recall $\sigma_1=(.2,.8)$) implies that the computer will mainly be playing T (the second strategy, equivalent to the second row), and so based on the bi-matrix it is in the students’ interest to play H. Here is a plot of the mixed strategy played by all the students:

The mixed strategy played was $\sigma_2=(.54,.46)$. Note that in this particular instance the actual best response is to play $\sigma_2=(1,0)$. This will indeed maximise the expected value of:

Indeed: the above is an increasing linear function in $x$ so the highest value is obtained when $x=1$.

The mean score for this round by everyone was: 1.695. The theoretical mean score (when playing the best response for six consecutive games) is $6(-.2\times 2+.8)=2.4$, so (compared to last year) this was quite low.
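
Reading the per-round payoffs for H off that formula ($-2$ against the computer’s H, $+1$ against its T — an assumption based only on the expression above, as the bi-matrix is not reproduced here), the theoretical mean can be checked quickly:

```python
sigma_1 = (0.2, 0.8)
payoffs_H = (-2, 1)  # assumed column-player payoffs for H vs the computer's H/T
per_round = sum(p * u for p, u in zip(sigma_1, payoffs_H))
total = 6 * per_round  # equals 2.4 (up to floating point)
```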

Here is a distribution of the scores:

We see that a fair number of students lost, but one student did get the highest possible score (7).

### Round 2

Here the mixed strategy is $\sigma_1=(.9,.1)$, implying that students should play T more often than H. Here is a plot of the mixed strategy played by all the students:

The mixed strategy played was $\sigma_2=(0.329,0.671)$. Similarly to before, this is not terribly close to the actual best response, which is $(0,1)$ (due to the expected utility now being a decreasing linear function in $x$).

Here is a distribution of the scores:

We see that some still managed to lose this round, but overall there were mainly winners.

### Round 3

Here is where things get interesting. The mixed strategy played by the computer is now $\sigma_1=(1/3,2/3)$; it is no longer obvious which strategy is worth going for!

Here is the distribution played:

The mixed strategy played was $\sigma_2=(0.58,0.42)$ and the mean score was 1.11. Here is what the distribution looked like:

It looks like we have a few more losers than winners, but not by much.

## February 01, 2016

### Vince Knight

#### Introducing Game Theory to my class

Here is a blog post that mirrors this post from two years ago and this post from last year.

As always, I will be using my blog to extend the class meetings my Game Theory class and I have.

Here are the results of the strategies played during the first game of matching pennies (see the previously mentioned posts for details):

We see that overall everyone seems to be playing randomly. After that we played a modified version of the game (the row player has more to win by playing heads):

We now see that both players actually play heads less. This is perhaps easier to understand from the column player’s point of view. I will leave an explanation as to what the green and yellow lines represent for a little longer…

The main point of this is to make sure that everyone understands the normal form game convention (by breaking the symmetry) and also to make it slightly more interesting (the row player now has more to win/lose by playing Heads).

Looking forward to the next class meeting where we will be doing similar things as we continue to understand the basics of normal form game representations.

## January 27, 2016

### Vince Knight

#### University of Namibia Mathematics Summer School

I am writing this post just after two extraordinary weeks in Namibia. This is a quick personal reflection on what has been an awesome experience.

As part of Cardiff University’s Phoenix Project, Martin Mugochi, Rob Wilson and I, with Cardiff PhD students Geraint Palmer and Alex MacKay, worked with the University of Namibia’s faculty of Mathematics to deliver a two week summer school. The goal of this joint effort with the University of Namibia was to provide a positive experience of mathematics.
As Rob said: “We want them to concentrate on what they can do rather than what they cannot do.”

The first week involved inquiry based sessions on both mathematical topics (such as Algebra and Geometry) as well as wider skills (such as presenting and reading mathematics). This side of the course mainly involved students working through activities and presenting them to the class, which led to discussion (and, in an IBL way, ultimately confirmation/verification of the conclusions).

This is the point at which we (I’m sure Rob, Alex and Geraint would agree) must say that the students were awesome. Eager to learn, open to the novel pedagogic ideas, a real pleasure to work with. Here are some photos of the first week (the only local ones as I write this are from the group Alex and I took, but things were pretty much the same in Rob and Geraint’s group):

The second week had students work in groups on a variety of projects such as:

• Mathematical paradoxes,
• Patterns in Pascal’s triangle,
• History of mathematics.

From the intensity of the first week, this led to a stark contrast in which students came to us for support, so we were not in direct contact with all the students all the time.

The culmination of the whole school was a 2 hour closing ceremony in which students presented their posters. As we hadn’t seen all the groups, we were slightly worried that this might fall flat on its face, but we were very wrong: it was such a delight to see each and every group turn up to put their awesome poster on the wall. Here are some of the posters:

Without any nudge on our part, students started walking around and learning from each other’s posters (I am still smiling about this now). This was followed by students giving 5 minute presentations, and closing remarks from various UNAM officials, Martin Mugochi (head of the mathematics department) and ourselves.
One of my most pleasant memories (of which there are too many to mention) is what happened just after that: we (students and us) came together to thank each other for our efforts and get photos.

This was such a great experience. It was fantastic to work with and become good friends with Martin, get to know the students (seeing the benefits of active pedagogic methodologies) and spend two great weeks with Geraint, Alex and Rob. This is just one of many Phoenix Project projects and it’s great to be involved.

Now I need to put this laptop down, get a good night’s sleep and spend tomorrow working on final details for PyCon Namibia.

## January 15, 2016

### William Stein

#### Thinking of using SageMathCloud in a college course?

## SageMathCloud course subscriptions

"We are college instructors of the calculus sequence and ODE’s. If the college were to purchase one of the upgrades for us as we use Sage with our students, who gets the benefits of the upgrade? Is it the individual students that are in an instructor’s Sage classroom, or is it the collaborators on an instructor’s project?"

If you were to purchase just the $7/month plan and apply the upgrades to *one* single project, then all collaborators on that one project would benefit from those upgrades while using that project.

If you were to purchase a course plan for say $399/semester, then you could apply the upgrades (network access and members only hosting) to 70 projects that you might create for a course.

When you create a course by clicking +New, then "Manage a Course", then add students, each student has their own project created automatically. All instructors (anybody who is a collaborator on the project where you clicked "Manage a course") are also added to the student's project. In course settings you can easily apply the upgrades you purchase to all projects in the course.
Also, I'm currently working on a new feature where instructors may choose to require all students in their course to pay for the upgrade themselves. There's a one-time $9/course fee paid by the student and that's it. At some colleges (in some places) this is ideal, and at other places it's not an option at all. I anticipate releasing this very soon.

## Getting started with SageMathCloud courses

You can fully use the SMC course functionality without paying anything in order to get familiar with it and test it out. The main benefit of paying is that you get network access and all projects get moved to members-only servers, which are much more robust; also, we greatly prioritize support for paying customers.

This blog post is an overview of using SMC courses: http://www.beezers.org/blog/bb/2015/09/grading-in-sagemathcloud/

This has some screenshots and the second half is about courses: http://blog.ouseful.info/2015/11/24/course-management-and-collaborative-jupyter-notebooks-via-sagemathcloud/

Here are some video tutorials made by an instructor that used SMC with a large class in Iceland recently:

https://www.youtube.com/watch?v=dgTi11ZS3fQ
https://www.youtube.com/watch?v=nkSdOVE2W0A
https://www.youtube.com/watch?v=0qrhZQ4rjjg

Note that the above videos show the basics of courses, then talk specifically about automated grading of Jupyter notebooks. That might not be at all what you want to do -- many math courses use Sage worksheets, and probably don't automate the grading yet.

Regarding using Sage itself for teaching your courses, check out the free pdf book "Sage for Undergraduates", which the American Mathematical Society just published (there is also a very nice print version for about $23): http://www.gregorybard.com/SAGE.html

## January 08, 2016

### William Stein

#### Mathematics Graduate School: preparation for non-academic employment

This is about my personal experience as a mathematics professor whose students all have non-academic jobs that they love.
This is in preparation for a panel at the Joint Mathematics Meetings in Seattle.

## My students and industry

My graduated Ph.D. students:

• 3 at Google
• 1 at Facebook
• 1 at CCR

My graduating student (Hao Chen):

• Applying for many postdocs
• But just did a summer internship at Microsoft Research with Kristin. (I’ve had four students do summer internships with Kristin.)

All my students:

• Have done a lot of software development, maybe having little to do with math, e.g., “developing the Cython compiler”, “transitioning the entire Sage project to git”, etc.
• Did a thesis squarely in number theory, with significant theoretical content.
• Felt guilt (or guilty pleasure?) spending time on some programming tasks instead of doing what they are “supposed” to do as math grad students.

## Me: academia and industry

• Math Ph.D. from Berkeley in 2000; many students of my advisor (Lenstra) went to work at CCR after graduating…
• Academia: I’m a tenured math professor (since 2005) – number theory.
• Industry: I founded a Delaware C Corp (SageMath, Inc.) one year ago to “commercialize Sage” due to VERY intense frustration trying to get grant funding for Sage development. Things have got so bad, with so many painful stupid missed opportunities over so many years, that I’ve given up on academia as a place to build Sage.

Reality check: academia values basic research, not products. Industry builds concrete valuable products. Not understanding this is a recipe for pain (at least it has been for me).

## Advice for students from students

### Robert Miller (Google)

My student Robert Miller’s post on Facebook yesterday: “I LOVE MY JOB”. Why: “Today I gave the first talk in a seminar I organized to discuss this result: ‘Graph Isomorphism in Quasipolynomial Time’. Dozens of people showed up, it was awesome!”

Background: when he was my number theory student, working on elliptic curves, he gave a talk about graph theory in Sage at a Sage Days (at IPAM).
His interest there was mainly in helping an undergrad (Emily Kirkman) with a Sage dev project I hired her to work on. David Harvey asked: “what’s so hard about implementing graph isomorphism?”, and Robert wanted to find out, so he spent months doing a full implementation of Brendan McKay’s algorithm (the only other one). This had absolutely nothing to do with his Ph.D. thesis work on the Birch and Swinnerton-Dyer conjecture, but I was very supportive.

### Craig Citro (Google)

Craig Citro did a Ph.D. in number theory (with Hida), but also worked on Sage a lot as a grad student and postdoc. He’s done a lot of hiring at Google. He says:

“My main piece of advice to potential Google applicants is ‘start writing as much code as you can, right now.’ Find out whether you’d actually enjoy working for a company like Google, where a large chunk of your job may be coding in front of a screen. I’ve had several friends from math discover (the hard way) that they don’t really enjoy full-time programming (any more than they enjoy full-time teaching?).”

“Start throwing things on GitHub now. Potential interviewers are going to check out your GitHub profile; having some cool stuff at the top is great, but seeing a regular stream of commits is also a useful signal.”

### Robert Bradshaw (Google)

“A lot of mathematicians are good at (and enjoy) programming. Many of them aren’t (and don’t). Find out. Being involved in Sage is significantly more than just having taken a suite of programming courses or hacking personal scripts on your own: code reviews, managing bugs, testing, large-scale design, working with others’ code, seeing projects through to completion, and collaborating with others, local and remote, on large, technical projects are all important. It demonstrates your passion.”

### Rado Kirov (Google)

Robert Bradshaw said it before me, but I have to repeat.
“Large scale software development requires exposure to a lot of tooling and process beyond just writing code - version control, code reviews, bug tracking, code maintenance, release process, coordinating with collaborators. Contributing to an active open-source project with a large number of contributors, like Sage, is a great way to experience all that and see if you would like to make it your profession. A lot of mathematicians write clever code for their research, but if fewer than 10 people see it and use it, it is not a realistic representation of what working as a software engineer feels like. The software industry is in large demand of developers and hiring straight from academia is very common. Before I got hired by Google, the only software development experience on my resume was the Sage graph editor. Along with a solid understanding of algorithms and data structures, that was enough to get in.”

### David Moulton (Google)

“Google hires mathematicians now as quantitative analysts = data engineers. Google is very flexible for a tech company about the backgrounds of its employees. We have a long-standing reading group on category theory, and we’re about to start one on Babai’s recent quasi-polynomial-time algorithm for graph isomorphism. And we have a math discussion group with lots of interesting math on it.”

## My advice for math professors

Obviously, encourage your students to get involved in open source projects like Sage, even if it appears to be a waste of time or distraction from their thesis work (this will likely feel very counterintuitive and you’ll hate it).

At Univ of Washington, a few years ago I taught a graduate-level course on Sage development. The department then refused to run it again as a grad course, which was frankly very frustrating to me. This is exactly the wrong thing to do if you want to increase the options of your Ph.D. students for industry jobs.
Maybe quit trying to train our students to be only math professors, and instead give them a much wider range of options.

## December 15, 2015

### Vince Knight

#### The Dilemma of giving Christmas Gifts

This post gives a game theoretic explanation as to why we exchange gifts.

On twitter @alexip tweeted “‘Let’s agree not to give each other presents for Christmas’ is just another case of the prisoner’s dilemma #gametheory”. This post builds on that and investigates the premise fully in an evolutionary context, investigating different values of how good it feels to give and receive a gift :)

To illustrate this consider the situation where Alex and Camille are approaching Christmas:

Alex: How about we don’t buy Christmas presents for each other this year?

Camille: Sounds great.

Let us describe how this situation corresponds to a prisoner’s dilemma.

• If Alex and Camille cooperate and indeed keep their promise of not getting gifts, then let us assume they both get a utility of $R$ (reward).
• If Alex cooperates but Camille decides to defect and nonetheless give a gift then Alex will feel a bit bad and Camille will feel good, so Alex gets a utility of $S$ (sucker) and Camille a utility of $T$ (temptation).
• Vice versa if Camille cooperates but Alex decides to give a gift.
• If both Alex and Camille go against their promise then they both get a utility of $P$ (punishment).

This looks something like:

If we assume that we feel better when we give gifts and will be keen to ‘cheat’ a promise of not giving then that corresponds to the following inequality of utilities:

In this case we see that if Camille chooses to cooperate then Alex’s best response is to play defect (as $T>R$):

If Camille is indeed going to not give a gift, then Alex should give a gift.

Also if Camille chooses to defect then Alex’s best response is to defect once again (as $P>S$):

If Camille is going to ‘break the promise’ then Alex should give a gift.

So no matter what happens: Alex should defect.

In game theory this is what is called a dominating strategy, and indeed this situation is referred to as a Prisoner’s Dilemma and is what Alex was referring to in his original tweet.
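
With any numbers satisfying $T>R>P>S$ (here the standard illustrative values $T=5$, $R=3$, $P=1$, $S=0$, which are assumptions for the sketch), the dominance argument can be checked directly:

```python
# Illustrative Prisoner's Dilemma payoffs for Alex (assumed values with T > R > P > S)
T, R, P, S = 5, 3, 1, 0

# Alex's utility indexed by (Alex's action, Camille's action);
# 'C' = keep the promise (cooperate), 'D' = give a gift (defect)
u = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

# Defect beats cooperate against each of Camille's actions:
assert u[('D', 'C')] > u[('C', 'C')]  # T > R
assert u[('D', 'D')] > u[('C', 'D')]  # P > S
```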

## How does reputation affect gift giving?

So far all we are really modelling is a SINGLE exchange of gifts. If we were to exchange gifts every year we would perhaps learn to trust each other, so that when Camille says they are not going to give a gift Alex has reason to believe that they will indeed not do so.

This is called an iterated Prisoner’s dilemma and has been the subject of a great amount of academic work.

Let us consider two types of behaviour that Camille and Alex could choose to exhibit, they could be:

• Alternator: give gifts one year and not give gifts the next.
• TitForTat: do whatever the other did the previous year.

Let us assume that Alex and Camille will be faced with this situation for 10 years. I’m going to use the Python Axelrod library to illustrate things:

>>> import axelrod as axl
>>> alex, camille = axl.Alternator(), axl.TitForTat()
>>> match = axl.Match([alex, camille], 10)
>>> _ = match.play()
>>> print(match.sparklines(c_symbol='😀', d_symbol='🎁'))
😀🎁😀🎁😀🎁😀🎁😀🎁
😀😀🎁😀🎁😀🎁😀🎁😀

We see that Alex and Camille never actually exchange gifts the same year (the 😀 means that the particular player cooperates, the 🎁 that they don’t and give a gift).

Most of the ongoing Iterated Prisoner’s Dilemma research is directly due to a computer tournament run by Robert Axelrod in the 1980s. In that work Axelrod invited a variety of computer strategies to be submitted and they then played against each other. You can read more about that here: axelrod.readthedocs.org/en/latest/reference/description.html but the important thing is that there are a bunch of ‘behaviours’ that have been well studied and that we will look at here:

1. Cooperator: always keep the promise and never give a gift.
2. Defector: always give a gift.
3. Alternator: give gifts one year and not give gifts the next.
4. TitForTat: do whatever the other did the previous year.
5. TwoTitsForTat: start by not giving a gift, but if the other player gives a gift, give a gift for the next two years.
6. Grudger: start by not giving gifts, but if at any time the other goes against the promise: give a gift no matter what.

What we will now do is see how much utility (how people feel about their gift giving behaviour) each person obtains if we have a situation where 6 people exchange gifts for 50 years and each person acts according to one of the above behaviours.

For our utility we will use the following values of $R, P, S, T$ (the standard values $R=3$, $P=1$, $S=0$, $T=5$):

Here is how we can do this in python:

>>> family = [axl.Cooperator(),
...           axl.Defector(),
...           axl.Alternator(),
...           axl.TitForTat(),
...           axl.TwoTitsForTat(),
...           axl.Grudger()]
>>> christmas = axl.Tournament(family, turns=50, repetitions=1)
>>> results = christmas.play()
>>> results.scores
[[525], [562], [417], [622], [646], [646]]

We see that the people that do best are the last two: TwoTitsForTat and Grudger. These are people who are quick to react to those who won’t keep their promise, but who still give hope to those who will!

## At a population level: evolution of gift giving

We can consider this in an evolutionary context, where we see how the behaviour is allowed to evolve amongst a whole population of people. This particular type of game theoretic analysis is concerned not with micro interactions but with the long term macro stability of the system.

Here is how we can see this using Python:

>>> evo = axl.Ecosystem(results)
>>> evo.reproduce(100)
>>> plot = axl.Plot(results)
>>> plot.stackplot(evo)

What we see is that over time, the population evolves to only Cooperator, TitForTat, Grudger and TwoTitsForTat, but of course in a population with only those strategies everyone is keeping their promise, cooperating and not giving gifts.

Let us see how this changes for different values of $R, P, S, T$.

To check if not giving presents is evolutionarily stable we just need to see whether the final population numbers for the Alternator and Defector are zero. Here is a Python function to do this:

>>> def check_if_end_pop_cooperates(r=3, p=1, s=0, t=5,
...                                 digits=5, family=family, turns=10000):
...    """Returns a boolean and the last population vector"""
...    game = axl.Game(r=r, p=p, s=s, t=t)
...    christmas = axl.Tournament(family, turns=50, repetitions=1, game=game)
...    results = christmas.play()
...    evo = axl.Ecosystem(results)
...    evo.reproduce(turns)
...    last_pop = [round(pop, digits) for pop in evo.population_sizes[-1]]
...    return last_pop[1] == last_pop[2] == 0, last_pop

We see that for the default values of $R, P, S, T$ we have:

>>> check_if_end_pop_cooperates(r=3, p=1, s=0, t=5)
(True, [0.16576, 0.0, 0.0, 0.26105, 0.28659, 0.28659])

As already seen we have that for these values we end up with everyone keeping to the promise. Let us increase the value of $T$ by a factor of 100:

>>> check_if_end_pop_cooperates(r=3, p=1, s=0, t=500)
(False, [0.0, 1.0, 0.0, 0.0, 0.0, 0.0])

Here we see that if the utility of giving a gift when the receiver is not giving one in return is very large, the overall population will always give a gift:

## Seeing the effect of how good giving gifts makes us feel

The final piece of analysis I will carry out is a parameter sweep of the above:

• $5\leq T \leq 100$
• $3\leq R < T$
• $1\leq P < R$
• $0\leq S < P$
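
One way to enumerate such a sweep (the integer step over $T$ here is my own choice, not necessarily what was used to produce the data):

```python
def sweep(t_values):
    """Yield integer (t, r, p, s) tuples with t > r > p > s,
    within the bounds of the parameter sweep above."""
    for t in t_values:
        for r in range(3, t):
            for p in range(1, r):
                for s in range(0, p):
                    yield t, r, p, s

params = list(sweep(range(5, 101, 25)))
```

Each tuple could then be passed to the `check_if_end_pop_cooperates` function above.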

All of the data from this sweep is in this csv file. Here is the distribution of parameters for which everyone gives a gift (reneging on the promise):

Here is the distribution of parameters for which everyone keeps their promise and does not give gifts:

We see that people break their promise if the $T$ utility (the utility of being tempted to break the promise) is very high compared to all the other utilities.

Carrying out a simple logistic regression we see the coefficients of each of the variables as follows:

• $P$: 3.121547
• $R$: -2.942909
• $S$: 0.007738
• $T$: -0.107386

The parameters that have a positive effect on keeping the promise are $R$ and $S$: the reward when the promise is kept, and the utility of not giving a gift but receiving one.

## TLDR

Agreeing to not give gifts at Christmas can be an evolutionary stable strategy, but this is only in the specific case where the utility of ‘giving’ is less than the utility of ‘not giving’. Given that in practice this promise is almost always broken (that’s my personal experience anyway) this would suggest that people enjoy giving gifts a lot more than receiving them.

Merry christmas 🎄🎁⛄️.

## December 05, 2015

### Vince Knight

#### I think this is how to crawl the history of a git repository

This blog post is a direct application of Cunningham’s Law: “the best way to get the right answer on the Internet is not to ask a question, it’s to post the wrong answer”. With the other core developers of the Axelrod library we’re writing a paper, and I wanted to see the evolution of a particular property of the library through the 2000+ commits (mainly to include a nice graph in the paper). This post details how I’ve cycled through all the commits and recorded the particular property I’m interested in. EDIT: thanks to Mario for the comments: see the edits in bold for what I didn’t quite get right.

The Axelrod library is a collaborative project that allows anyone to submit strategies for the iterated prisoner’s dilemma via pull request (read more about this here: axelrod.readthedocs.org/en/latest/). When the library was first put on github it had 6 strategies, it currently has 118. This figure can be obtained by simply running:

>>> import axelrod
>>> len(axelrod.strategies)
118

The goal of this post is to obtain the plot below:

EDIT: here is the correct plot:

Here is how I’ve managed that:

1. Write a script that imports the library and throws the required data in to a file.
2. Write another script that goes through the commits and runs the previous script.

So first of all here’s the script that gets the number of strategies:

import axelrod
from sys import argv

try:
    if type(len(axelrod.strategies)) is int:
        print argv[1], len(axelrod.strategies), argv[2]
except:
    pass

The (very loose) error handling is because any given commit might or might not be able to run at all (for a number of reasons). The command line arguments are so that my second script can pass info about the commits (date and hash).

Here is the script that walks the github repository:

from git import Repo  # This imports the necessary class from the gitPython package
import os
import subprocess
import time

path_to_repo = "~/src/Axelrod/"
repo = Repo(path_to_repo)
all_commits = [c for c in repo.iter_commits()]  # Get all the commits
git = repo.git  # This creates an object that I can just use basic git commands with

git.checkout('master')  # Make sure I start at master

time.sleep(10)  # Need to give time for files to write

try:
    os.remove('data')  # Delete the data file if it already exists
except OSError:
    pass

for c in sorted(all_commits, key=lambda x: x.committed_date):  # Go through all commits
    for rubbish in [".DS_Store",
                    "axelrod/.DS_Store",
                    "axelrod/tests/.DS_Store",
                    "axelrod/strategies/.DS_Store"]:  # Delete files that were not in gitignore at the time of the commit
        try:
            os.remove(path_to_repo + rubbish)
        except OSError:
            pass

    git.checkout(c)  # Checkout the commit
    time.sleep(10)  # Need to let files write

    f = open('data', "a")
    subprocess.call(['python2', 'number_of_strategies.py', str(c.committed_date), c.hexsha], stdout=f)  # Call the other script and output to the data file
    f.close()

git.checkout('master')  # Go back to HEAD of master
Now, I am not actually sure if I need the 10 seconds of sleep in there but it seems to make things a little more reliable (this is where I’m hoping some knowledgeable kind soul will point out something isn’t quite right).

Here is an animated gif of the library as the script checks through the commits (I used a sleep of 0.1 second here, and cut if off at the beginning):

(You can find a video version of the above at the record.it site.)

The data set from above looks like this:

...
1424259799 6 35db17958a93e66cc09a7e7b865127b8d20acd85
1424261483 6 79c03291a1f0211925755962411d28c932150aaa
1424264425 7 f4be6bcbe9e122eb036a141f48f5acbf03b9290c
1424264540 7 6f28c9f8653e39b496c872351bce5a420e474c17
1424264950 7 456d9d25dbc44e29dde6b39455d10314824479bb
1424264958 7 0c01b14b5c3180d9e4016b09e532410cafd53992
1424265660 7 3eeec928cb7261af797044ac3bde1b26e11a7897
1424266926 7 cf506116005acd5a450894ca67eb0b670d5fd597
1424268080 8 87aa895089cdb105471280a0c374623ee7f6c9ba
1424268969 7 d0c36795fd6a69f9a1558b0b1e738d7633eb1b8e
1424270889 8 d487a97c9327235c4c334b23684583a116cc407a
1424272151 8 e9cd655661d3cef0a6df20cc509ae5ac2431f896
...

That’s all great, and the plot above can then be drawn straightforwardly. The thing is: I’m not convinced it worked as I had hoped. Indeed, c7dc2d22ff2e300098cd9b29cd03080e01d64879 took place on the 18th of June and added 3 strategies, but it’s not in the data set (or indeed in the plot).

Also, for some reason the data set gets these lines at some point (here be gremlins…):

...
<class 'axelrod.strategies.alternator.Alternator'>
<class 'axelrod.strategies.titfortat.AntiTitForTat'>
<class 'axelrod.strategies.titfortat.Bully'>
<class 'axelrod.strategies.cooperator.Cooperator'>
<class 'axelrod.strategies.cycler.CyclerCCCCCD'>
<class 'axelrod.strategies.cycler.CyclerCCCD'>
<class 'axelrod.strategies.cycler.CyclerCCD'>
<class 'axelrod.strategies.defector.Defector'>
<class 'axelrod.strategies.gobymajority.GoByMajority'>
<class 'axelrod.strategies.titfortat.SuspiciousTitForTat'>
<class 'axelrod.strategies.titfortat.TitForTat'>
<class 'axelrod.strategies.memoryone.WinStayLoseShift'>
...

What’s more confusing is that it’s not completely wrong because that does overall look ‘ok’ (correct number of strategies at the beginning, end and various commits are right there). So does anyone know why the above doesn’t work properly?

I’m really hoping this xkcd comic kicks in and someone tells me what’s wrong with what I’ve done:

EDIT: Big thanks to Mario Wenzel below in the comments for figuring out everything that wasn’t quite right.

Here’s the script to count the strategies (writing to file instead of piping and also with correct error catching to deal with changes within the library):

from sys import argv
import csv

try:
    import axelrod

    strategies = []

    try:
        if type(axelrod.strategies) is list:
            strategies += axelrod.strategies
    except (AttributeError, TypeError):
        pass

    try:
        if type(axelrod.ordinary_strategies) is list:
            strategies += axelrod.ordinary_strategies
    except (AttributeError, TypeError):
        pass

    try:
        if type(axelrod.basic_strategies) is list:
            strategies += axelrod.basic_strategies
    except (AttributeError, TypeError):
        pass

    try:
        if type(axelrod.cheating_strategies) is list:
            strategies += axelrod.cheating_strategies
    except (AttributeError, TypeError):
        pass

    count = len(set(strategies))

    f = open('data', 'a')
    csvwrtr = csv.writer(f)
    csvwrtr.writerow([argv[1], count, argv[2]])
    f.close()

except:
    pass

Here is the modified script to roll through the commits (basically the same as before, but it calls the other script with the -B flag to avoid importing compiled files, and no longer needs to sleep):

from git import Repo
import axelrod
import os
import subprocess
import time
import csv

path_to_repo = "~/src/Axelrod"
repo = Repo(path_to_repo)

all_commits = [c for c in repo.iter_commits()]

git = repo.git

number_of_strategies = []
dates = []
git.checkout('master')

try:
    os.remove('data')
except OSError:
    pass

for c in sorted(all_commits, key=lambda x: x.committed_date):

    for rubbish in [".DS_Store",
                    "axelrod/.DS_Store",
                    "axelrod/tests/.DS_Store",
                    "axelrod/strategies/.DS_Store"]:  # Delete files that were not in gitignore at the time of the commit
        try:
            os.remove(path_to_repo + rubbish)
        except OSError:
            pass

    git.checkout(c)

    try:
        subprocess.call(['python2', '-B', 'number_of_strategies.py', str(c.committed_date), c.hexsha])
        dates.append(c.committed_date)

    except ImportError:
        pass

git.checkout('master')

It looks like you should delete all pyc files from the repository in question and run the second script with the -B flag.
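A minimal way to do that cleanup (a generic sketch, not specific to the Axelrod repository):

```shell
# Remove all compiled Python files from the repository tree so that
# stale bytecode left over from old commits cannot be imported:
find . -name '*.pyc' -delete
# Then run the commit-walking script with python2 -B so that no new
# .pyc files are written during the walk.
```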

Thanks again Mario!

## November 30, 2015

### Sébastien Labbé

#### slabbe-0.2.spkg released

This is a summary of the functionalities present in the slabbe-0.2.spkg optional Sage package. It works on version 6.8 of Sage but will work best with sage-6.10 (it uses the new code for cartesian_product merged in the betas of sage-6.10). It contains 7 new modules:

• finite_word.py
• language.py
• lyapunov.py
• matrix_cocycle.py
• mult_cont_frac.pyx
• ranking_scale.py
• tikz_picture.py

Cheat Sheets

The best way to have a quick look at what can be computed with the optional Sage package slabbe-0.2.spkg is to look at the 3-dimensional Continued Fraction Algorithms Cheat Sheets available on the arXiv since today. It gathers a handful of information on different 3-dimensional continued fraction algorithms, including well-known old ones (Poincaré, Brun, Selmer, Fully Subtractive) and new ones (Arnoux-Rauzy-Poincaré, Reverse, Cassaigne).

Installation

sage -i http://www.slabbe.org/Sage/slabbe-0.2.spkg    # on sage 6.8
sage -p http://www.slabbe.org/Sage/slabbe-0.2.spkg    # on sage 6.9 or beyond


Examples

Computing the orbit of Brun algorithm on some input in $\mathbb{R}^3_+$ including dual coordinates:

sage: from slabbe.mult_cont_frac import Brun
sage: algo = Brun()
sage: algo.cone_orbit_list((100, 87, 15), 4)
[(13.0, 87.0, 15.0, 1.0, 2.0, 1.0, 321),
(13.0, 72.0, 15.0, 1.0, 2.0, 3.0, 132),
(13.0, 57.0, 15.0, 1.0, 2.0, 5.0, 132),
(13.0, 42.0, 15.0, 1.0, 2.0, 7.0, 132)]
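For intuition, one step of the Brun algorithm subtracts the second-largest coordinate from the largest. A minimal pure-Python sketch (my own reimplementation for illustration, not the slabbe Cython code) reproduces the first three coordinates of the orbit above:

```python
def brun_step(v):
    """One step of the Brun algorithm on a triple of positive reals:
    subtract the second-largest coordinate from the largest."""
    v = list(v)
    by_size = sorted(range(3), key=lambda i: v[i], reverse=True)
    largest, second = by_size[0], by_size[1]
    v[largest] -= v[second]
    return tuple(v)

def brun_orbit(v, n):
    """The first n points of the orbit of v under the Brun map."""
    orbit = []
    for _ in range(n):
        v = brun_step(v)
        orbit.append(v)
    return orbit

# Matches the first three coordinates of cone_orbit_list((100, 87, 15), 4):
print(brun_orbit((100, 87, 15), 4))
# -> [(13, 87, 15), (13, 72, 15), (13, 57, 15), (13, 42, 15)]
```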


Computing the invariant measure:

sage: fig = algo.invariant_measure_wireframe_plot(n_iterations=10^6, ndivs=30)
sage: fig.savefig('a.png')


Drawing the cylinders:

sage: cocycle = algo.matrix_cocycle()
sage: t = cocycle.tikz_n_cylinders(3, scale=3)
sage: t.png()


Computing the Lyapunov exponents of the 3-dimensional Brun algorithm:

sage: from slabbe.lyapunov import lyapunov_table
sage: lyapunov_table(algo, n_orbits=30, n_iterations=10^7)
30 succesfull orbits    min       mean      max       std
+-----------------------+---------+---------+---------+---------+
$\theta_1$              0.3026    0.3045    0.3051    0.00046
$\theta_2$              -0.1125   -0.1122   -0.1115   0.00020
$1-\theta_2/\theta_1$   1.3680    1.3684    1.3689    0.00024


Dealing with tikzpictures

Since I create lots of tikzpictures in my code, and because I was unhappy with how the view command of Sage handles them (a tikzpicture is not a math expression to put inside dollar signs), I decided to create a class for tikzpictures. I think this module could be useful in Sage, so I will propose its inclusion soon.

I am using the standalone document class which allows some configurations like the border:

sage: from slabbe import TikzPicture
sage: g = graphs.PetersenGraph()
sage: s = latex(g)
sage: t = TikzPicture(s, standalone_configs=["border=4mm"], packages=['tkz-graph'])


The repr method does not print the whole string, since it is often very long; instead it shows how many lines are not printed:

sage: t
\documentclass[tikz]{standalone}
\standaloneconfig{border=4mm}
\usepackage{tkz-graph}
\begin{document}
\begin{tikzpicture}
%
\useasboundingbox (0,0) rectangle (5.0cm,5.0cm);
%
\definecolor{cv0}{rgb}{0.0,0.0,0.0}
...
... 68 lines not printed (3748 characters in total) ...
...
\Edge[lw=0.1cm,style={color=cv6v8,},](v6)(v8)
\Edge[lw=0.1cm,style={color=cv6v9,},](v6)(v9)
\Edge[lw=0.1cm,style={color=cv7v9,},](v7)(v9)
%
\end{tikzpicture}
\end{document}
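The truncation trick is easy to sketch in plain Python. Here is a hypothetical stand-alone illustration (not the actual TikzPicture implementation; the class name and cutoffs are invented):

```python
class TruncatedRepr(object):
    """Hypothetical sketch: a repr that prints only the first and last
    few lines of a long string and reports how much was omitted."""

    def __init__(self, text, head=4, tail=3):
        self.text = text
        self.head = head
        self.tail = tail

    def __repr__(self):
        lines = self.text.splitlines()
        if len(lines) <= self.head + self.tail:
            return self.text
        omitted = len(lines) - self.head - self.tail
        middle = "... {} lines not printed ({} characters in total) ...".format(
            omitted, len(self.text))
        return "\n".join(lines[:self.head] + [middle] + lines[-self.tail:])
```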


There is a method to generate a pdf and another to generate a png. Both open the file in a viewer by default unless view=False:

sage: pathtofile = t.png(density=60, view=False)
sage: pathtofile = t.pdf()


Compare this with the output of view(s, tightpage=True), which does not allow one to control the border and also creates a second empty page on some operating systems (OS X; only one page on Ubuntu):

sage: view(s, tightpage=True)


One can also provide the filename where to save the file, in which case the file is not opened in a viewer:

sage: _ = t.pdf('petersen_graph.pdf')


Another example, with polyhedron code taken from the Sage thematic tutorial Draw polytopes in LaTeX using TikZ:

sage: V = [[1,0,1],[1,0,0],[1,1,0],[0,0,-1],[0,1,0],[-1,0,0],[0,1,1],[0,0,1],[0,-1,0]]
sage: P = Polyhedron(vertices=V).polar()
sage: s = P.projection().tikz([674,108,-731],112)
sage: t = TikzPicture(s)
sage: t
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}%
[x={(0.249656cm, -0.577639cm)},
y={(0.777700cm, -0.358578cm)},
z={(-0.576936cm, -0.733318cm)},
scale=2.000000,
...
... 80 lines not printed (4889 characters in total) ...
...
\node[vertex] at (1.00000, 1.00000, -1.00000)     {};
\node[vertex] at (1.00000, 1.00000, 1.00000)     {};
%%
%%
\end{tikzpicture}
\end{document}
sage: _ = t.pdf()


## November 28, 2015

### Vince Knight

#### Survival of the fittest: Experimenting with a high performing strategy in other environments

A common misconception about evolution is that “The fittest organisms in a population are those that are strongest, healthiest, fastest, and/or largest.” However, as that link indicates, survival of the fittest operates at the genetic level: evolution favours the genes that are most able to continue into the next generation in a given environment. In this post, I’m going to take a look at a high performing strategy from the Iterated Prisoner’s Dilemma that was obtained through an evolutionary algorithm, and see how well it does in other environments.

## Background

This is all based on the Python Axelrod package, which makes Iterated Prisoner’s Dilemma research straightforward, and is really just a look at Martin Jones’s blog post, which described the evolutionary analysis performed to get a strategy (EvolvedLookerUp) that is currently winning the overall tournament for the Axelrod library (with 108 strategies):

The strategy in question is designed to do exactly that and, as you can see, does it really well (with a substantial gap between its median score and that of the runner up: DoubleCrosser).

There are some things lacking in the analysis I’m going to present (which strategies I’m looking at, the number of tournaments, etc.), but hopefully the numerical analysis is still interesting. In essence I’m taking a look at the following question:

If a strategy is good in a big environment, how good is it in any given environment?

From an evolutionary point of view this is kind of akin to seeing how good a predator a shark would be in any random (potentially land based) environment…

## Generating the data

Thanks to the Axelrod library, it’s pretty straightforward to quickly experiment with a strategy (or group of strategies) in a random tournament:

import axelrod as axl  # Import the axelrod library

def rank(strategies, test_strategies, repetitions=10, processes=None):
    """Return the ranks and wins of the test_strategies in a tournament
    with the given strategies"""
    for s in test_strategies:
        strategies.append(s())
    nbr = len(test_strategies)
    tournament = axl.Tournament(strategies,
                                repetitions=repetitions,
                                processes=processes)
    results = tournament.play()
    return results.ranking[-nbr:], results.wins[-nbr:]

This runs a tournament and returns the rankings and wins for the input strategies. For example, let’s see how Cooperator and Defector do in a random tournament with 2 other strategies:

>>> import axelrod as axl
>>> import random
>>> random.seed(1)  # A random seed
>>> strategies = random.sample([s() for s in axl.strategies], 2)
>>> strategies  # Our 2 random strategies
[Tricky Defector, Prober 3]

We can then use the above function to see how Cooperator and Defector do:

>>> rank(strategies, [axl.Cooperator(), axl.Defector()])
([3, 2], [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]])

We see that Cooperator ranks last (getting no wins), and Defector just before last (getting 2 wins in each of the 10 repetitions). This is confirmed by the actual tournament results:

The idea is to reproduce the above for a variety of tournament sizes, repeating random samples for each size and looking at the wins and ranks for the strategies we’re interested in.

This script generates our data:

import axelrod as axl
import csv
import random
import copy

max_size = 25  # Max size of tournaments considered (maximum size of the sample)
tournaments = 20  # Number of tournaments of each size to run (number of samples)
repetitions = 10  # Number of repetitions of each tournament (for a given sample)

test_strategies = [axl.EvolvedLookerUp, axl.TitForTat, axl.Cooperator,
                   axl.Defector, axl.DoubleCrosser]
strategies = [s() for s in axl.strategies
              if axl.obey_axelrod(s) and s not in test_strategies]

def rank(strategies, test_strategies=test_strategies, repetitions=10, processes=None):
    """Return the ranks and wins of the test_strategies in a tournament
    with the given strategies"""
    for s in test_strategies:
        strategies.append(s())
    nbr = len(test_strategies)
    tournament = axl.Tournament(strategies, repetitions=repetitions,
                                processes=processes)
    results = tournament.play()
    return results.ranking[-nbr:], results.wins[-nbr:]

f = open('combined-data', 'w')
csvwrtr = csv.writer(f)
f_lookerup = open('data-lookerup.csv', 'w')
csvwrtr_lookerup = csv.writer(f_lookerup)
f_titfortat = open('data-titfortat.csv', 'w')
csvwrtr_titfortat = csv.writer(f_titfortat)
f_cooperator = open('data-cooperator.csv', 'w')
csvwrtr_cooperator = csv.writer(f_cooperator)
f_defector = open('data-defector.csv', 'w')
csvwrtr_defector = csv.writer(f_defector)
f_doublecrosser = open('data-doublecrosser.csv', 'w')
csvwrtr_doublecrosser = csv.writer(f_doublecrosser)

data = []
ind_data = [[], [], [], [], []]

for size in range(1, max_size + 1):

    row = [size]
    ind_row = [copy.copy(row) for _ in range(5)]

    for k in range(tournaments):

        s = random.sample(strategies, size)
        strategy_labels = ";".join([str(st) for st in s])

        results = rank(copy.copy(s), test_strategies=test_strategies,
                       repetitions=repetitions)
        row.append([strategy_labels, results[0]] + results[1])
        for i, ts in enumerate(test_strategies):
            results = rank(copy.copy(s), test_strategies=[ts],
                           repetitions=repetitions)
            ind_row[i].append([strategy_labels, results[0]] + results[1])

    data.append(row)
    csvwrtr.writerow(row)
    csvwrtr_lookerup.writerow(ind_row[0])
    csvwrtr_titfortat.writerow(ind_row[1])
    csvwrtr_cooperator.writerow(ind_row[2])
    csvwrtr_defector.writerow(ind_row[3])
    csvwrtr_doublecrosser.writerow(ind_row[4])

f.close()
f_lookerup.close()
f_titfortat.close()
f_cooperator.close()
f_defector.close()
f_doublecrosser.close()

The above creates tournaments up to a size of 25 other strategies, with 20 random tournaments for each size, creating six data files:

## Analysing the data

I then used this Jupyter notebook to analyse the data.

Here is what we see for the EvolvedLookerUp strategy:

The line is fitted to the median rank and number of wins (recall that for each number of strategies, 20 different sampled tournaments are considered). We see that (as expected) as the number of strategies increases both the median rank and the number of wins increase, but what is of interest is the rate at which that increase happens.
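The fits themselves are just least-squares lines through the medians. Here is a pure-Python sketch of that step (the sizes and median ranks below are hypothetical stand-in data, not the actual measurements):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (pure-Python sketch)."""
    n = len(xs)
    mean_x = sum(xs) / float(n)
    mean_y = sum(ys) / float(n)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Stand-in data: tournament sizes 1..25 and exactly linear "median ranks"
# (the real medians come from the 20 sampled tournaments per size).
sizes = list(range(1, 26))
median_ranks = [0.49 * x - 0.10 for x in sizes]

a, b = fit_line(sizes, median_ranks)
print("fitted line: y = {:.2f}x {:+.2f}".format(a, b))
```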

Below are the fitted lines for all the considered strategies:

Here are the fits (and corresponding plots) for the ranks:

• EvolvedLookerUp: $y=0.49x-0.10$ plot
• TitForTat: $y=0.53x-0.45$ plot
• Cooperator: $y=0.42x+1.40$ plot
• Defector: $y=0.75x-0.33$ plot
• DoubleCrosser: $y=0.51x-0.47$ plot

Here are the fits (and corresponding plots) for the wins:

• EvolvedLookerUp: $y=0.28x+0.06$ plot
• TitForTat: $y=0.00x+0.00$ plot
• Cooperator: $y=0.00x+0.00$ plot
• Defector: $y=0.89x+0.14$ plot
• DoubleCrosser: $y=0.85x-0.10$ plot

It seems that the EvolvedLookerUp strategy does continue to do well (with a low coefficient of 0.49) in these random environments. However what’s interesting is that the simple Cooperator strategy also seems to do well (this might indicate that the random samples are creating ‘overly nice’ conditions).

All of the above keeps the 5 considered strategies separate from each other; here is the analysis repeated when combining the strategies with the random sample:

Below are the fitted lines for all the considered strategies:

Here are the fits (and corresponding plots) for the ranks:

• EvolvedLookerUp: $y=0.42x+2.05$ plot
• TitForTat: $y=0.44x+1.95$ plot
• Cooperator: $y=0.64x+0.00$ plot
• Defector: $y=0.47x+1.87$ plot
• DoubleCrosser: $y=0.63x+1.88$ plot

Here are the fits (and corresponding plots) for the wins:

• EvolvedLookerUp: $y=0.28x+0.05$ plot
• TitForTat: $y=0.00x+0.00$ plot
• Cooperator: $y=0.00x+0.00$ plot
• Defector: $y=0.89x+4.14$ plot
• DoubleCrosser: $y=0.85x+2.87$ plot

## Conclusion

It looks like the EvolvedLookerUp strategy continues to perform well in environments that are not the ones it evolved in.

The Axelrod library makes this analysis possible, as you can quickly create tournaments from a wide library of strategies. You could also refine the analysis by considering strategies of a particular type. For example, you could sample only from strategies that act deterministically (no random behaviour):

>>> strategies = [s() for s in axl.strategies if not s().classifier['stochastic']]

It would probably be worth gathering even more data to be able to make substantial claims about performance, as well as considering other test strategies, but ultimately this gives some insight into how the strategies perform in other environments.

## For fun

The latest release of the library (v0.0.21) includes the ability to draw sparklines that give a visual representation of the interactions between pairs of strategies. If you’re running Python 3 you can include emoji, so here are the sparklines for the test strategies considered:

>>> from itertools import combinations

>>> test_strategies = [axl.EvolvedLookerUp, axl.TitForTat, axl.Cooperator, axl.Defector, axl.DoubleCrosser]
>>> matchups = [(match[0](), match[1]()) for match in combinations(test_strategies, 2)]

>>> for matchup in matchups:
....    match = axl.Match(matchup, 10)
....    _ = match.play()
....    print(matchup)
....    print(match.sparklines(c_symbol=' 😀 ', d_symbol=' 😡 '))
...
(EvolvedLookerUp, Tit For Tat)
😀  😀  😀  😀  😀  😀  😀  😀  😀  😀
😀  😀  😀  😀  😀  😀  😀  😀  😀  😀
(EvolvedLookerUp, Cooperator)
😀  😀  😀  😀  😀  😀  😀  😀  😀  😀
😀  😀  😀  😀  😀  😀  😀  😀  😀  😀
(EvolvedLookerUp, Defector)
😀  😀  😡  😡  😡  😡  😡  😡  😡  😡
😡  😡  😡  😡  😡  😡  😡  😡  😡  😡
(EvolvedLookerUp, DoubleCrosser)
😀  😀  😡  😡  😀  😀  😀  😀  😀  😀
😀  😡  😡  😡  😡  😡  😡  😡  😡  😡
(Tit For Tat, Cooperator)
😀  😀  😀  😀  😀  😀  😀  😀  😀  😀
😀  😀  😀  😀  😀  😀  😀  😀  😀  😀
(Tit For Tat, Defector)
😀  😡  😡  😡  😡  😡  😡  😡  😡  😡
😡  😡  😡  😡  😡  😡  😡  😡  😡  😡
(Tit For Tat, DoubleCrosser)
😀  😀  😡  😡  😡  😡  😡  😡  😡  😡
😀  😡  😡  😡  😡  😡  😡  😡  😡  😡
(Cooperator, Defector)
😀  😀  😀  😀  😀  😀  😀  😀  😀  😀
😡  😡  😡  😡  😡  😡  😡  😡  😡  😡
(Cooperator, DoubleCrosser)
😀  😀  😀  😀  😀  😀  😀  😀  😀  😀
😀  😡  😡  😡  😡  😡  😡  😡  😡  😡
(Defector, DoubleCrosser)
😡  😡  😡  😡  😡  😡  😡  😡  😡  😡
😀  😡  😡  😡  😡  😡  😡  😡  😡  😡

## Overview

Barry Mazur and I spent over a decade writing a popular math book "Prime Numbers and the Riemann Hypothesis", which will be published by Cambridge University Press in 2016.  The book involves a large number of illustrations created using SageMath, and was mostly written using the LaTeX editor in SageMathCloud.

This post is meant to provide a glimpse into the writing process and also content of the book.

Intended Audience: Research mathematicians! Though there is no mathematics at all in this post.

The book is here: http://wstein.org/rh/
Download a copy before we have to remove it from the web!

Goal: The goal of our book is simply to explain what the Riemann Hypothesis is really about. It is a book about mathematics by two mathematicians. The mathematics is front and center; we barely touch on people, history, or culture, since there are already numerous books that address the non-mathematical aspects of RH.  Our target audience is math-loving high school students, retired electrical engineers, and you.

## Clay Mathematics Institute Lectures: 2005

The book started in May 2005 when the Clay Math Institute asked Barry Mazur to give a large lecture to a popular audience at MIT and he chose to talk about RH, with me helping with preparations. His talk was entitled "Are there still unsolved problems about the numbers 1, 2, 3, 4, ... ?"

See http://www.claymath.org/library/public_lectures/mazur_riemann_hypothesis.pdf

Barry Mazur receiving a prize:

Barry's talk went well, and we decided to try to expand on it in the form of a book. We had a long summer working session in a vacation house near an Atlantic beach, in which we greatly refined our presentation. (I remember that I also finally switched from Linux to OS X on my laptop when Ubuntu made a huge mistake pushing out a standard update that hosed X11 for everybody in the world.)

## Classical Fourier Transform

Going beyond the original Clay Lecture, I kept pushing Barry to see if he could describe RH as much as possible in terms of the classical Fourier transform applied to a function that could be derived via a very simple process from the prime counting function pi(x). Of course, he could. This led to more questions than it answered, and interesting numerical observations that are more precise than analytic number theorists typically consider.

Our approach to writing the book was to try to reverse engineer how Riemann might have been inspired to come up with RH in the first place, given how Fourier analysis of periodic functions was in the air. This led us to some surprisingly subtle mathematical questions, some of which we plan to investigate in research papers. They also indirectly play a role in Simon Spicer's recent UW Ph.D. thesis. (The expert analytic number theorist Andrew Granville helped us out of many confusing thickets.)

In order to use Fourier series we naturally have to rely heavily on Dirac/Schwartz distributions.
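For instance, the most basic such identity, the Fourier expansion of the Dirac comb (a form of Poisson summation), only makes sense distributionally:

```latex
\sum_{n \in \mathbb{Z}} \delta(x - n) \;=\; \sum_{k \in \mathbb{Z}} e^{2\pi i k x}
```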

## SIMUW

University of Washington has a great program called SIMUW: "Summer Institute for Mathematics at the University of Washington." It's for high school students; admission is free and based on student merit, not rich parents, thanks to an anonymous wealthy donor!  I taught a SIMUW course one summer from the RH book: I spent one very intense week on the RH book, and another on the Birch and Swinnerton-Dyer conjecture.

The first part of our book worked well for high school students. For example, we interactively worked with prime races, multiplicative parity, prime counting, etc., using Sage interacts. The students could also prove facts in number theory. They also looked at misleading data and tried to come up with conjectures. In algebraic number theory, usually the first few examples are a pretty good indication of what is true. In analytic number theory, in contrast, looking at the first few million examples is usually deeply misleading.
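As a flavour of what a prime race looks like in code, here is a minimal Python sketch (an illustration in the spirit of, but not taken from, the book's Sage interacts):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def prime_race_mod4(n):
    """Count primes <= n congruent to 1 mod 4 and to 3 mod 4."""
    primes = primes_up_to(n)
    team1 = sum(1 for p in primes if p % 4 == 1)
    team3 = sum(1 for p in primes if p % 4 == 3)
    return team1, team3

# Chebyshev's bias: the "3 mod 4" team is usually (but not always!) ahead.
print(prime_race_mod4(1000))
```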

## Reader feedback: "I dare you to find a typo!"

In early 2015, we posted drafts on Google+ daring anybody to find typos. We got massive feedback. I couldn't believe the typos people found. One person would find a subtle issue with half of a bibliography reference in German, and somebody else would find a different subtle mistake in the same reference. Best of all, highly critical and careful non-mathematicians read straight through the book and found a large number of typos and minor issues that were just plain confusing to them, but could be easily clarified.

Now the book is hopefully not riddled with errors. Thanks entirely to the amazingly generous feedback of these readers, when you flip to a random page of our book (go ahead and try), you are now unlikely to see a typo or, what's worse, some corrupted mathematics, e.g., a formula with an undefined symbol.

## Designing the cover

Barry and Gretchen Mazur, Will Hearst, and I designed a cover that combined the main elements of the book: title, Riemann, zeta:

Then designers at CUP made our rough design more attractive according to their tastes. As non-mathematician designers, they made it look prettier by messing with the Riemann Zeta function...

## Publishing with Cambridge University Press

Over the years, we talked with people from the AMS, Springer-Verlag and Princeton University Press about publishing our book. I met CUP editor Kaitlin Leach at the Joint Mathematics Meetings in Baltimore, since the Cambridge University Press (CUP) booth was directly opposite the SageMath booth, which I was running. We decided to publish with CUP due to their enthusiasm (which lasted for more than the few minutes they spent talking to us!), past good experience, and general frustration with other publishers.

### What it was like for us working with CUP

The actual process with CUP has had its ups and downs, and the production process has been frustrating at times, being in some ways not quite professional enough and in other ways extremely professional. Traditional book publication is currently in a state of rapid change. Working with CUP has been unlike my experiences with other publishers.

For example, CUP was extremely diligent, putting huge effort into tracking down permissions for every one of the images in our book. And they weren't satisfied with a statement on Wikipedia that "this image is public domain" if the link didn't work. They tracked down alternatives for all images for which they could not get permissions (or in some cases had us partly pay for them). This is in sharp contrast to my experience with Springer-Verlag, which spent about one second on images, just making sure I signed a statement that all possible copyright infringement was my fault (not theirs).

The CUP copyediting and typesetting appeared to all be outsourced to India, organized by people who seemed far more comfortable with Word than LaTeX. Communication with the people our book's copyediting was contracted out to was surprisingly difficult, a problem I haven't experienced before with Springer or the AMS. That said, everything seems to have worked out fine so far.

On the other hand, our marketing contact at CUP mysteriously vanished for a long time; evidently, they had left for another job, and CUP was recruiting somebody else to take over. However, now there are new people and they seem extremely passionate!

## The Future

I'm particularly excited to see if we can produce an electronic (Kindle) version of the book later in 2016, and eventually a fully interactive, for-pay SageMathCloud version of the book, which could be a foundation for something much broader with publishers that addresses the shortcomings of the Kindle format for interactive computational books. Things like electronic versions of books are the sort of thing that the AMS is frustratingly slow to get its head around...

## Conclusions

1. Publishing a high quality book is a long and involved process.
2. Working with CUP has been frustrating at times; however, they have recruited a very strong team this year that addresses most issues.
3. I hope mathematicians will put more effort into making mathematics accessible to non-mathematicians.
4. Hopefully, this talk will provide a glimpse into the book-writing process and encourage others to write their own books (and also suggest things to think about when choosing a publisher and before signing a book contract!).

### Harald Schilly

#### Sage Days 70 - Berkeley - November 2015

Dennis Stein brought his cameras and tools to Sage Days 70:

## November 15, 2015

### Vince Knight

#### Visualising Markov Chains with NetworkX

I’ve written quite a few blog posts about Markov chains (they occupy a central role in quite a lot of my research). In general I visualise 1- or 2-dimensional chains using TikZ (the LaTeX package), sometimes scripting the drawing with Python, but in this post I’ll describe how to use the awesome networkx package to represent the chains.

For all of this we’re going to need the following three imports:

from __future__ import division  # Only for how I'm writing the transition matrix
import networkx as nx  # For the magic
import matplotlib.pyplot as plt  # For plotting

Let’s consider the Markov Chain I describe in this post about waiting times in a tandem queue. You can see an image of it (drawn in Tikz) below:

As is described in that post, we’re dealing with a two-dimensional chain and, without going into the details, the states are given by:

states = [(0, 0),
(1, 0),
(2, 0),
(3, 0),
(4, 0),
(0, 1),
(1, 1),
(2, 1),
(3, 1),
(4, 1),
(0, 2),
(1, 2),
(2, 2),
(3, 2),
(4, 2),
(0, 3),
(1, 3),
(2, 3),
(3, 3),
(0, 4),
(1, 4),
(2, 4)]

and the transition matrix $Q$ by:

Q = [[-5, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1/2, -6, 5, 0, 0, 1/2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, -7, 5, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, -7, 5, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, -2, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1/5, 0, 0, 0, 0, -26/5, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1/5, 0, 0, 0, 1/2, -31/5, 5, 0, 0, 1/2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1/5, 0, 0, 0, 1, -36/5, 5, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1/5, 0, 0, 0, 1, -36/5, 5, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1/5, 0, 0, 0, 1, -11/5, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 2/5, 0, 0, 0, 0, -27/5, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, 1/2, -32/5, 5, 0, 0, 1/2, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, 1, -37/5, 5, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, 1, -37/5, 5, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, 1, -12/5, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, 0, -27/5, 5, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, 1/2, -32/5, 5, 0, 1/2, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, 1/2, -32/5, 5, 0, 1/2, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, 1/2, -7/5, 0, 0, 1/2],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, -27/5, 5, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, -27/5, 5],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2/5, 0, 0, 0, -2/5]]

To build the networkx graph we will use our states as nodes and have edges labeled by the corresponding values of $Q$ (ignoring edges that would correspond to a value of 0). The neat thing about networkx is that it allows you to have any Python instance as a node:

G = nx.MultiDiGraph()
labels = {}
edge_labels = {}

for i, origin_state in enumerate(states):
    for j, destination_state in enumerate(states):
        rate = Q[i][j]
        if rate > 0:
            G.add_edge(origin_state,
                       destination_state,
                       weight=rate,
                       label="{:.02f}".format(rate))
            edge_labels[(origin_state, destination_state)] = "{:.02f}".format(rate)

Now we can draw the chain:

plt.figure(figsize=(14,7))
node_size = 200
pos = {state:list(state) for state in states}
nx.draw_networkx_edges(G,pos,width=1.0,alpha=0.5)
nx.draw_networkx_labels(G, pos, font_weight=2)
nx.draw_networkx_edge_labels(G, pos, edge_labels)
plt.axis('off');

You can see the result here:

As you can see in the networkx documentation:

Yes, it is ugly but drawing proper arrows with Matplotlib this way is tricky.

So instead I’m going to write this to a .dot file:

nx.write_dot(G, 'mc.dot')

Once we’ve done that we have a standard network file format, so we can use the command line to convert it to whatever format we want; here I’m creating the png file below:

## Searching for a Funding Model

Sage is open source and freely available to all, so it is of potentially huge value to the community, being owned by everybody and changeable by anybody. However, those who fund Magma (either directly or indirectly) haven't funded Sage at the same level, for whatever reason. I can't make Sage closed source and copy that very successful funding model. I've tried everything I can think of given the time and resources I have, and the only model left that seems able to support open source is having a company that does something else well and makes money, then uses some of the profit to fund open source (Intel is the biggest contributor to Linux).

## SageMath, Inc.

Since I failed to find any companies that passionately care about Sage like Intel/Google/RedHat/etc. care about Linux, I started one. I've been working on SageMathCloud extremely hard for over 3 years now, with the hopes that at least it could be a way to fund Sage development.

## Jim Simons

Jim Simons is a mathematician who left academia to start a hedge fund that beat the stock market. He contributes back to the mathematical community through the Simons Foundation, which provides an enormous amount of support to mathematicians and physicists, and has many outreach programs.

SageMath is a large software package for mathematics that I started in 2005 with the goal of creating a free open source viable alternative to Magma, Mathematica, Maple, and Matlab. People frequently tell me I should approach the Simons Foundation for funding to support Sage. For example:
Jim Simons, after retiring from Renaissance Technologies with a cool 15 billion, has spent the last 10 years giving grants to people in the pure sciences. He's a true academic at heart [...] Anyways, he's very fond of academics and gives MacArthur-esque grants, especially to people who want to change the way mathematics is taught. Approach his fund. I'm 100% sure he'll give you a grant on the spot.

## The National Science Foundation

Last month the http://sagemath.org website had 45,114 monthly active users. However, as far as I know, there is no NSF funding for Sage in the United States right now, and development is mostly done on a shoestring in spare time. We have recently failed to get several NSF grants for Sage, despite there being Sage-related grants in the past from NSF. I know that funding is random, and I will keep trying. I have two proposals for Sage funding submitted to NSF right now.

## Several million dollars per year

I was incredibly excited in 2012 when David Eisenbud invited me to a meeting at the Simons Foundation headquarters in New York City with the following official description of their goals:
The purpose of this round table is to investigate what sorts of support would facilitate the development, deployment and maintenance of open-source software used for fundamental research in mathematics, statistics and theoretical physics. We hope that this group will consider what support is currently available, and whether there are projects that the Simons Foundation could undertake that would add significantly to the usefulness of computational tools for basic research. Modes of support that duplicate or marginally improve on support that is already available through the universities or the federal government will not be of interest to the foundation. Questions of software that is primarily educational in nature may be useful as a comparison, but are not of primary interest.  The scale of foundation support will depend upon what is needed and on the potential scientific benefit, but could be substantial, perhaps up to several million dollars per year.
Current modes of funding for research software in mathematics, statistics and physics differ very significantly. There may be correspondingly great differences in what the foundation might accomplish in these areas. We hope that the round table members will be able to help the foundation understand the current landscape  (what are the needs, what is available, whether it is useful, how it is supported) both in general and across the different disciplines, and will help us think creatively about new possibilities.
I flew across the country to this meeting, where we spent the day discussing ways in which "several million dollars per year" could revolutionize "the development, deployment and maintenance of open-source software used for fundamental research in mathematics...".

In the afternoon Jim Simons arrived, and shook our hands. He then lectured us with some anecdotes, didn't listen to what we had to say, and didn't seem to understand open source software. I was frustrated watching how he treated the other participants, so I didn't say a word to him. I feel bad for failing to express myself.

## The Decision

In the backroom during a coffee break, David Eisenbud told me that it had already been decided that they were going to just fund Magma by making it freely available to all academics in North America. WTF? I explained to David that Magma is closed source and that not only does funding Magma not help open source software like Sage, it actively hurts it. A huge motivation for people to contribute to Sage is that they do not have access to Magma (which was very expensive).

I wandered out of that meeting in a daze; things had gone so differently than I had expected. How could a goal to "facilitate the development, deployment and maintenance of open-source software... perhaps up to several million dollars per year" result in a decision that would make things possibly much worse for open source software?

That day I started thinking about creating what would become SageMathCloud. The engineering work needed to make Sage accessible to a wider audience wasn't going to happen without substantial funding (I had put years of my life into this problem but it's really hard, and I couldn't do it by myself). At least I could try to make it so people don't have to install Sage (which is very difficult). I also hoped a commercial entity could provide a more sustainable source of funding for open source mathematics software. Three years later, the net result of me starting SageMathCloud and spending almost every waking moment on it is that I've gone from having many grants to not, and SageMathCloud itself is losing money. But I remain cautiously optimistic and forge on...

## We will not fund Sage

Prompted by numerous messages recently from people, I wrote to David Eisenbud this week. He suggested I write to Yuri Schinkel, who is the current director of the Simons Foundation:
Dear William,
Before I joined the foundation, there was a meeting conducted by David Eisenbud to discuss possible projects in this area, including Sage.
After that meeting it was decided that the foundation would support Magma.
Please keep me in the loop regarding developments at Sage, but I regret that we will not fund Sage at this time.
Best regards, Yuri
The Simons Foundation, the NSF, or any other foundation does not owe the Sage project anything. Sage is used by a lot of people for free, who together have their research and teaching supported by hundreds of millions of dollars in NSF grants. Meanwhile the Sage project barely hobbles along. I meet people who have fantastic development or documentation projects for Sage that they can't do because they are far too busy with their fulltime teaching jobs. More funding would have a massive impact. It's only fair that the US mathematical community is at least aware of a missed opportunity.
Funding in Europe for open source math software is much better.

Hacker News discussion

### Vince Knight

#### Picking a good Vainglory jungler with game theory and sagemath

I’ve recently been playing a really cool video game: Vainglory. This is described as a MOBA, which I must admit I had never heard of until this year when my students mentioned it to me; basically it’s an online multiplayer game in which players form two teams of 6 heroes and fight each other. The choice of heroes is very important, as the composition of a team can make or break a match. The game seems to have a bit of a cult following (so no doubt, just like with my post about Clash of Clans, I might annoy people again) and there is a great wiki that gives guides for the play of each hero. In this post I’ll describe using Python to scrape that wiki to get data that feeds into a game theoretic model, which I then analyse using Sagemath to give some insight about the choice of hero.

Here’s the map where this all takes place:

So first of all, my understanding is that there are generally three types of playing strategy in the game:

• Lane: a hero that occupies and tries to take over the main route between the two bases.
• Jungle: a hero that goes ‘off road’ and kills monsters, gets gold etc…
• Roam: a hero who roams in between the two and whose main job is to support the other two players.

My personal strategy is to pick a roamer/protector: Ardan (pic below).

I generally help out the jungler in my team and try my best to not be a liability by dying.

The wiki has a bunch of information for players. If you google something like ‘vainglory best strategy’ it comes up. If you look up each hero you get a collection of guides ranked by votes, each with all sorts of information, including where each and every other hero sits on a threat level (from 1 to 10). Here is the threat meter for Ardan from the top guide:

So from that guide it looks like if your opponent is going to be isolated with Ardan then you should pick HERO. In some guides the threat meter does not list all the heroes. This is particularly important as it’s these threat meters that I’ve used as a source of data for how good a given hero is against the other heroes.

This is where the keener player/reader will note that the threat meter only describes the threat to a single player and carries no information about how this fits within a team dynamic. This is an important caveat on my part: as indicated by its title, this post aims to use data and game theory to give an indication as to how to choose heroes for isolated combat against other single heroes. So one application of this is choosing a jungler/laner when you expect to go up against another team that is playing a single jungler/laner.

Scraping the data

First things first: I used Python with the BeautifulSoup and requests libraries. For example, here is how I got the list of all the heroes (and the url to their respective page on the wiki):

>>> import requests
>>> from bs4 import BeautifulSoup
>>> page = requests.get('http://www.vaingloryfire.com/vainglory/wiki/heroes')
>>> soup = BeautifulSoup(page.text, 'html.parser')
>>> urls = [a.get('href') for a in soup.find_all('a') if a.get('href')]  # all links on the page
>>> root = '/vainglory/wiki/heroes'
>>> heroes = {url[len(root) + 1:]: url for url in urls[2:] if url.startswith(root + '/')}
>>> del heroes['skye']  # Removing Skye as she is brand new
>>> heroes
{u'adagio': u'/vainglory/wiki/heroes/adagio',
u'ardan': u'/vainglory/wiki/heroes/ardan',
u'catherine': u'/vainglory/wiki/heroes/catherine',
u'celeste': u'/vainglory/wiki/heroes/celeste',
u'fortress': u'/vainglory/wiki/heroes/fortress',
u'glaive': u'/vainglory/wiki/heroes/glaive',
u'joule': u'/vainglory/wiki/heroes/joule',
u'koshka': u'/vainglory/wiki/heroes/koshka',
u'krul': u'/vainglory/wiki/heroes/krul',
u'petal': u'/vainglory/wiki/heroes/petal',
u'ringo': u'/vainglory/wiki/heroes/ringo',
u'rona': u'/vainglory/wiki/heroes/rona',
u'saw': u'/vainglory/wiki/heroes/saw',
u'skaarf': u'/vainglory/wiki/heroes/skaarf',
u'taka': u'/vainglory/wiki/heroes/taka',
u'vox': u'/vainglory/wiki/heroes/vox'}

(Note there that I’m removing a brand new hero: Skye as she was released pretty much at the same time as I was writing this post.)

You can see the Jupyter notebook which shows the code. The main technicality is that I only scraped guides from the front page for each hero. As I’ll describe later, I ran my analysis taking the average threats for a variety of cases: only taking the first guide, only taking the first 2 guides, the first 3 guides etc…
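That "first $k$ guides" averaging amounts to a cumulative mean over the per-guide threat vectors. A minimal sketch with numpy; the 3-hero setup and the numbers here are made up for illustration, not the real wiki data:

```python
import numpy as np

# Hypothetical per-guide threat vectors for one hero (one row per guide).
guide_threats = np.array([[0, 4, 7],
                          [2, 4, 5],
                          [1, 1, 9]])

# averages[k - 1] is the mean threat vector using only the first k guides.
averages = np.cumsum(guide_threats, axis=0) / np.arange(1, 4)[:, None]
print(averages[1])  # mean over the first two guides only
```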

Here for example is the threat data for Adagio if you only look at the first guide:

[0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 4, 0, 0, 7, 0, 0]

Cross-referencing that with the order given by the list of heroes above, we see that Skaarf ranks a 7 on the threat meter to Adagio, and Ringo and Joule a 4. The 0s are how I’ve decided to handle a threat meter that does not include a given hero: they indicate that the missing hero poses no threat. I don’t really like this as a solution but it’s probably the least worst way to deal with it (if anyone has a better way of handling this please let me know in the comments).
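To make that cross-referencing concrete, a small sketch pairing the row with the alphabetically ordered hero list scraped above (the helper names here are mine, not from the notebook):

```python
heroes = ['adagio', 'ardan', 'catherine', 'celeste', 'fortress', 'glaive',
          'joule', 'koshka', 'krul', 'petal', 'ringo', 'rona', 'saw',
          'skaarf', 'taka', 'vox']
adagio_threats = [0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 4, 0, 0, 7, 0, 0]

# Pair each hero with the threat it poses to Adagio.
threat_to_adagio = dict(zip(heroes, adagio_threats))
biggest = max(threat_to_adagio, key=threat_to_adagio.get)
print(biggest, threat_to_adagio[biggest])  # skaarf 7
```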

Here is the threat data for Krul:

[6, 3, 4, 3, 6, 4, 3, 7, 5, 5, 4, 0, 6, 6, 5, 0]

We see that in this case the only heroes that pose no threat to Krul are Fortress and Rona. Thus if your opponent is playing one of those heroes, Krul is a best response.

As will be described in the next section, we need to build up a matrix of these rows, which basically shows how well a given hero does against the others. Here is that matrix for the row player, taking the opposite of the threats and using just the top guide:

If you consider a column (corresponding to a hero) of that matrix, the row player aims to find the row that gives the highest score, which, because we’ve taken the opposite of the threat score, corresponds to minimising the threat posed by the column hero. This is in essence a risk-averse approach; at the very end I’ll comment on what happens to the results when players instead aim to maximise the threat they pose.
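That column-wise reasoning can be sketched with numpy on a toy 3-hero threat matrix (the numbers are invented; the real matrices are the scraped 16-by-16 ones):

```python
import numpy as np

# Toy threat matrix: entry [i, j] is the threat hero j poses to hero i.
threats = np.array([[5, 4, 7],
                    [6, 2, 3],
                    [2, 5, 1]])

# Risk-averse row player: maximise the opposite of the threat posed to you.
payoff = -threats

def best_responses(column):
    """Indices of rows attaining the best payoff against a given column hero."""
    col = payoff[:, column]
    return np.flatnonzero(col == col.max()).tolist()

print(best_responses(0))  # hero 2 suffers the least threat from hero 0
```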

Now that I’ve described the data (you can find all the data written to specific csv files here) I’ll go on to talk about the game theory used to try and see what the equilibrium choice of strategies should/could be.

Game theoretic analysis

All of this has been done using Sagemath, a great open source mathematics package that offers an alternative to Maple, Mathematica etc…

If you’re not familiar with game theory, this video might help (it shows the basics of game theory and how Sagemath can be used to find Nash equilibria):

Before talking about equilibria let’s just look at best response dynamics.

Using Sage we can first of all build up the normal form game for a given number of guides:

sage: import csv
sage: def build_game(row_player_file, col_player_file):
....:     """Import the bi-matrices and create the game object"""
....:     bi_matrices = []
....:     for fle in [row_player_file, col_player_file]:
....:         f = open(fle, 'r')
....:         csvrdr = csv.reader(f)
....:         bi_matrices.append(-matrix([[float(ele) for ele in row] for row in csvrdr]))
....:         f.close()
....:
....:     return NormalFormGame(bi_matrices)
sage: g = build_game("A-01.csv", "B-01.csv")

Using this and the best_response method on Sagemath NormalFormGames we can build up all the best responses (according to a given number of guides) for each player. The cool thing is that Sagemath has some awesome graph theory built in, so we can transform that into a nice picture (again: all the code for this can be found here):

That plot confirms what we saw earlier: Krul is a best response to Fortress or Rona. Sadly, because there are so many zeros when just using the first guide, there are a bunch of heroes that are not considered a threat to anyone, so they have multiple best responses and our graph is messy.

Here is the best response graph when taking the mean threats over all front page guides:

Note that game theory assumes that everyone knows that everyone knows that everyone knows… all this. So for example if two players both play Adagio we are at an equilibrium. However if one player plays Saw then the graph indicates that the opponent should play Koshka, which means that the first player should then deviate and play Fortress, which is then also an equilibrium (both players are playing best responses to each other).
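That Saw → Koshka → Fortress deviation story is just repeated best response. A minimal sketch of following such a chain until both players are best-responding to each other, on a toy symmetric game (the payoff numbers are made up):

```python
import numpy as np

def follow_best_responses(payoff, start, steps=10):
    """Alternately let each player best-respond to the other's last choice.

    payoff[i, j] is the payoff for playing row i against column j; the
    game here is symmetric, so one matrix serves both players.
    """
    path = [start]
    for _ in range(steps):
        path.append(int(np.argmax(payoff[:, path[-1]])))
        if len(path) >= 3 and path[-1] == path[-3]:
            break  # both players are now best-responding to each other
    return path

# Toy symmetric payoff matrix (hypothetical, not the wiki data).
payoff = np.array([[3, 0, 2],
                   [1, 2, 0],
                   [0, 3, 1]])
print(follow_best_responses(payoff, 2))  # chain from strategy 2 to a pure equilibrium
```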

From here on I will continue the analysis using the average utility from all the guides (I’ll come back to this at the end).

So we can use Sagemath to compute all the equilibria for us. A Nash equilibrium need not be in pure strategies, and so will at times be a pair of probability vectors indicating how the players should randomly pick a hero. Here for example is the 4th equilibrium computed by Sagemath:

sage: g.obtain_nash(algorithm='lrs')[3]
[(0, 0, 0, 0, 0, 0, 0, 0, 3947/17781, 0, 3194/17781, 0, 8795/17781, 0, 0, 615/5927),
(0, 0, 0, 0, 0, 0, 0, 0, 3947/17781, 0, 3194/17781, 0, 8795/17781, 0, 0, 615/5927)]

This particular equilibrium has both players playing a mix of: Fortress, Glaive, Petal and Koshka.
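The support of such an equilibrium (the strategies played with positive probability) can be read off directly. A sketch using Python's fractions; note that the index-to-hero mapping depends on the column order of the csv files:

```python
from fractions import Fraction

# The 4th equilibrium strategy as returned by obtain_nash above.
sigma = [Fraction(0)] * 16
sigma[8] = Fraction(3947, 17781)
sigma[10] = Fraction(3194, 17781)
sigma[12] = Fraction(8795, 17781)
sigma[15] = Fraction(615, 5927)

assert sum(sigma) == 1  # probabilities form a distribution
support = [i for i, p in enumerate(sigma) if p > 0]
print(support)  # the four strategy indices actually in the mix
```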

Here is the mean probability distribution for both players. While the particular values should be ignored, what is of interest is the heroes that are not played at all. In essence these heroes, across all the equilibria, are not deemed playable:

We see that this confirms how the previous graph was colored showing the heroes that should be played in blue.

Note that the number of guides used, and their reliability, has a huge effect on the conclusions drawn. Here are two gifs that show the effect of the number of guides used:

and here is a plot of the number of equilibria for each guide:

Up until now all the results are for when players aim to minimise the threat posed to them. In the next section I’ll invert that (Python-wise it’s a minor swapping around of some inputs) and consider the situation where you want to pick a hero that aims to be the most threatening.

Seeking to be a threat

First of all here is the best response graph:

Here is the average of the NE:

Those 3 players have certainly been able to rip through me on more than one occasion…

Finally here are the Nash equilibria for when a threatening player (plotted in black) is playing against a threat averse player (plotted in grey):

Conclusion

The main thing that needs to be highlighted before concluding is that this analysis has two weaknesses:

• The data: what comes out of mathematical models is only as good as what goes in. Scraping the wiki data is a cool thing to do (from a Python point of view) but I’m blindly grabbing guides that might have poor information/opinions in them. This is worth remembering. If someone were to come up with their own threat/performance measures then this work could just be applied to that. Ultimately, though, the data available here is better than no data.
• I am not taking into account team dynamics. I’m just looking at perceived threats from one hero to another. There are mathematical approaches that could be used to find the best combination of teams and I might get to that in another post one day. Nonetheless this has been a fun application of game theory and I believe it still has value.

So to conclude, basing things on the data available to me, I’d suggest that (when both players are acting in a risk-averse way) the choice of heroes for an isolated job like jungling and/or laning is in fact reduced to a set from:

If you and your opponent aim to be threatening, the choice is from:

Finally if you aim to be threatening, playing against a player aiming to be risk averse the choice is from:

and vice versa:

(Interestingly, for this last type of game there was in general just one equilibrium.)

Based on all of this, I would suggest (looking across all of that summary) that (disclaimer: based on the wiki threat data) the best Vainglory hero is Glaive. Again though, this does not take into account any of the very important team dynamics. I plan to keep on being a protector with Ardan and just doing my best to stay alive…

Another point is that this shows that Vainglory is perhaps not perfectly balanced. A perfectly balanced game (like Rock Paper Scissors for example) has a Nash equilibrium that evenly plays all strategies:

sage: g = game_theory.normal_form_games.RPS()
sage: g.obtain_nash()
[[(1/3, 1/3, 1/3), (1/3, 1/3, 1/3)]]

Please do take a look at all the code/data at this repository.

This was a fun application of mathematical modelling; I also learnt how to scrape with BeautifulSoup, but I mainly look forward to using this in my game theory class this year. I might even suggest we spend 25 minutes of one class having a game on the big screen, assuming there are 5 players of Vainglory in my class.

## September 01, 2015

### William Stein

#### React, Flux, RethinkDB and SageMathCloud -- Summer 2015 update

I've been using databases and doing web development for over 20 years, and I've never really loved any database before and definitely didn't love any web development frameworks either. That all changed for me this summer...

### SageMathCloud

SageMathCloud is a web application in which you collaboratively use Python, LaTeX, Markdown, Sage worksheets (sophisticated mathematics), task lists, R, Jupyter Notebooks, manage courses, write C programs, make chatrooms, and more. It is hosted on Google Compute Engine, but is also entirely open source and there is a pre-made Virtual Machine that you can download. A project in SMC is a Linux account, with resources constrained using cgroups and quotas. Many SMC users can collaborate on the same project, and have equal privileges in that project. Interaction with all file types (including Jupyter notebooks, task lists and course managements) is synchronized in realtime, like Google docs. There is also a global notifications feed that shows all editing activity on all files in all projects on which the user collaborates, which is a sort of highly technical version of Facebook's feed.

### Rewrite motivation

I originally wrote the SageMathCloud frontend using progressive-refinement jQuery (no third-party framework beyond that) and the Cassandra database. These were reasonable choices when I started. There are much better approaches now, which are critical to dramatically improving the user experience with SMC, and also growing the developer base. So far SMC has had no nontrivial outside contributions, probably due to the difficulty of understanding the code. In fact, I think nobody besides me has ever even installed SMC, despite these install notes.

We (me, Jon Lee, Nicholas Ruhland) are currently completely rewriting the entire frontend of SMC using React.js, Flux, and RethinkDB. We started this rewrite in June 2015, with Jon being supported by Google Summer of Code (2015), Nich being supported some by NSF grants from Randy Leveque and Rekha Thomas, and with me being unemployed.

### Terrible funding situation

I'm living on credit cards -- I have no NSF grant support anymore, and SageMathCloud is still losing a lot of money every month, and I'm unhappy about this situation. It was either completely quit working on SMC and instead teach or consult a lot, or lose tens of thousands of dollars. I am doing the latter right now. I was very caught off guard, since this is my first summer ever to not have NSF support since I got my Ph.D. in 2000, and I didn't expect to have my grant proposals all denied (which happened in June). There is some modest Angel investment in SageMath, Inc., but I can't bring myself to burn through that money on salary, since it would run out quickly, and I don't want to have to shut down the site due to not being able to pay the hosting bill. I've failed to get any significant free hosting, due to already getting free hosting in the past, and SageMath, Inc. not being in any incubators. For example, we tried very hard to get hosting from Google, but they flatly refused for these two reasons (they gave $60K in hosting to the UW/Sage project in 2012). I'm clearly having trouble transitioning from an academic to an industry funding model. But if there are enough paying customers by January 2016, things will turn around.

Jon, Nich, and I have been working on this rewrite for three months, and hope to finish it by the end of September, when Jon and Nich will become busy with classes again. However, it seems unlikely we'll be able to finish at the current rate. Fortunately, I don't start teaching fulltime again until January, and we put a lot of work into doing a release in mid-August that fully uses RethinkDB and partly uses React.js, so that we can finish the second stage of the rewrite iteratively, without any major technical surprises.

### RethinkDB

Cassandra is an excellent database for many applications, but it is not the right database for SMC, and I'm making no further use of Cassandra.
SMC is a realtime application that does a lot more reading than writing to the database, and it greatly benefits from realtime push updates from the database. I've tried quite hard in the past to build an appropriate architecture for SMC on top of Cassandra, but it is the wrong tool for the job. RethinkDB scales up linearly (with sharding and replication), and has high availability and automatic failover as of version 2.1.2. See https://github.com/rethinkdb/rethinkdb/issues/4678 for my painful path to ensuring RethinkDB actually works for me (the RethinkDB developers are incredibly helpful!).

### React.js

I learned about React.js first from some "random podcast", then got more interested in it when Chris Swenson gave a demo at a Sage Days workshop in San Diego in May 2015. React (+Flux) is a web development framework that actually has solid ideas behind it, backed by an implementation that has been optimized and tested by a highly nontrivial real world application: namely the Facebook website. Even if I had had the idea of React, implementing it in a way that is actually usable would be difficult.

The key idea of React.js is that -- surprisingly -- it is possible to write efficient client-side code that describes how to render the application purely as a function of its state. React is different than jQuery. With jQuery, you write lots of code explaining how to transform the user interface of your application from one complicated state (that you might never have anticipated happening) to another complicated state. When using React.js you don't write code about how your application's visible state changes -- instead you write code to answer the question: "given this state, what should the application look like?" For me, it's a game changer.
This is like what one does when writing video games; the innovation is that some people at Facebook figured out how to practically program this way in a client-side web browser application, then tuned their implementation based on huge amounts of real world data (Facebook has users). Oh, and they open sourced the result and ran several conferences explaining React.

React.js reminds me of when Andrew Wiles proved Fermat's Last Theorem in the mid 1990s. Wiles (and Ken Ribet) had genuine new ideas, which dramatically reshaped the landscape of number theory. The best number theorists quickly realized this and adapted to the new world, pushing the envelope of Wiles' work far beyond what I expected could happen. Other people pretended like Wiles didn't exist and continued studying Fibonacci numbers. I browsed the web development section of Barnes and Noble last night and there were dozens of books on jQuery and zero on React.js. I feel for anybody who tries to learn client-side web development by reading books at Barnes and Noble.

### IPython/Jupyter and PhosphorJS

I recently met with Fernando Perez, who founded IPython/Jupyter. He seemed to tell me that currently 9 people are working fulltime on rewriting the Jupyter web notebook using the PhosphorJS framework. I tried to understand PhosphorJS based on the github page, but couldn't, except to deduce that it is mostly the work of one person from Bloomberg/Continuum Analytics. Fernando told me that they chose PhosphorJS since it is very fast, and that their main motivation is to (1) make Jupyter better use the huge high-resolution monitors at their new institute at Berkeley, and (2) make it easier for developers like me to integrate/extend Jupyter into their applications. I don't understand (2), because PhosphorJS is perhaps the least popular web framework I've ever heard of (is it a web framework -- I can't tell?).
I pushed Fernando to explain why they made that design choice, but didn't really understand the answer, except that they had spent a lot of time investigating alternatives (like React first). I'm intimidated by their resources and concerned that I'm making the wrong choice; however, I just can't understand why they have made what seems to me to be the wrong choice. I hope to understand more at the joint Sage/Jupyter Days 70 that we are organizing together in Berkeley, CA in November. (Edit: see https://github.com/ipython/ipython/issues/8239 for a discussion of why IPython/Jupyter uses PhosphorJS.)

### Tables and RethinkDB

Our rewrite of SMC is built on Tables, Flux and React. Tables are client-side technology I wrote inspired by Facebook's GraphQL/Relay technology (and Meteor, Firebase, etc.); they synchronize data between clients and the backend database in realtime. Tables are defined by a JSON schema file, which specifies the fields in the table, and explains what get and set queries are allowed. A table is a subset of a much larger table in the database, with the subset defined by conditions that are relative to the user making the query. For example, the projects table has one entry for each project that the user is a collaborator on.

Tables are automatically synchronized between the user and the database whenever the database changes, using RethinkDB changefeeds. RethinkDB's innovation is to build realtime updates -- triggered when the result of a query to the database changes -- directly into the database at the lowest level. Of course it is possible to build something that looks the same from the outside using either a message queue (say using RabbitMQ or ZeroMQ), or by watching the replication stream from the database and triggering actions based on that (like Meteor does using MongoDB). RethinkDB's approach seems better to me, putting the abstraction at the right level.
That said, based on mailing list traffic, searches, etc., it seems that very, very few people get RethinkDB yet. Also, despite years of development, RethinkDB only became "production ready" a few months ago, and only got automatic failover a few weeks ago. That said, after ironing out some kinks, I'm now using it with heavy traffic in production and it works very well.

### Flux

Once data is automatically synchronized between the database and web browsers in realtime, we can build everything else on top of this. Facebook also introduced an architecture pattern that they call Flux, which works well with React. It's very different than MVC-style two-way binding frameworks, where objects are directly linked to UI elements, with an object changing causing the UI element to change and vice versa. In SMC each major part of the system has two objects associated to it: Actions and Stores. We think of them in terms of the classical CQRS pattern -- command query responsibility segregation.

Actions are commands -- they are Javascript "functions" that get stuff done, but they do not return values; instead, they impact the state of the store. The store has functions that allow one to query for the state of the store, but they do not change the state of the store. The store functions must only be functions of the internal state of the store and nothing else. They might cache their results and format their output to be very convenient for rendering. But that's it.

Actions usually cause the corresponding store (or stores) to change. When a store changes, it emits a change event, which causes any React components that depend on the store to be updated, which in many cases means they are re-rendered. There are optimizations one can introduce to reduce the amount of re-rendering, which, if one isn't careful, lead to subtle bugs; pretty much the only subtle React UI bugs one hits are caused by such optimizations. When the UI re-renders, the user sees their view of the world change.
The user then clicks buttons, types, etc., which triggers actions, which in turn update stores (and tables, hence propagating changes to the ultimate source of truth, which is the RethinkDB database). As stores update, the UI again updates, and so on.

### Status

So far, we have completely (re-)written the project listing, file manager, help/status page, new file page, project log, file finder, project settings, course management system, account settings, billing, project upgrade system, and file use notifications using React, Flux, and Tables, and the result works well. Bugs are much easier to fix, and it is easy (possible?) to understand the state of the system, since it is defined by the state of the database and the corresponding client-side stores. We've completely rethought everything about the UI in doing the rewrite of the above components, and it has taken several months.

Also, as mentioned above, I completely rewrote most of the backend to use RethinkDB instead of Cassandra. There were also weeks of misery for me after we made the switch over. Even after weeks of thinking/testing/wondering "what could go wrong?", we found out all kinds of surprising little things within hours of pushing everything into production, which took more than a week of sleep-deprived days to sort out.

What's left? We have to rewrite the file editor tabs system, the project tabs system, and all the applications (except course management): editing text files using Codemirror, task lists (which are surprisingly complicated!), color xterm terminals, Jupyter notebooks (which will still use an iframe for the notebook itself), Sage worksheets (with complicated html output embedded in codemirror), the compressed file de-archiver, the LaTeX editor, the wiki and markdown editors, and file chat. We hope to find a clean way to abstract away the various SMC applications as plugins, so that other people can easily write their own applications/plugins that will run inside of SMC.
There will be a rich collection of example plugins to build on, namely the ones listed above, which are all driven by critical-to-us real world applications.

Discussion about this blog post on Hacker News.

## August 28, 2015

### Vince Knight

#### Natural language processing of new jokes from 2015

This is a brief update to a previous post: “Python, natural language processing and predicting funny”. In that post I carried out some basic natural language processing with Python to predict whether or not a joke is funny. In this post I just update that with some more data from this year’s Edinburgh Fringe festival.

Take a look at the ipython notebook which shows graphics and outputs of all the jokes. Interestingly, this year’s winning joke is not deemed funny by the basic model :) but overall the model was 60% right this year (which is pretty good compared to last year).

Here is a summary plot of the classifiers for different thresholds of ‘funny’:

The corresponding plot this year (with the new data):

Take a look at the notebook file and by all means grab the csv file to play (but do let me know how you get on :)).

## August 22, 2015

### Benjamin Hackl

#### Google Summer of Code 2015: Conclusion

The “Google Summer of Code 2015” program ended yesterday, on the 21st of August at 19.00 UTC. This blog entry shall provide a short wrap-up of our GSoC project. The aim of our project was to implement a basic framework that enables us to do computations with asymptotic expressions in SageMath — and I am very happy to say that we very much succeeded in doing so. An overview of all our developments can be found at meta ticket #17601. Although we did not really follow the timeline suggested in my original proposal (mainly because the implementation of the Asymptotic Ring took way longer than originally anticipated) we managed to implement the majority of ideas from my proposal — with the most important part being that our current prototype is stable.
In particular, this means that we do not expect to make major design changes at this point. Every detail of our design is well-discussed and can be explained. Of course, our “Asymptotic Expressions” project is far from finished, and we will continue to extend the functionality of our framework. For example, although working with exponential and logarithmic terms is currently possible, it is not very convenient because the $\log$, $\exp$, and power functions are not fully implemented. Furthermore, it would be interesting to investigate the performance gain obtained by cythonizing the central parts of this framework (e.g. parts of the MutablePoset…) — and so on…

To conclude, I want to thank Daniel Krenn for his hard work and helpful advice as my mentor, as well as the SageMath community for giving me the opportunity to work on this project within the Google Summer of Code program!

## August 17, 2015

### Benjamin Hackl

#### Asymptotic Expressions: Current Developments

Since my last blog entry on the status of our implementation of Asymptotic Expressions in SageMath quite a lot of improvements have happened. Essentially, all the pieces required in order to have a basic working implementation of multivariate asymptotics are there. The remaining tasks within my Google Summer of Code project are:

• Polish the documentation of our minimal prototype, which consists of #17716 and the respective dependencies. Afterwards, we will set this to needs_review.
• Open a ticket for the multivariate asymptotic ring and put together everything that we have written so far there.

In this blog post I want to give some more examples of what can be done with our implementation right now and what we would like to be able to handle in the future.

### Status Quo

After I wrote my last blog entry, we introduced a central idea/interface to our project: short notations.
By using the short notation factory for growth groups (introduced in #18930) it becomes very simple to construct the desired growth group. Essentially, monomial growth groups (cf. #17600), i.e. groups that contain elements of the form variable^power (for a fixed variable and powers from some base ring, e.g. the Integer Ring or even the Rational Field), are represented by variable^base, where the base ring is also specified via its shortened name. The short notation factory then enables us to do the following:

sage: from sage.groups.asymptotic_growth_group import GrowthGroup
sage: G = GrowthGroup('x^ZZ'); G
Growth Group x^ZZ
sage: G.an_element()
x
sage: G = GrowthGroup('x^QQ'); G
Growth Group x^QQ
sage: G.an_element()
x^(1/2)

Naturally, this interface carries over to the generation of asymptotic rings: instead of the (slightly dubious) "monomial" keyword advertised in my last blog entry, we can now actually construct the growth group by specifying it via its short representation:

sage: R.<x> = AsymptoticRing('x^ZZ', QQ); R
Asymptotic Ring <x^ZZ> over Rational Field
sage: (x^2 + O(x))^50
x^100 + O(x^99)

Recently, we also implemented another type of growth group: exponential growth groups (see #19028). These groups represent elements of the form base^variable, where the base is from some multiplicative group. For example, we could do the following:

sage: G = GrowthGroup('QQ^x'); G
Growth Group QQ^x
sage: G.an_element()
(1/2)^x
sage: G(2^x) * G(3^x)
6^x
sage: G(5^x) * G((1/7)^x)
(5/7)^x

Note: unfortunately, we have not yet implemented a function that allows taking an element of some growth group (e.g. x from a monomial growth group) as the variable in an exponential growth group. Implementing some way to “change” between growth groups by taking the log or the exponential function is one of our next steps.

We also made this short notation a central interface for working with cartesian products. This is implemented in #18587.
For example, this allows us to construct growth groups containing elements like $2^x \sqrt[5]{x^2} \log(x)^2$:

sage: G = GrowthGroup('QQ^x * x^QQ * log(x)^ZZ'); G
Growth Group QQ^x * x^QQ * log(x)^ZZ
sage: G.an_element()
(1/2)^x * x^(1/2) * log(x)
sage: G(2^x * x^(2/5) * log(x)^2)
2^x * x^(2/5) * log(x)^2

Simple parsing from the symbolic ring (and from strings) is implemented. Like I have written above, operations like 2^G(x) or log(G(x)) are among the next steps on our roadmap.

### Further Steps

Of course, having an easy way to generate growth groups (and thus also asymptotic rings) is nice — however, it would be even better if the process of finding the correct parent were even more automated. Unfortunately, this requires some non-trivial effort regarding the pushout construction — which will certainly not happen within the GSoC project.

As soon as we have an efficient way to “switch” between factors of a growth group (e.g. by taking the logarithm or the exponential function), this has to be carried over up to the asymptotic ring, enabling operations like

sage: 2^(x^2 + O(x))
2^(x^2) * 2^(O(x))

where the output could also be 2^(x^2) * O(x^g), where $g$ is determined by series_precision(). Division of asymptotic expressions can be realized with just about the same idea, for example:

$\frac{1}{x^2 + O(x)} = \frac{1}{x^2} \frac{1}{1 + O(1/x)} = x^{-2} + O(x^{-3}),$

and so on. If an infinite series occurs, it will have to be cut using an $O$-term, most likely depending on series_precision() as well.

Ultimately, we would also like to incorporate, for example, Stirling’s approximation of the factorial, such that we could do something like

sage: n.factorial()
sqrt(2*pi) * e^(n*log(n)) * (1/e)^n * n^(1/2) + ...

which can then be used to obtain asymptotic expansions of binomial coefficients like $\binom{2n}{n}$:

sage: (2*n).factorial() / (n.factorial()^2)
1/sqrt(pi) * 4^n * n^(-1/2) + ...
As you can see, there is still a lot of work to do within our “Asymptotic Expressions” project — nevertheless, with the minimal working prototype and the ability to create cartesian products of growth groups, the foundation for all of this is already implemented!

## August 16, 2015

### Michele Borassi

#### Conclusion of the Main Part of the Project

Hi! In this post, I will summarize the results obtained with the inclusion in Sage of the Boost and igraph libraries. This was the main part of my Google Summer of Code project, and it was completed yesterday, when ticket 19003 was closed.

We have increased the number of graph algorithms available in Sage from 66 to 98 (according to the list used in the initial comparison of the graph libraries [1]). Furthermore, we decreased the running time of several Sage algorithms: in some cases, we have been able to improve the asymptotic running time, obtaining up to 10000x improvements in our tests. Finally, during the inclusion of external algorithms, we have refactored and cleaned some of Sage's source code, like the shortest path routines: we have standardized the input and the output of 15 routines related to shortest paths, and we have removed duplicate code as much as possible.

More specifically, the first part of the project was the inclusion of the Boost graph library: since the library is only available in C++, we had to develop an interface. This interface lets us easily convert a Sage graph into a Boost graph and run algorithms on the converted graph. Then, we have written routines to re-translate the output into a Sage-readable format: this way, the complicated Boost library is "hidden", and users can interact with it as they do with Sage.
In particular, we have interfaced the following algorithms:

• Edge connectivity (trac.sagemath.org/ticket/18564);
• Clustering coefficient (trac.sagemath.org/ticket/18811);
• Cuthill-McKee and King vertex orderings (trac.sagemath.org/ticket/18876);
• Minimum spanning tree (trac.sagemath.org/ticket/18910);
• Dijkstra, Bellman-Ford, Johnson shortest paths (trac.sagemath.org/ticket/18931).

All these algorithms were either not available in Sage, or quite slow compared to the Boost routines. As far as we know, Boost does not offer other algorithms that improve on Sage's algorithms: however, if such algorithms are developed in the future, it will be very easy to include them using the new interface.

In the second part of the project, we included igraph: since this library already offers a Python interface, we decided to include it as an optional package (before it becomes a standard package, at least a year should pass [2]). To install the package, it is enough to type the following instructions from the Sage root folder:

sage -i igraph # To install the igraph C core
sage -i python_igraph # To install the Python interface

Then, we can easily interact with igraph: for a list of available routines, it is enough to type "igraph." and press tab twice. To convert a Sage graph g_sage into an igraph graph it is enough to type g_igraph = g_sage.igraph_graph(), while a Sage graph can be instantiated from an igraph graph through g_sage = Graph(g_igraph) or g_sage = DiGraph(g_igraph). This way, all igraph algorithms are now available in Sage.

Furthermore, we have included the igraph maximum flow algorithm inside the corresponding Sage function, obtaining significant improvements (for more information and benchmarks, we refer to ticket 19003 [3]).

In conclusion, I think the project reached its main goal: the original plan was followed very closely, and we have been able to overcome all problems.
Before closing this post, I would like to thank the many people who helped me with great advice, and who provided great solutions to all the problems I faced. First of all, my mentor David Coudert: he always answered all my queries very quickly, and he gave me great suggestions to improve the quality of the code I wrote. Then, a very big help came from Nathann Cohen, who often cooperated with David in reviewing my code and proposing new solutions. Moreover, I have to thank Martin Cross, who gave me good suggestions on the Boost graph library, and Volker Braun, who closed all my tickets. Finally, I have to thank the whole Sage community for giving me this great opportunity!

[1] https://docs.google.com/spreadsheets/d/1Iu1hkQtRn9J-sgfZbQTu2RoXzyjoMEWP5-cm3nAwnWE/edit?usp=sharing
[2] http://doc.sagemath.org/html/en/developer/coding_in_other.html
[3] http://trac.sagemath.org/ticket/19003

## August 09, 2015

### Vince Knight

#### Why I am a paying member of cloud.sagemath

If you are not familiar with Sagemath, it is a free open source mathematics package that does simple things like expand algebraic expressions as well as far more complex things (optimisation, graph theory, combinatorics, game theory etc…). Cloud.sagemath is a truly amazing tool, not just for Sage but for scientific computation in general, and it’s free. Completely 100% free. In this post I’ll explain why I pay for it.

A while ago, a colleague and I were having a chat about the fact that our site Maple license hadn’t been renewed fast enough (or something similar to that). My colleague was fairly annoyed by this, saying something like: ‘We are kind of like professional athletes; if I played soccer at a professional club I would have the best facilities available to me. There would not be a question of me having the best boots.’ Now I don’t think we ever finished this conversation (or at least I don’t really remember what I said) but this is something that’s stayed with me for quite a while.
First of all: I think there is probably a very large proportion of professional soccer players who do not play at the very top level and so do not enjoy having access to the very best facilities (I certainly wouldn’t consider myself the Ronaldo of mathematics…).

Secondly: Mathematicians are (in some ways) way cooler than soccer players. We are somewhat like magicians; in the past we have not needed much more than a pencil and some paper to work our craft. Whilst a chemist/physicist/medical researcher needs a lab and/or other things, we can pretty much work with just a whiteboard. We are basically magicians. We can make something from nothing.

Since moving to open source software for all my research and teaching, this is certainly how I’ve felt. Before discovering open source tools I needed to make sure I had the correct licence before I could work, but this is no longer the case. I just need a very basic computer (I bought a thinkpad for £60 the other day!) and I am just as powerful as I could want to be.

This is even more true with cloud.sagemath. Anyone can use a variety of scientific computing tools for no cost whatsoever (not even a cost associated with the time spent installing software): it just works. I have used this to work on sage source code with students, carry out research and also to deliver presentations: it’s awesome.

So, why do I pay $7 a month to use it?

Firstly because it gives me the ability to move some projects to servers that are supposedly more robust. I have no doubt that they are more robust but in all honesty I can’t say I’ve seen problems with the ‘less’ robust servers (150 of my students used them last year and will be doing so again in the Autumn).

The main reason I pay to use cloud.sagemath is because I can afford to.

This was put in very clear terms to me during the organisation of DjangoCon Europe. The principle at Python conferences is that everyone pays to attend. This in turn ensures that funds are available for people who cannot afford to pay to attend.

I am in a lucky enough financial position that for about the price of two fancy cups of coffee a month I can help support an absolutely amazing project that helps everyone and anyone have the same powers a magician does. This helps (although my contribution is obviously a very small part of it) ensure that students and anyone else who cannot afford to help support the project, can use Sage.

## August 01, 2015

### Vince Knight

#### Simulating continuous Markov chains

In a blog post I wrote in 2013, I showed how to simulate a discrete Markov chain. In this post (written with a bit of help from Geraint Palmer) we’ll show how to do the same with a continuous chain, which can be used to speedily obtain steady state distributions for models of queueing processes, for example.

A continuous Markov chain is defined by a transition rate matrix which shows the rates at which transitions from one state to another occur. Here is an example of a continuous Markov chain:

This has transition rate matrix $Q$ given by:

$$Q = \begin{pmatrix} -3 & 2 & 1 \\ 1 & -5 & 4 \\ 1 & 8 & -9 \end{pmatrix}$$

The diagonals have negative entries, which can be interpreted as a rate of no change. To obtain the steady state probabilities $\pi$ for this chain we can solve the following matrix equation:

$$\pi Q = 0$$

If we include the fact that the sum of $\pi$ must be 1 (so that it is indeed a probability vector), we can obtain the probabilities in Sagemath using the following:

You can run this here (just click on ‘Evaluate’):

Q = matrix(QQ, [[-3, 2, 1], [1, -5, 4], [1, 8, -9]])
(transpose(Q).stack(vector([1,1,1])).solve_right(vector([0,0,0,1])))

This returns:

(1/4, 1/2, 1/4)

Thus, if we were to randomly observe this chain:

• 25% of the time it would be in state 1;
• 50% of the time it would be in state 2;
• 25% of the time it would be in state 3.
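As a quick sanity check (a plain-Python sketch, not part of the original post), we can verify that this vector satisfies $\pi Q = 0$ by multiplying it into the rate matrix:

```python
# Check that pi * Q = 0 for pi = (1/4, 1/2, 1/4) and the rate matrix above.
Q = [[-3, 2, 1], [1, -5, 4], [1, 8, -9]]
pi = [1 / 4, 1 / 2, 1 / 4]

piQ = [sum(pi[i] * Q[i][j] for i in range(3)) for j in range(3)]
print(piQ)  # every entry should be 0
```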

Now, the Markov chain in question means that if we’re in the first state, the rate at which a change to the second state happens is 2, and the rate at which a change to the third state happens is 1.

This is analogous to waiting at a bus stop in the first city. Buses to the second city arrive randomly at a rate of 2 per hour, and buses to the third city arrive randomly at a rate of 1 per hour. Everyone waiting for a bus catches the first one that arrives. So at steady state the population will be spread amongst the three cities according to $\pi$.

Consider yourself at this bus stop. As all this is Markovian, we do not care what time you arrived at the bus stop (memoryless property). You expect the bus to the second city to arrive half an hour from now, with randomness, and the bus to the third city to arrive one hour from now, with randomness.

To simulate this we can sample two random numbers from the exponential distribution and find out which bus arrives first and ‘catch that bus’:

import random
[random.expovariate(2), random.expovariate(1)]

The above returned (for this particular instance):

[0.5003491524841699, 0.6107995795458322]

So here it’s going to take .5 hours for a bus to the second city to arrive, whereas it would take .61 hours for a bus to the third. So we would catch the bus to the second city after spending .5 hours in the first city.
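This race between exponential clocks can also be checked numerically. The sketch below (the seed and trial count are arbitrary choices, not from the post) estimates the probability that the bus to the second city wins; for competing exponentials with rates 2 and 1 this probability is 2/(2+1) = 2/3:

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

# Estimate the probability that the bus to the second city (rate 2)
# arrives before the bus to the third city (rate 1).
trials = 100000
wins = sum(random.expovariate(2) < random.expovariate(1) for _ in range(trials))
print(wins / trials)  # should be close to 2/3
```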

We can use this to write a function that will take a transition rate matrix, simulate the transitions and keep track of the time spent in each state:

def sample_from_rate(rate):
    import random
    if rate == 0:
        return oo
    return random.expovariate(rate)

def simulate_cmc(Q, time, warm_up):
    Q = list(Q)  # In case a matrix is input
    state_space = range(len(Q))  # Index the state space
    time_spent = {s: 0 for s in state_space}  # Set up a dictionary to keep track of time
    clock = 0  # Keep track of the clock
    current_state = 0  # First state
    while clock < time:
        # Sample the transitions
        sojourn_times = [sample_from_rate(rate) for rate in Q[current_state][:current_state]]
        sojourn_times += [oo]  # An infinite sojourn to the same state
        sojourn_times += [sample_from_rate(rate) for rate in Q[current_state][current_state + 1:]]

        # Identify the next state
        next_state = min(state_space, key=lambda x: sojourn_times[x])
        sojourn = sojourn_times[next_state]
        clock += sojourn
        if clock > warm_up:  # Keep track of time if past the warm up period
            time_spent[current_state] += sojourn
        current_state = next_state  # Transition

    pi = [time_spent[state] / sum(time_spent.values()) for state in state_space]  # Calculate probabilities
    return pi

Here are the probabilities from the same Markov chain as above:

which gave (on one particular run):

[0.25447326473556037, 0.49567517998307603, 0.24985155528136352]
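The exact invocation is not shown above. A self-contained plain-Python rendering of the same experiment (Sage's `oo` replaced by `float('inf')`, with arbitrary choices of seed, `time` and `warm_up`) runs along these lines:

```python
import random

oo = float('inf')  # plain-Python stand-in for Sage's infinity

def sample_from_rate(rate):
    if rate == 0:
        return oo
    return random.expovariate(rate)

def simulate_cmc(Q, time, warm_up):
    state_space = range(len(Q))
    time_spent = {s: 0 for s in state_space}
    clock, current_state = 0, 0
    while clock < time:
        # Race exponential clocks for every transition out of the current state
        sojourn_times = [sample_from_rate(r) for r in Q[current_state][:current_state]]
        sojourn_times += [oo]  # never "transition" to the current state itself
        sojourn_times += [sample_from_rate(r) for r in Q[current_state][current_state + 1:]]
        next_state = min(state_space, key=lambda s: sojourn_times[s])
        sojourn = sojourn_times[next_state]
        clock += sojourn
        if clock > warm_up:  # only record time once past the warm up period
            time_spent[current_state] += sojourn
        current_state = next_state
    return [time_spent[s] / sum(time_spent.values()) for s in state_space]

random.seed(1)  # arbitrary seed
Q = [[-3, 2, 1], [1, -5, 4], [1, 8, -9]]
print(simulate_cmc(Q, 2000, 100))  # should be close to (1/4, 1/2, 1/4)
```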

This approach was used by Geraint Palmer, who is doing a PhD with Paul Harper and me. He used this to verify that calculations were being carried out correctly when he was trying to fit a model. James Campbell and I are going to try to use this to get an approximation for bigger chains that cannot be solved analytically in a reasonable amount of time. In essence, the simulation of the Markov chain makes sure we spend time calculating probabilities in states that are common.

## July 27, 2015

### Michele Borassi

#### Including igraph Library

Hello!
In this new blog post, I would like to discuss the inclusion of the igraph library in Sage.
Up to now, I have interfaced Sagemath with the Boost graph library, in order to run Boost algorithms inside Sage. Now, I want to do the same with igraph, the other major C++ graph library, which stands out because it contains 62 routines, 29 of which are not available in Sage. Moreover, the igraph library is very efficient, as shown in [1] and in the previous post on library comparison.

This inclusion of igraph in Sage is quite complicated, because we have to include a new external library [2] (while in the Boost case we already had the sources). We started this procedure through ticket 18929: unfortunately, after this ticket is closed, igraph will only be an optional package, and we will have to wait one year before it becomes standard. The disadvantage of optional packages is that they must be installed before being able to use them; however, the installation is quite easy: it is enough to run Sage with option -i python_igraph.

After the installation, using the igraph library is very simple, because igraph already provides a Python interface, which can be used in Sage. To transform the Sagemath network g_sage into an igraph network g_igraph, it is enough to type g_igraph = g_sage.igraph_graph(), while to create a Sagemath network from an igraph network it is enough to type g_sage = Graph(g_igraph) or g_sage = DiGraph(g_igraph). After this conversion, we can use all the routines offered by igraph!
For instance, if we want to create a graph through the preferential attachment model, we can do it with the Sagemath routine, or with the igraph routine:

sage: G = graphs.RandomBarabasiAlbert(100, 2)
sage: G.num_verts()
100
sage: G = Graph(igraph.Graph.Barabasi(100, int(2)))
sage: G.num_verts()
100

The result is the same (apart from randomness), but the time is very different:

sage: import igraph
sage: %timeit G = Graph(igraph.Graph.Barabasi(10000000, int(2)))
1 loops, best of 3: 46.2 s per loop

sage: G = graphs.RandomBarabasiAlbert(10000000, 2)
Stopped after 3 hours.

Alternatively, we may use igraph to generate graphs with the Forest-Fire algorithm, which is not available in Sagemath:

sage: G = Graph(igraph.Graph.Forest_Fire(10, 0.1))
sage: G.edges()
[(0, 1, None), (0, 2, None), (1, 7, None), (2, 3, None), (2, 4, None), (3, 5, None), (3, 8, None), (4, 6, None), (8, 9, None)]

We may also do the converse: transform a Sage network into an igraph network and apply an igraph algorithm. For instance, we can use label propagation to find communities (a task which is not implemented in Sage):

sage: G = graphs.CompleteGraph(5)+graphs.CompleteGraph(5)
sage: com = G.igraph_graph().community_label_propagation()
sage: len(com)
2
sage: com[0]
[0, 1, 2, 3, 4]
sage: com[1]
[5, 6, 7, 8, 9]

The algorithm found the two initial cliques as communities.

I hope that these examples are enough to show the excellent possibilities offered by igraph library, and that these features will soon be available in Sagemath!

[2] http://doc.sagemath.org/html/en/developer/packaging.html

### Vince Knight

#### A talk on computational game theory in Sagemath

Today, Cardiff University School of Mathematics students James Campbell and Hannah Lorrimore, Google Summer of Code student Tobenna P. Igwe (a PhD student at the University of Liverpool) and I presented the current game theoretic capabilities of Sagemath.

This talk happened as part of a two day visit to see Dima Pasechnik to work on the stuff we’ve been doing, and the visit was kindly supported by CoDiMa (an EPSRC funded project to support the development of GAP and Sagemath).

Here is the video of the talk:

Here is a link to the sage worksheet we used for the talk.

Here are some photos I took during the talk:

and here are some I took of us working on code afterwards:

Here is the abstract of the talk:

Game Theory is the study of rational interaction and is getting increasingly important in CS. The ability to quickly compute a solution concept for a nontrivial (non-)cooperative game helps a lot in practical and theoretic work, as well as in teaching. This talk will describe and demonstrate the game theoretic capabilities of Sagemath (http://www.sagemath.org/), a Python library, described as having the following mission: ‘Creating a viable free open source alternative to Magma, Maple, Mathematica and Matlab’.

The talk will describe algorithms and classes that are implemented for the computation of Nash equilibria in bimatrix games. These include:

• A support enumeration algorithm;
• A reverse search algorithm through the lrs library;
• The Lemke-Howson algorithm using the Gambit library (https://github.com/gambitproject/gambit).

In addition to this, demonstrations of further capabilities that are actively being developed will also be given:

• Tests for degeneracy in games;
• A class for extensive form games which include the use of the graph theoretic capabilities of Sage.

The following two developments which are being carried out as part of a Google Summer of Code project will also be demonstrated:

• An implementation of the Lemke-Howson algorithm;
• Extensions to N player games;

Demonstrations will use the (free) online tool cloud.sagemath which allows anyone with connectivity to use Sage (and solve game theoretic problems!). Cloud.sagemath also serves as a great teaching and research tool with access to not only Sage but Jupyter (Ipython) notebooks, R, LaTeX and a variety of other software tools.

The talk will concentrate on strategic non-cooperative games but matching games and characteristic function games will also be briefly discussed.

## July 23, 2015

### Vince Knight

#### Using the two thirds of the average game in class

This past week I have been delighted to have a short pedagogic paper accepted for publication in MSOR Connections. The paper is entitled: “Playing Games: A Case Study in Active Learning Applied to Game Theory”. The journal is open access and you can see a pre print here. As well as describing some literature on active learning I also present some data I’ve been collecting (with the help of others) as to how people play two subsequent plays of the two thirds of the average game (and talk about another game also).

In this post I’ll briefly put up the results here as well as mention a Python library I’m working on.

If you’re not familiar with it, the two thirds of the average game asks players to guess a number between 0 and 100. The guess closest to 2/3 of the average of all the guesses is declared the winner.
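For concreteness, the winning guess for a single round can be computed as follows (a minimal sketch; the helper name and the example guesses are made up, not from the paper):

```python
def two_thirds_winner(guesses):
    """Return the guess closest to 2/3 of the average guess."""
    target = 2 / 3 * sum(guesses) / len(guesses)
    return min(guesses, key=lambda g: abs(g - target))

# The average is 35, so the target is 2/3 * 35 = 23.33...; 20 is closest.
print(two_thirds_winner([50, 40, 30, 20]))
```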

I use this all the time in class and during outreach events. I start by asking participants to play without more explanation than the basic rules of the game. Following this, as a group we go over some simple best response dynamics that indicate that the equilibrium play for the game is for everyone to guess 0. After this explanation, everyone plays again.

Below you can see how this game has gone as a collection of all the data I’ve put together:

You will note that some participants actually increase their second guess, but in general we see a possible indication (based on two data points, so obviously this is not meant to be a conclusive statement) of convergence towards the theoretical equilibrium.

Here is a plot showing the relationship between the first and second guess (when removing the guesses that increase, although as you can see in the paper this does not make much difference):

The significant linear relationship between the guesses is given by:

So a good indication of what someone will guess in the second round is that it will be about a third of their first round guess.

Here is some Sage code that produces the cobweb diagram assuming the following sequence represents each guess (using code by Marshall Hampton):

That plot shows the iterations of the hypothetical guesses if we were to play more rounds :)
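The Sage code producing the cobweb diagram is not reproduced here; a bare-bones plain-Python version of the underlying iteration (assuming the map $g \mapsto g/3$ suggested by the fitted relationship, with an arbitrary starting guess) is:

```python
def iterate_guesses(start, rounds):
    """Iterate the hypothetical map g -> g / 3 for a number of rounds."""
    guesses = [start]
    for _ in range(rounds):
        guesses.append(guesses[-1] / 3)
    return guesses

print(iterate_guesses(66, 5))  # the guesses head towards the equilibrium at 0
```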

The other thing I wanted to point to in this blog post is the twothirds library, which will potentially allow anyone to analyse these games quickly. I’m still working on it but if it’s of interest please do jump in :) I have put up a Jupyter notebook demoing what it can do so far (which is almost everything but with some rough edges). If you want to try it out, download that notebook and run:

$ pip install twothirds

I hope that once the library is set up anyone who uses it could simply send over data of game plays via PR, which would help update the above plots and conclusions :)

## July 16, 2015

### Benjamin Hackl

#### Computing with Asymptotic Expressions

It has been quite some time since my last update on the progress of my Google Summer of Code project, which has two reasons. On the one hand, I have been busy because of the end of the semester, as well as because of the finalization of my Master’s thesis — and on the other hand, it is not very interesting to write a post discussing and implementing rather technical details. Nevertheless, Daniel Krenn and I have been quite busy in order to bring asymptotic expressions to SageMath. Fortunately, these efforts are starting to become quite fruitful. In this post I want to discuss our current implementation roadmap (i.e. not only for the remaining Summer of Code, but also for the time afterwards), and give some examples of what we are currently able to do.

### Structure and Roadmap

An overview of the entire roadmap can be found here (trac #17601). Recall that the overall goal of this project is to bring asymptotic expressions like $2^n + n^2 \log n + O(n)$ to Sage. Our implementation (which aims to be as general and expandable as possible) tackles this problem with a three-layer approach:

• GrowthGroups and GrowthElements (trac #17600). These elements and parents manage the growth (and just the growth!) of a summand in an asymptotic expression like the above. The simplest cases are monomial and logarithmic growth groups. For example, their elements are given by $n^r$ and $\log(n)^r$, where the exponent $r$ is from some ordered ring like $\mathbb{Z}$ or $\mathbb{Q}$.
Both cases (monomial and logarithmic growth groups) can be handled in the current implementation — however, growth elements like $n^2 \log n$ are intended to live in the cartesian product of a monomial and a logarithmic growth group (in the same variable). Parts of this infrastructure are already prepared (see trac #18587).
• AsymptoticTerms and TermMonoids (trac #17715). While GrowthElements only represent the growth, AsymptoticTerms carry more information: basically, they represent a summand in an asymptotic expression. There are different classes for each type of asymptotic term (e.g. ExactTerm and OTerm — with more to come). In addition to a growth element, some types of asymptotic terms (like exact terms) also possess a coefficient.
• AsymptoticExpression and AsymptoticRing (trac #17716). This is what we are currently working on, and we do have a running prototype! The version that can be found on trac is only missing some doctests and a bit of documentation. Asymptotic expressions are the central objects within this project, and essentially they are sums of several asymptotic terms. In the background, we use a special data structure (“mutable posets“, trac #17693) in order to model the (partial) order induced by the various growth elements belonging to an asymptotic expression. This allows us to perform critical operations like absorption (when an $O$-term absorbs “weaker” terms) efficiently and in a simple way.

The resulting minimal prototype can, in some sense, be compared to Sage’s PowerSeriesRing: however, we also allow non-integer exponents, and extending this prototype to work with multivariate expressions should not be too hard now, as the necessary infrastructure is there. Following the finalization of the minimal prototype, there are several improvements to be made.
Here are some examples:

• Besides addition and multiplication, we also want to divide asymptotic expressions, and higher-order operations like exponentiation and taking the logarithm would be interesting as well.
• Also, conversion from, for example, the symbolic ring is important when it comes to the usability of our tools. We will implement and enhance this conversion gradually.

### Examples

An asymptotic ring (over a monomial growth group with coefficients and exponents from the rational field) can be created with

sage: R.<x> = AsymptoticRing('monomial', QQ); R
Asymptotic Ring over Monomial Growth Group in x over Rational Field with coefficients from Rational Field

Note that we marked the code as experimental, meaning that you will see some warnings regarding the stability of the code. Now, as we have an asymptotic ring, we can do some calculations. For example, take $(2\sqrt{x} + O(1))^{15}$:

sage: (2*x^(1/2) + O(x^0))^15
O(x^7) + 32768*x^(15/2)

We can also have a look at the underlying structure:

sage: expr = (x^(3/7) + 2*x^(1/5)) * (x + O(x^0)); expr
O(x^(3/7)) + 2*x^(6/5) + 1*x^(10/7)
sage: expr.poset
poset(O(x^(3/7)), 2*x^(6/5), 1*x^(10/7))
sage: print expr.poset.full_repr()
poset(O(x^(3/7)), 2*x^(6/5), 1*x^(10/7))
+-- null
|   +-- no predecessors
|   +-- successors: O(x^(3/7))
+-- O(x^(3/7))
|   +-- predecessors: null
|   +-- successors: 2*x^(6/5)
+-- 2*x^(6/5)
|   +-- predecessors: O(x^(3/7))
|   +-- successors: 1*x^(10/7)
+-- 1*x^(10/7)
|   +-- predecessors: 2*x^(6/5)
|   +-- successors: oo
+-- oo
|   +-- predecessors: 1*x^(10/7)
|   +-- no successors

As you might have noticed, the “O”-constructor that is used for the PowerSeriesRing and related structures can also be used here. In particular, $O(\mathit{expr})$ acts exactly as expected:

sage: expr
O(x^(3/7)) + 2*x^(6/5) + 1*x^(10/7)
sage: O(expr)
O(x^(10/7))

Of course, the usual rules for computing with asymptotic expressions hold:

sage: O(x) + O(x)
O(x)
sage: O(x) - O(x)
O(x)

So far, so good.
Our next step is making the multivariate growth groups usable for the AsymptoticRing and then improving the overall user interface of the ring.

## July 09, 2015

### Michele Borassi

#### New Boost Algorithms

Hello! My Google Summer of Code project is continuing, and I am currently trying to include more Boost algorithms in Sage. In this post, I will make a list of the main algorithms I'm working on.

#### Clustering Coefficient

If two different people have a friend in common, there is a high chance that they will become friends: this is the property that the clustering coefficient tries to capture. For instance, if I pick two random people, very probably they will not know each other, but if I pick two of my acquaintances, very probably they will know each other. In this setting, the clustering coefficient of a person is the probability that two random acquaintances of this person know each other.

In order to quantify this phenomenon, we can formalize everything in terms of graphs: people are nodes and two people are connected if they are acquaintances. Hence, we define the clustering coefficient of a vertex $v$ in a graph $G=(V,E)$ as:

$$\frac{2|\{(x,y) \in E:x,y \in N_v\}|}{\deg(v)(\deg(v)-1)}$$

where $N_v$ is the set of neighbors of $v$ and $\deg(v)$ is the number of neighbors of $v$. This is exactly the probability that two random neighbors of $v$ are linked with an edge.

As part of my work, the Boost algorithm to compute the clustering coefficient has been included in Sagemath; it is more efficient than the previous algorithm, which was based on NetworkX:

sage: g = graphs.RandomGNM(20000,100000)
sage: %timeit g.clustering_coeff(implementation='boost')
10 loops, best of 3: 258 ms per loop
sage: %timeit g.clustering_coeff(implementation='networkx')
1 loops, best of 3: 3.99 s per loop

But Nathann did better: he implemented a clustering coefficient algorithm from scratch, using Cython, and he managed to outperform the Boost algorithm, at least when the graph is dense.
Congratulations, Nathann! However, when the graph is sparse, the Boost algorithm still seems to be faster.

#### Dominator tree

Let us consider a road network, that is, a graph where vertices are street intersections and edges are streets. The question is: if I close an intersection, where am I still able to go, assuming I am at home? The answer to this question can be summarized in a dominator tree.

Assume that, in order to go from my home to my workplace, I can choose many different paths, but all of these paths pass through the café, and then through the square (that is, if either the café or the square is closed, there is no way I can get to work). In this case, in the dominator tree, the parent of my workplace is the square, the parent of the square is the café, and the parent of the café is my home, which is also the root of the tree.

More formally, given a graph $G$, the dominator tree of $G$ rooted at a vertex $v$ is defined by connecting each vertex $x$ with the last vertex $y \neq x$ that belongs to every path from $v$ to $x$ (note that this vertex always exists, because $v$ belongs to every path from $v$ to $x$).

Until now, Sagemath did not have a routine to compute the dominator tree; I have been able to include the Boost algorithm. Unfortunately, due to several suggestions and improvements in the code, the ticket is not closed yet. Hopefully, it will be closed very soon!

#### Cuthill-McKee ordering / King ordering

Let us consider a graph $G=(V,E)$: a matrix $M$ of size $|V|$ can be associated to this graph, where $M_{i,j}=1$ if and only if there is an edge between vertices $i$ and $j$. In some cases, this matrix has specific properties that can be exploited for many purposes, like speeding up algorithms. One of these properties is the bandwidth, which measures how far the matrix is from a diagonal matrix: it is defined as $\max_{M_{i,j} \neq 0}|i-j|$.
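This definition can be evaluated directly. Here is a small plain-Python sketch (illustrative only; it merely evaluates the bandwidth of a given ordering, it does not search for a good one, which is what the Cuthill-McKee and King heuristics are for):

```python
def bandwidth(edges, order):
    """Bandwidth max |i-j| over all edges, for a given vertex ordering.

    `order` maps each vertex to its position (row/column index) in the
    associated matrix M.
    """
    return max(abs(order[u] - order[v]) for u, v in edges)

# a path graph 0-1-2-3: the natural order gives the optimal bandwidth 1 ...
edges = [(0, 1), (1, 2), (2, 3)]
natural = {0: 0, 1: 1, 2: 2, 3: 3}
# ... while swapping two inner vertices inflates the bandwidth to 2
shuffled = {0: 0, 1: 2, 2: 1, 3: 3}
```

The gap between the two orderings is exactly what a reordering heuristic tries to close.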
A small bandwidth might help in computing several properties of the graph, like eigenvalues and eigenvectors. Since the bandwidth depends on the order of the vertices, we can try to permute them in order to obtain a smaller value: Sage has a routine that performs this task. However, this routine is very slow, and it is prohibitively expensive even for very small graphs (in any case, finding an optimal ordering is NP-hard). Hence, researchers have developed heuristics to compute good orderings: the most important ones are the Cuthill-McKee ordering and the King ordering. Boost contains both routines, but Sage does not: for this reason, I would like to add these two functions. The code is almost ready, but part of it depends on the code of the dominator tree: as soon as the dominator tree is reviewed, I will open a ticket on these two routines!

#### Dijkstra/Bellman-Ford/Johnson shortest paths

Let us consider again a road network. In this case, we are building a GPS software, which has to compute the shortest path between the place where we are and the destination. The textbook algorithm that performs this task is Dijkstra's algorithm, which computes the distance between the starting point and any other reachable point (of course, there are more efficient algorithms involving a preprocessing, but Dijkstra's is the simplest, and its running time is asymptotically optimal). This algorithm is already implemented in Sagemath.

Let's spice things up: what if some streets have negative length? For instance, we like a street so much that we are willing to drive 100km more just to pass through that street, which is 50km long. It is as if that street were -50km long! First of all, under these assumptions, a shortest path might not exist: if there is a cycle with negative length, we may drive along that cycle as many times as we want, decreasing the distance to the destination more and more. At the very least, we have to assume that no negative cycle exists.
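Assuming no negative cycle, distances are still well defined, and the relaxation-based Bellman-Ford approach handles negative edges and detects negative cycles. A minimal plain-Python sketch (for illustration only; not the Boost routine):

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths allowing negative edge lengths.

    `edges` is a list of (u, v, length) triples for a directed graph on
    vertices 0..n-1.  Returns (distances, has_negative_cycle).
    """
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    # n-1 rounds of relaxing every edge suffice if there is no negative cycle
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one extra round: any further improvement witnesses a negative cycle
    negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, negative_cycle

# a 4-vertex graph with one negative (but harmless) edge 1 -> 2 of length -2
edges = [(0, 1, 4), (0, 2, 5), (1, 2, -2), (2, 3, 3)]
dist, bad = bellman_ford(4, edges, 0)
```

Here the route 0-1-2 (length 2) beats the direct edge 0-2 (length 5) precisely because of the negative edge.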
Even with this assumption, Dijkstra's algorithm does not work, and we have to use the Bellman-Ford algorithm, which is less efficient but more general. Now, assume that we want something more: we are trying to compute the distance between all possible pairs of vertices. The first possibility is to run the Bellman-Ford algorithm $n$ times, where $n$ is the number of nodes in the graph. But there is a better alternative: it is possible to run the Bellman-Ford algorithm only once, and then to modify the lengths of the edges so that all lengths are positive and shortest paths are unchanged. This way, we run Dijkstra's algorithm $n$ times on the modified graph, obtaining a better running time. This is Johnson's algorithm.

Both the Bellman-Ford and Johnson algorithms are implemented in Boost but not in Sagemath. As soon as I manage to create weighted Boost graphs (that is, graphs where edges have a length), I will include these two algorithms as well!

## June 25, 2015

### Michele Borassi

#### Edge Connectivity through Boost Graph Library

After two weeks, we have managed to interface Boost and Sagemath! However, the interface was not as simple as it seemed. The main problem we found is the genericity of Boost: almost all Boost algorithms work with several graph implementations, which differ in the data structures used to store edges and vertices. For instance, the code that implements breadth-first search works whether the adjacency list of a vertex v is a vector, a list, a set, etc. This result is accomplished by using templates [1]. Unfortunately, the only way to interface Sagemath with C++ code is Cython, which is not template-friendly yet. In particular, Cython provides genericity through fused types [2], whose support is still experimental, and which do not offer full integration with templates [3-5].
After a thorough discussion with David, Nathann, and Martin (thank you very much!), we have found a solution: for the input, we have defined a fused type "BoostGenGraph", including all Boost graph implementations, and all functions that interface Boost and Sagemath use this fused type. This way, for each algorithm, we may choose the most suitable graph implementation. For the output, whose type might depend on the input type, we use C++ to transform it into a "standard" type (a vector, or a struct).

We like this solution because it is very clean, and it allows us to exploit Boost's genericity without any copy-paste. Still, there are some drawbacks:

1) Cython fused types do not allow nested calls of generic functions;
2) Boost graphs cannot be converted to Python objects: they must be defined and deleted in the same Cython function;
3) No variable can have a generic type, apart from the arguments of generic functions.

These drawbacks will be overcome as soon as Cython makes templates and generic types interact: this way, we will be able to create a much stronger interface, by writing a graph backend based on Boost, so that the user can create, convert, and modify Boost graphs directly from Python. However, for the moment, we will implement all algorithms using the current interface, which already provides genericity, and which has no drawbacks if the only goal is to "steal" algorithms from Boost.

As a test, we have computed the edge connectivity of a graph through Boost: the code is available in ticket 18564 [6].
Since the algorithm provided by Sagemath is not optimal (it is based on linear programming), the difference in running time is impressive, as shown by the following tests:

sage: G = graphs.RandomGNM(100,1000)
sage: %timeit G.edge_connectivity()
100 loops, best of 3: 1.42 ms per loop
sage: %timeit G.edge_connectivity(implementation="sage")
1 loops, best of 3: 11.3 s per loop
sage: G = graphs.RandomBarabasiAlbert(300,3)
sage: %timeit G.edge_connectivity(implementation="sage")
1 loops, best of 3: 9.96 s per loop
sage: %timeit G.edge_connectivity()
100 loops, best of 3: 3.33 ms per loop

Basically, on a random Erdos-Renyi graph with 100 vertices and 1000 edges, the new algorithm is 8,000 times faster, and on a random Barabasi-Albert graph with 300 nodes and average degree 3, the new algorithm is 3,000 times faster! This way, we can compute the edge connectivity of much bigger graphs, like a random Erdos-Renyi graph with 5,000 vertices and 50,000 edges:

sage: G = graphs.RandomGNM(5000, 50000)
sage: %timeit G.edge_connectivity()
1 loops, best of 3: 16.2 s per loop

The results obtained with this first algorithm are very promising: in the coming days, we plan to interface several other algorithms, in order to improve both the number of available routines and the speed of the Sagemath graph library!

[1] https://en.wikipedia.org/wiki/Template_%28C%2B%2B%29
[2] http://docs.cython.org/src/userguide/fusedtypes.html
[3] https://groups.google.com/forum/#!topic/cython-users/qQpMo3hGQqI
[4] https://groups.google.com/forum/#!searchin/cython-users/fused/cython-users/-7cHr6Iz00Y/Z8rS03P7-_4J
[5] https://groups.google.com/forum/#!searchin/cython-users/fused$20template/cython-users/-7cHr6Iz00Y/Z8rS03P7-_4J
[6] http://trac.sagemath.org/ticket/18564
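As a footnote to the edge-connectivity experiments above, the quantity being computed can be illustrated in a few dozen lines of plain Python via Menger's theorem (edge connectivity equals a minimum of unit-capacity max flows). This is purely a didactic sketch, unrelated to the algorithm Boost uses:

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp max flow on an n x n capacity matrix (modified in place)."""
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return flow
        # augment along the found path by one unit (all capacities are 0/1)
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

def edge_connectivity(n, edges):
    """Minimum number of edges whose removal disconnects the graph.

    By Menger's theorem this equals the minimum over t of maxflow(0, t)
    with unit capacities -- O(n) max-flow runs, fine for tiny graphs only.
    """
    best = n  # trivial upper bound
    for t in range(1, n):
        cap = [[0] * n for _ in range(n)]
        for u, v in edges:
            cap[u][v] = cap[v][u] = 1
        best = min(best, max_flow(n, cap, 0, t))
    return best

# two triangles joined by a single bridge edge (2-3): connectivity 1
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
```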

### Vince Knight

#### On testing degeneracy of bi-matrix games

We (James Campbell and Vince Knight are writing this together) have been working on implementing code in Sage to test if a game is degenerate or not. In this post we’ll prove a simple result that is used in the algorithm we have implemented.

## Bi-Matrix games

For a general overview of these sorts of things take a look at this post from a while ago on the subject of bi-matrix games in Sage. A bi-matrix is a matrix of tuples corresponding to payoffs for a 2 player Normal Form Game. Rows represent strategies for the first player, columns represent strategies for the second player, and each tuple of the bi-matrix gives the payoffs to the two players. Here is an example:

We see that if the first player plays their third row strategy and the second player their second column strategy then the first player gets a utility of 6 and the second player a utility of 1.

This can also be written as two separate matrices. A matrix $A$ for Player 1 and $B$ for Player 2.

Here is how this can be constructed in Sage using the NormalFormGame class:

sage: A = matrix([[3,3],[2,5],[0,6]])
sage: B = matrix([[3,2],[2,6],[3,1]])
sage: g = NormalFormGame([A, B])
sage: g
Normal Form Game with the following utilities: {(0, 1): [3, 2], (0, 0): [3, 3],
(2, 1): [6, 1], (2, 0): [0, 3], (1, 0): [2, 2], (1, 1): [5, 6]}

Currently, within Sage, we can obtain the Nash equilibria of games:

sage: g.obtain_nash()
[[(0, 1/3, 2/3), (1/3, 2/3)], [(4/5, 1/5, 0), (2/3, 1/3)], [(1, 0, 0), (1, 0)]]

We see that this game has 3 Nash equilibria. For each one, the supports (the sets of non-zero entries) of both players’ strategies have the same size. This is, in fact, a theoretical certainty when games are non degenerate.
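The equal-support observation can be checked mechanically. A small plain-Python sketch (not using the NormalFormGame class), with the equilibria listed above written as fractions:

```python
from fractions import Fraction

def support(strategy):
    """Indices of the non-zero entries of a mixed strategy vector."""
    return {i for i, p in enumerate(strategy) if p != 0}

# the three equilibria reported by g.obtain_nash() for the first game
F = Fraction
equilibria = [
    [(0, F(1, 3), F(2, 3)), (F(1, 3), F(2, 3))],
    [(F(4, 5), F(1, 5), 0), (F(2, 3), F(1, 3))],
    [(1, 0, 0), (1, 0)],
]

# in this non-degenerate game every equilibrium has equal-sized supports
same_sizes = [len(support(x)) == len(support(y)) for x, y in equilibria]
```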

If we modify the game slightly:

sage: A = matrix([[3,3],[2,5],[0,6]])
sage: B = matrix([[3,3],[2,6],[3,1]])
sage: g = NormalFormGame([A, B])
sage: g.obtain_nash()
[[(0, 1/3, 2/3), (1/3, 2/3)], [(1, 0, 0), (2/3, 1/3)], [(1, 0, 0), (1, 0)]]

We see that the second equilibrium has supports of different sizes. In fact, if the first player were to play $(1,0,0)$ (in other words, just play the first row) the second player could play any mixture of strategies as a best response, not just $(2/3,1/3)$. This is because the game under consideration is now degenerate.

(Note that both of the games above are taken from Nisan et al. 2007 [pdf].)

## What is a degenerate game

A bimatrix game is called nondegenerate if the number of pure best responses to a mixed strategy never exceeds the size of its support. In a degenerate game this definition is violated, for example if there is a pure strategy that has two pure best responses (as in the example above), but it is also possible to have a mixed strategy with support size $k$ that has $k+1$ pure best responses.

Here is an example of this:

If we consider the mixed strategy for player 2: $y=(1/2,1/2)$, then the utility to player 1 is given by:

We see that there are 3 best responses to $y$ and as $y$ has support size 2 this implies that the game above is degenerate.
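This check can be made concrete in code. Note that the payoff matrix from the post's image is not reproduced here, so the matrix $A$ below is an invented example with the same property (three pure best responses to a strategy of support size two):

```python
from fractions import Fraction

def best_responses(A, y):
    """Row indices attaining the maximal utility (A y)_i for player 1."""
    utilities = [sum(a * q for a, q in zip(row, y)) for row in A]
    best = max(utilities)
    return {i for i, u in enumerate(utilities) if u == best}

# Hypothetical 3x2 payoff matrix for player 1; against y = (1/2, 1/2)
# every row yields utility 2, so all three rows are best responses.
A = [[3, 1], [1, 3], [2, 2]]
y = (Fraction(1, 2), Fraction(1, 2))
responses = best_responses(A, y)
```

Since `len(responses)` is 3 while the support of $y$ has size 2, a game with this payoff matrix would be degenerate.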

## What does the literature say about degenerate games

The original definition of degenerate games was given in Lemke, Howson 1964 [pdf] and their definition was dependent on the labeling polytope that they used for their famous algorithm for the computation of equilibria (which is currently being implemented in Sage!). Further to this Stengel 1999 [ps] offers a nice overview of a variety of equivalent definitions.

Sadly, all of these definitions require finding a particular mixed strategy profile $(x, y)$ for which a particular condition holds. To be able to implement a test for degeneracy based on any of these definitions would require a continuous search over possible mixed strategy pairs.

In the previous example (where we took $y=(1/2,1/2)$) we could have identified this $y$ by looking at the utilities for each pure strategy for player 1 against $y=(y_1, 1-y_1)$:

($r_i$ denotes row strategy $i$ for player 1.) A plot of this is shown:

We can (in this instance) quickly search through values of $y_1$ and identify the point that has the most best responses, which gives the best chance of satisfying the degeneracy condition ($y_1=1/2$). This is not really practical from a generic point of view, which leads to this blog post: we have identified the particular $x, y$ that it suffices to test.

## A sufficient mixed strategy to test for degeneracy

The definition of degeneracy can be written as:

Def. A Normal Form Game is degenerate iff:

There exists $x\in \Delta X$ such that $|S(x)| < |\sigma_2|$ where $\sigma_2$ is the support such that $(xB)_j = \max(xB)$, for all $j$ in $\sigma_2$.

OR

There exists $y\in \Delta Y$ such that $|S(y)| < |\sigma_1|$ where $\sigma_1$ is the support such that $(Ay)_i = \max(Ay)$, for all $i$ in $\sigma_1$.

($X$ and $Y$ are the pure strategy sets for players 1 and 2 and $\Delta X, \Delta Y$ the corresponding mixed strategy spaces.)

The result we are implementing in Sage aims to remove the need to search particular mixed strategies $x, y$ (a continuous search) and replace that by a search over supports (a discrete search).

Theorem. A Normal Form Game is degenerate iff:

There exists $\sigma_1 \subseteq X$ and $\sigma_2 \subseteq Y$ such that $|\sigma_1| < |\sigma_2|$ and $S(x^*) = \sigma_1$ where $x^*$ is a solution of $(xB)_j = \max(xB)$, for all $j$ in $\sigma_2$ (note that a valid $x^*$ is understood to be a mixed strategy vector).

OR

There exists $\sigma_1 \subseteq X$ and $\sigma_2 \subseteq Y$ such that $|\sigma_1| > |\sigma_2|$ and $S(y^*) = \sigma_2$ where $y^*$ is a solution of $(Ay)_i = \max(Ay)$, for all $i$ in $\sigma_1$.

Using the definition given above the proof is relatively straightforward but we will include it below (mainly to try and convince ourselves that we haven’t made a mistake).

We will only consider the first part of each condition (the ones for the first player). The result follows in the same way for the second player.

Proof $\Rightarrow$

Assume a game defined by $A, B$ is degenerate, by the above definition without loss of generality this implies that there exists an $x\in \Delta X$ such that $|S(x)| < |\sigma_2|$ where $\sigma_2$ is the support such that $(xB)_j = \max(xB)$, for all $j$ in $\sigma_2$.

If we denote $S(x)$ by $\sigma_1$ then the definition implies that $|\sigma_1| < |\sigma_2|$ and furthermore that $(xB)_j = \max(xB)$, for all $j$ in $\sigma_2$ as required.

Proof $\Leftarrow$

If we now assume that we have $\sigma_1, \sigma_2, x^*$ as per the first part of the theorem then we have $|\sigma_1|<|\sigma_2|$ and taking $x=x^*$ implies that $|S(x)|<|\sigma_2|$. Furthermore as $x^*$ is a solution of $(xB)_j = \max(xB)$ the result follows (by the definition given above).

## Implementation

This result implies that we simply need to consider all potential pairs of supports. Depending on the relative size of the supports we can use one of the two conditions of the result. If we ordered the supports by size the situation for the two player game looks somewhat like this:

Note that for an $m\times n$ game there are $(2^m-1)$ potential supports for player 1 (the size of the powerset of the strategy set, minus the empty set) and $(2^n-1)$ potential supports for player 2. Thus the rectangle drawn above has dimension $(2^m-1)\times(2^n-1)$. Needless to say, our implementation will not be efficient (testing degeneracy is, after all, an NP-complete problem in linear programming; see Chandrasekaran 1982 [pdf]), but at least we have identified exactly which mixed strategy we need to test for each support pair.
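The discrete search over support pairs can be set up in a few lines of Python. This is a sketch of the enumeration only; solving the indifference equations $(xB)_j = \max(xB)$ for each pair (a linear system) is omitted:

```python
from itertools import chain, combinations

def supports(strategies):
    """All non-empty subsets of a player's pure-strategy set."""
    s = list(strategies)
    return list(chain.from_iterable(combinations(s, r)
                                    for r in range(1, len(s) + 1)))

# for an m x n game there are (2^m - 1) x (2^n - 1) support pairs in total
m, n = 3, 2
pairs = [(s1, s2)
         for s1 in supports(range(m)) for s2 in supports(range(n))
         if len(s1) != len(s2)]  # only unequal sizes can witness degeneracy
```

For the $3\times 2$ games above this leaves 12 of the 21 possible support pairs to examine.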

## References

• Chandrasekaran, R., Santosh N. Kabadi, and Katta G. Murthy. “Some NP-complete problems in linear programming.” Operations Research Letters 1.3 (1982): 101-104. [pdf]
• Lemke, Carlton E., and Joseph T. Howson, Jr. “Equilibrium points of bimatrix games.” Journal of the Society for Industrial & Applied Mathematics 12.2 (1964): 413-423. [pdf]
• Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V. V. (eds). Algorithmic Game Theory. Vol. 1. Cambridge: Cambridge University Press, 2007. [pdf]
• von Stengel, B. “Computing equilibria for two person games.” Technical report. [ps]

## June 14, 2015

### Vince Knight

#### Python, natural language processing and predicting funny

Every year there is a big festival in Edinburgh called the fringe festival. I blogged about this a while ago, and in that post I did a very basic bit of natural language processing aiming to try and identify what made things funny. In this blog post I’m going to push that a bit further by building a classification model that aims to predict if a joke is funny or not. (tldr: I don’t really succeed, but that’s mainly because I have very little data - having more data would not necessarily guarantee success either, but the code and approach are what’s worth taking from this post… 😪).

If you want to skip the brief description and go straight to look at the code you can find the ipython notebook on github here and on cloud.sagemath here.

The data comes from a series of BBC articles which report (more or less every year since 2011?) the top ten jokes at the fringe festival. This does in fact only give 60-odd jokes to work with…

Here is the latest winner (by Tim Vine):

I decided to sell my Hoover… well it was just collecting dust.

After cleaning it up slightly I’ve thrown that all in a json file here. So in order to import the data into a pandas data frame I just run:

import pandas
df = pandas.read_json('jokes.json') # Loading the json file

Pandas is great: I used to create my own bespoke classes for handling data, but in general just using pandas does exactly the right job. At this point I basically follow along with this post on sentiment analysis of twitter which makes use of the ridiculously powerful nltk library.

We can use the nltk library to ‘tokenise’ and get rid of common words:

commonwords = [e.upper() for e in set(nltk.corpus.stopwords.words('english'))] # <- Need to download the corpus: import nltk; nltk.download()
commonwords.extend(['M', 'VE'])  # Adding a couple of things that need to be removed
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')  # To be able to strip out unwanted things in strings
string_to_list = lambda x: [el.upper() for el in tokenizer.tokenize(x) if el.upper() not in commonwords]
df['Joke'] = df['Raw_joke'].apply(string_to_list)

Note that this requires downloading one of the awesome corpuses (that’s apparently the right way to say that) from nltk.

Here is how this looks:

joke = 'I decided to sell my Hoover... well it was just collecting dust.'
string_to_list(joke)

which gives:

['DECIDED', 'SELL', 'HOOVER', 'WELL', 'COLLECTING', 'DUST']

## We can now get started on building a classifier

Here is the general idea of what will be happening:

First of all we need to build up the ‘features’ of each joke, in other words pull the words out in to a nice easy format.

To do that we need to find all the words from our training data set, another way of describing this is that we need to build up our dictionary:

df['Year'] = df['Year'].apply(int)

def get_all_words(dataframe):
    """
    A function that gets all the words from the Joke column in a given dataframe
    """
    all_words = []
    for jk in dataframe['Joke']:
        all_words.extend(jk)
    return all_words

all_words = get_all_words(df[df['Year'] <= 2013])  # This uses all jokes before 2013 as our training data set.

We then build something that will tell us for each joke which of the overall words is in it:

def extract_features(joke, all_words):
    words = set(joke)
    features = {}
    for word in words:
        features['contains(%s)' % word] = (word in all_words)
    return features

Once we have done that, we just need to decide what we will call a funny joke. For this purpose we’ll use a funny_threshold, and any joke that ranks above the funny_threshold in any given year will be considered funny:

funny_threshold = 5
df['Rank'] = df['Rank'].apply(int)
df['Funny'] = df['Rank'] <= funny_threshold

Now we just need to create a tuple for each joke that puts the features mentioned earlier and a classification (if the joke was funny or not) together:

df['Features'] = df['Joke'].apply(lambda joke: extract_features(joke, all_words))  # build the feature dicts for each joke
df['Labeled_Feature'] = zip(df['Features'], df['Funny'])

We can now (in one line of code!!!!) create a classifier:

classifier = nltk.NaiveBayesClassifier.train(df[df['Year'] <= 2013]['Labeled_Feature'])

This classifier will take into account all the words in a given joke and spit out if it’s funny or not. It can also give us some indication as to what makes a joke funny or not:

classifier.show_most_informative_features(10)

Here is the output of that:

Most Informative Features
contains(GOT) = True   False : True   =  2.4 : 1.0
contains(KNOW) = True    True : False  =  1.7 : 1.0
contains(PEOPLE) = True   False : True   =  1.7 : 1.0
contains(SEX) = True   False : True   =  1.7 : 1.0
contains(NEVER) = True   False : True   =  1.7 : 1.0
contains(RE) = True    True : False  =  1.6 : 1.0
contains(FRIEND) = True    True : False  =  1.6 : 1.0
contains(SAY) = True    True : False  =  1.6 : 1.0
contains(BOUGHT) = True    True : False  =  1.6 : 1.0
contains(ONE) = True    True : False  =  1.5 : 1.0

This immediately gives us some information:

• If your joke is about SEX it is more likely to not be funny.
• If your joke is about FRIENDs it is more likely to be funny.

That’s all very nice but we can now (theoretically - again, I really don’t have enough data for this) start using the mathematical model to tell you if something is funny:

joke = 'Why was 10 afraid of 7? Because 7 8 9'
classifier.classify(extract_features(string_to_list(joke), get_all_words(df[df['Year'] <= 2013])))

That joke is apparently funny (the output of the above is True). The following joke however is apparently not (the output of the below is False):

joke = 'Your mother is ...'
print classifier.classify(extract_features(string_to_list(joke), get_all_words(df[df['Year'] <= 2013])))

As you can see in the ipython notebook it is then very easy to measure how good the predictions are (I used the data from years before 2013 to predict 2014).
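The accuracy measurement itself is just the ratio of correct predictions. Here is a minimal self-contained sketch; in practice the `predicted` list would come from something like `[classifier.classify(f) for f in test_features]`, but hand-written labels are used here for illustration:

```python
def accuracy(predictions, truth):
    """Fraction of predictions that match the true labels."""
    assert len(predictions) == len(truth)
    return sum(p == t for p, t in zip(predictions, truth)) / float(len(truth))

# illustrative hand-written labels: 3 of the 4 predictions are correct
predicted = [True, False, True, True]
actual = [True, False, False, True]
score = accuracy(predicted, actual)
```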

## Results

Here is a plot of the accuracy of the classifier for changing values of funny_threshold:

You’ll notice a couple of things:

• When the threshold is 0 or 1: the classifier works perfectly. This makes sense: all the jokes are either funny or not so it’s very easy for the classifier to do well.
• There seem to be a couple of regions where the classifier does particularly poorly: just after a value of 4. Indeed, there are points where the classifier does worse than flipping a coin.
• At a value of 4, the classifier does particularly well!

Now, one final thing I’ll take a look at is what happens if I start randomly selecting a portion of the entire data set to be the training set:

Below are 10 plots that correspond to 50 repetitions of the above where I randomly sample a ratio of the data set to be the training set:

Finally (although it’s really not helpful), here are all of those on a single plot:

First of all: all those plots are basically one line of seaborn code which is ridiculously cool. Seaborn is basically magic:

sns.tsplot(data, steps)

Second of all, it looks like the lower bound of the classifiers is around .5. Most of them start off at .5; in other words, they are as good as flipping a coin before we let them learn from anything, which makes sense. Finally, it seems that the threshold-4 classifier is the only one that gradually improves as more data is given to it. That’s perhaps an indication that something interesting is happening there, but that investigation would be for another day.

All of the conclusions about the actual data should certainly not be taken seriously: I simply do not have enough data. But the overall process and code are what’s worth taking away. It’s pretty neat that the variety of awesome python libraries lets you do this sort of thing more or less out of the box.

Please do take a look at this github repository but I’ve also just put the notebook on cloud.sagemath so assuming you pip install the libraries and get the data etc you can play around with this right in your browser:

## June 09, 2015

### Michele Borassi

#### Performance Comparison of Different Graph Libraries

As promised in the last post, I have compared the performances of several graph libraries, in order to choose which ones should be deployed with Sagemath. Here, I provide the main results of this analysis, while more details are available on my website (see also the links below).
The libraries chosen are the most famous graph libraries written in Python, C, or C++ (I have chosen these languages because they are easier to integrate in Sagemath, using Cython). Furthermore, I have excluded NetworkX, which is already deployed with Sagemath.
First of all, I have to stress that no graph library comparison can be completely fair, and this comparison too can be criticized, due to the large amount of available routines, to the constant evolution of libraries, and to many small differences in the outputs (for instance, one library might compute the value of a maximum s-t flow, another library might actually compute the flow, and a third one might compute all maximum flows). Despite this, I have tried to be as fair as possible, through a deeper and more detailed analysis than previous comparisons (https://graph-tool.skewed.de/performance, http://www.programmershare.com/3210372/, http://arxiv.org/pdf/1403.3005.pdf).
The first comparison deals with the number of algorithms implemented. I have chosen a set of 107 possible algorithms, trying to cover all possible tasks that a graph library should perform (avoiding easy tasks that are common to all libraries, like outputting the number of nodes, the number of edges, the neighbors of a node, etc). In some cases, two tasks were collapsed in one, if the algorithms solving these tasks are very similar (for instance, computing a maximum flow and computing a minimum cut, computing vertex betweenness and edge betweenness, etc).
The number of routines available for each library is plotted in the following chart, and a table containing all features is available in HTML or as a Google Sheet.

The results show that Sagemath has more routines than all competitors (66), closely followed by igraph (62). All other libraries are very close to each other, having about 30 routines each. Furthermore, Sagemath could be improved in the fields of neighbor similarity measures (assortativity, bibcoupling, cocitation, etc), community detection, and random graph generators. For instance, igraph contains 29 routines that are not available in Sagemath.

The second comparison analyzes the running-time of some of the algorithms implemented in the libraries. In particular, I have chosen 8 of the most common tasks in graph analysis: computing the diameter, computing the maximum flow between two vertices, finding connected components and strongly connected components, computing betweenness centrality, computing the clustering coefficient, computing the clique number, and generating a graph with the preferential attachment model. I have run each of these algorithms on 3 inputs, and I have considered the total execution time (excluding the time needed to load the graph). More details on this experiment are available here, and the results are also available in a Google Sheet.
In order to make the results more readable, I have plotted the ratio between the time needed by a given library and the minimum time needed by any library. If an algorithm was not implemented, or it needed more than 3 hours to complete, the corresponding bar is not shown.

Overall, the results show that NetworKit is the fastest library, or one of the fastest, in all routines that are implemented (apart from the generation of preferential attachment graphs, where it is very slow). Boost graph library is very close to NetworKit, and it also contains more routines. Sagemath is also quite efficient in all tasks, apart from the computation of strongly connected components and the generation of a preferential attachment graph, where it needed more than 3 hours. However, in the latter case, the main problem was not speed but memory consumption.

In conclusion, Sagemath can benefit greatly from the possibility of using algorithms from other libraries. First of all, it might improve the number of algorithms offered, especially by including igraph, and it might also improve its performance, by including Boost, NetworKit, or other fast graph libraries.

## June 04, 2015

### Michele Borassi

#### Comparison of Graph Libraries

Many times, people asked me "Which is the best available graph library?", or "Which graph library should I use to compute this, or that?".
Well, personally I love to use Sage, but there are also several good alternatives. Then, the question becomes "How could we improve Sage, so that people will choose it?".

In my opinion, graph libraries are compared according to the following parameters:
1. simplicity and documentation: people have little time, and the faster they learn how to use the library, the better;
2. number of routines available;
3. speed: sometimes, the input is very big, and the algorithms take much time to finish, so that a fast implementation is fundamental.
While it is very difficult to measure the first point, the others can be compared and improved. For this reason, in order to outperform other libraries, we should implement new features, and improve existing ones. You don't say!

However, this answer is not satisfactory: in principle, we could add all features available in other libraries, but this would be a huge amount of porting work, and while we are doing it the other libraries will change, making this effort a never-ending story.

My project proposes an alternative: cooperating instead of competing. I will try to interface Sage with other libraries, and to use their algorithms when the Sage counterpart is not available, or less efficient. This way, with an affordable amount of work, we will be able to run all algorithms available in the best graph libraries!

As a first step, I have compared all the most famous C, C++, and Python graph libraries according to points 2 and 3, in order to choose which libraries should be included. The next posts will analyze the results of this comparison.

#### Google Summer of Code: let's start!

This blog will follow my Google Summer of Code project, entitled Performance Improvements for the Graph Module of Sagemath. The complete project is available here, and related documents with partial results will be available on the same website.
In this first post, I would like to thank my mentor David Coudert and Nathann Cohen, who helped me a lot in writing this project and understanding how the graph module of Sagemath works.
With their help, and with the help of the Sage community, I hope it will be a useful and fun project! Let's start!

## May 29, 2015

### Benjamin Hackl

#### Asymptotic Expressions: Motivation

$\def\R{\mathbb{R}}$So, as Google Summer of Code started on Monday, May 25th it is time to give a proper motivation for the project I have proposed. The working title of my project is (Multivariate) Asymptotic Expressions, and its overall goal is to bring asymptotic expressions to SageMath.

### What are Asymptotic Expressions?

A motivating answer for this question comes from the theory of Taylor series. Assume that we have a sufficiently nice (in this case meaning smooth) function $f : \R \to \R$ that we want to approximate in a neighborhood of some point $x_0 \in \R$. Taylor’s theorem allows us to write $f(x) = T_n(x) + R_n(x)$ where

$T_n(x) = \sum_{j=0}^n \frac{f^{(j)}(x_0)}{j!}\cdot (x-x_0)^j = f(x_0) + f'(x_0)\cdot (x-x_0) + \cdots + \frac{f^{(n)}(x_0)}{n!}\cdot (x-x_0)^n,$

and $R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \cdot (x-x_0)^{n+1}$, where $\xi$ lies in a neighborhood of $x_0$. Note that for $x\to x_0$, $R_n(x)$ “behaves like” $(x-x_0)^{n+1}$. In particular, we can certainly find a constant $C > 0$ such that $|R_n(x)| \leq C\cdot |x-x_0|^{n+1}$, or, in other words: for $x\to x_0$ the growth of the function $R_n(x)$ is bounded from above by the growth of $(x-x_0)^{n+1}$.
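As a quick numeric illustration (my own example, not from the text, with $f = \exp$, $x_0 = 0$ and $n = 3$): on $[-1,1]$ we may take $C = e/4!$, and the remainder indeed stays below $C\cdot|x|^4$.

```python
from math import exp, factorial

def taylor(x, n):
    # Degree-n Taylor polynomial of exp at x0 = 0.
    return sum(x**j / factorial(j) for j in range(n + 1))

n = 3
# On [-1, 1], |f^(n+1)(xi)| = e^xi <= e, so C = e / (n+1)! works as the constant.
C = exp(1) / factorial(n + 1)
for x in [0.5, 0.1, 0.01]:
    remainder = abs(exp(x) - taylor(x, n))
    assert remainder <= C * abs(x) ** (n + 1)
```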

The idea of bounding the growth of a function by the growth of another function when the argument approaches some number (or $\infty$) is the central idea behind the big O notation. For functions $f, g : \R \to \R$ we write $f(x) = O(g(x))$ for $x\to x_0$ if there is a constant $C > 0$ such that $|f(x)| \leq C\cdot |g(x)|$ for all $x$ in some neighborhood of $x_0$.

A case that is particularly important is $x_0 = \infty$, that is, when we want to compare and/or characterize the behavior of some function for $x\to\infty$, which is also called the function's asymptotic behavior. For example, consider the functions $\log x$, $x^3$ and $e^x$. All of them grow without bound for $x\to\infty$ — however, their asymptotic behavior differs. This can be seen by considering pairwise quotients of these functions: $\frac{x^3}{e^x} \to 0$ for $x\to\infty$, and therefore the asymptotic growth of $x^3$ can be bounded above by the growth of $e^x$, meaning $x^3 = O(e^x)$ for $x\to\infty$.
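Numerically (a small plain-Python check of my own, not part of the original post), the pairwise quotients do shrink toward $0$ as $x$ grows, illustrating $\log x = O(x^3)$ and $x^3 = O(e^x)$:

```python
from math import exp, log

# Each quotient tends to 0 as x grows, making the growth hierarchy
# log x << x^3 << e^x visible.
xs = [10.0, 20.0, 40.0]
q1 = [log(x) / x**3 for x in xs]   # log(x) / x^3
q2 = [x**3 / exp(x) for x in xs]   # x^3 / e^x
# Both sequences are strictly decreasing toward 0.
```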

The analysis of a function's asymptotic behavior is important for many applications, for example when determining the time and space complexity of algorithms in computer science, or for describing the growth of classes of combinatorial objects: take, for example, binary strings of length $2n$ that contain equally many zeros and ones. If $s_n$ denotes the number of such strings, then we have

$s_n = \binom{2n}{n} = \frac{4^n}{\sqrt{n\pi}} \left(1 + O\left(\frac{1}{n}\right)\right) \quad\text{ for } n\to\infty.$
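The estimate can be sanity-checked numerically (a plain Python check, not part of the original post):

```python
from math import comb, pi, sqrt

# s_n = C(2n, n) against its leading-order estimate 4^n / sqrt(n * pi);
# the relative error shrinks like O(1/n).
for n in [10, 50, 100]:
    exact = comb(2 * n, n)
    approx = 4**n / sqrt(n * pi)
    rel_err = abs(approx - exact) / exact
    assert rel_err < 1 / (4 * n)   # the O(1/n) correction is roughly -1/(8n)
```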

Expressions like these are asymptotic expressions. When we consider asymptotic expressions in only one variable, everything works out nicely as a total order is induced. But as soon as multiple variables are involved, we don’t have a total order any more. Consider, for example, $x^2 y$ and $xy^2$ when $x$ and $y$ approach $\infty$. These two elements cannot be compared to each other, which complicates computing with these expressions as they may contain multiple “irreducible” O-terms.
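The partial order on monomials can be sketched with exponent tuples (my own toy encoding, not the project's implementation): $x^a y^b$ is bounded by $x^c y^d$ exactly when $a \le c$ and $b \le d$, which leaves pairs like $x^2 y$ and $x y^2$ incomparable.

```python
def divides_growth(u, v):
    # u <= v in the componentwise order on exponent tuples.
    return all(a <= b for a, b in zip(u, v))

x2y = (2, 1)   # exponents of x^2 * y
xy2 = (1, 2)   # exponents of x * y^2

# Neither monomial bounds the other: the order is only partial.
incomparable = not divides_growth(x2y, xy2) and not divides_growth(xy2, x2y)
# incomparable is True
```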

The following univariate and multivariate examples shall demonstrate what computing with such expressions looks like (all variables are assumed to go to $\infty$):

$x + O(x) = O(x),\quad x^2 \cdot (x + O(1)) = x^3 + O(x^2),\quad O(x^2) \cdot O(x^3) = O(x^5),$

$x y + O(x^2 y) = O(x^2y),\quad (y \log y + O(y)) (x^2 y + O(4^x \sqrt{x})) = x^2 y^2 \log y + O(x^2 y^2) + O(4^x \sqrt{x} y \log y).$
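To illustrate the absorption rules behind identities like $x + O(x) = O(x)$, here is a toy univariate sketch (my own simplification; the real implementation works over general growth groups, not just powers of $x$):

```python
# A term is (coeff, power, is_O): x^3 is (1, 3, False), O(x^2) is (None, 2, True).
def simplify(terms):
    # An O-term absorbs every term whose power is smaller or equal.
    o_powers = [p for _, p, is_o in terms if is_o]
    big_o = max(o_powers) if o_powers else None
    exact = {}
    for c, p, is_o in terms:
        if is_o or (big_o is not None and p <= big_o):
            continue                         # absorbed (or the O-term itself)
        exact[p] = exact.get(p, 0) + c       # collect surviving exact terms
    result = [(c, p, False) for p, c in sorted(exact.items(), reverse=True)]
    if big_o is not None:
        result.append((None, big_o, True))
    return result

x_plus_Ox = simplify([(1, 1, False), (None, 1, True)])
# x + O(x) simplifies to O(x): [(None, 1, True)]
x3_plus_Ox2 = simplify([(1, 3, False), (None, 2, True)])
# x^2 * (x + O(1)) = x^3 + O(x^2): [(1, 3, False), (None, 2, True)]
```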

Our plan is to provide an implementation based on which computations with these and more complicated expressions are possible.

### Planned Structure

There are four core concepts of our implementation.

• Asymptotic Growth Groups: These are multiplicative groups that contain growth elements like $x^2$, $\log x$, $2^x \cdot x \cdot \log x$. For starters, only univariate power growth groups will be implemented.
• Asymptotic Term Monoids: These monoids contain asymptotic terms — in essence, these are summands of asymptotic expressions. Apart from exact term monoids (growth elements with a coefficient), we will also implement O-term monoids as well as a term monoid for a variation of O-terms. Asymptotic terms have (in addition to their group operation, multiplication) absorption as an additional operation: for example, O-terms are able to absorb all asymptotically “smaller” elements.
• Mutable Poset: As we have mentioned above, due to the fact that multivariate asymptotic expressions do not have a total order with respect to their growth, we need a partially ordered set (“Poset”) that deals with this structure such that operations like absorbing terms can be performed efficiently. The mutable poset is the central data structure that asymptotic expressions are built upon.
• Asymptotic Ring: This is our top-level structure which is also supposed to be the main interaction object for users. The asymptotic ring contains the asymptotic expressions, i.e. intelligently managed sums of asymptotic terms. All common operations shall be possible here. Furthermore, the interface should be intelligent enough such that admissible expressions from the symbolic ring can be directly converted into elements of the asymptotic ring.

Obviously, this “planned structure” is rather superficial. However, this is only to supplement the motivation for my project with some ideas on the implementation. I’ll go a lot more into the details of what I am currently implementing in the next few blog posts!

## May 27, 2015

### William Stein

#### Guiding principles for SageMath, Inc.

In February of this year (2015), I founded a Delaware C Corporation called "SageMath, Inc.".  This is a first stab at the guiding principles for the company.    It should help clarify the relationship between the company, the Sage project, and other projects like OpenDreamKit and Jupyter/IPython.

### Company mission statement:

Make open source mathematical software ubiquitous.
This involves both creating the SageMathCloud website and supporting the development and distribution of SageMath and other software, including Jupyter, Octave, Scilab, etc. Anything open source.

### Company principles:

• Absolutely all company funded software must be open source, under a GPLv3 compatible license. We are a 100% open source company.
• Company independence and self-determination is far more important than money. A core principle is that SMI is not for sale at any price, and will not participate in any partnership (for cost) that would restrict our freedom. This means:
• reject any offers from corporate development at big companies to purchase or partner,
• do not take any investment money unless absolutely necessary, and then only from the highest quality investors,
• do not take venture capital, ever.
• Be as open as possible about everything involving the company. What should not be open (since it is dangerous):
• finances (which could attract trolls)
• private user data
What should be open:
• aggregate usage data, e.g., number of users.
• aggregate data that could help other open source projects improve their development, e.g., common problems we observe with Jupyter notebooks should be provided to their team.
• guiding principles

• SageMathCloud is freemium with the expectation that 2-5% of users pay.
• Target audience: all potential users of cloud-based math-related software.

### SageMathCloud mission

Make it as easy as possible to use open source mathematical software in the cloud.
This means:
• Minimize onboarding friction, so in less than 1 minute you can create an account and be using Sage or Jupyter or LaTeX. Moreover, the UI should be simple and streamlined specifically for the tasks, while still having deep functionality to support expert users. Also, everything persists and can be sorted, searched, used later, etc.
• Minimize support friction, so one click from within SMC leads to a support forum, an easy way for admins to directly help, etc. This is not at all implemented yet. Also, a support marketplace where experts get paid to help non-experts (tutoring, etc.).
• Minimize teaching friction, so everything involving software related to teaching a course is as easy as possible, including managing a list of students, distributing and collecting homework, and automated grading and feedback.
• Minimize pay friction: sign up for a \$7 monthly membership, then simple, clear pay-as-you-go functionality if you need more power.

## May 23, 2015

### Benjamin Hackl

#### Google Summer of Code — Countdown

Today I received the welcome package for attending this year’s “Google Summer of Code”! Actually, it’s pretty cool; the following things were included:

• a blue notebook with a monochromatic GSoC 15 logo (in dark blue) printed on it
• a sticker with a colored GSoC 15 logo
• a pen that is both a blue ballpoint pen as well as a mechanical pencil (0.5)

Here is a photo of all this stuff:

The work on our project (multivariate) Asymptotic Expressions (in cooperation with Daniel Krenn and Clemens Heuberger) begins (or rather continues) on Monday, the 25th of May. Over the course of next week (probably in an $\varepsilon$-neighborhood of Monday) I will blog about the status quo, as well as about the motivation for the project.

## May 04, 2015

### Vince Knight

#### Code on cake, poker and a number theory classification web app

I have just finished writing feedback and obtaining marks for my first year students’ presentations. These presentations follow 11 weeks during which students formed companies and worked together to come up with a ‘product’ which had to involve mathematics and code (this semester comes just after 11 weeks of learning Python and Sage).

In this post I’ll briefly describe some of the great things that the students came up with. I must say that I was blown away by the standard this year. Last year the students did exceptionally well, but this year the standard was even higher; I am so grateful for the effort put in by more or less everyone.

Some of the great projects included:

• A website that used a fitted utility function (obtained from questioning family, friends, flatmates) to rank parking lots in terms of price and distance from a given venue (the website was written in Django and the function fitted using Sage).
• A commando training app, with an actual reservist marine who is a student of ours.
• A story based game with an original storyline stemming from the zodiac. The presentation culminated in Geraint, Jason and I (who were the audience) retaliating to their Nerf gun attack with our (hidden under the desk) Nerf guns (we had a hunch that this group would ambush us…). The game mechanics itself was coded in pure Python and the UI was almost written in Django (that was the goal but they didn’t have the time to fully implement it).
• A Django site that had a graphical timeline of mathematics (on click you had access to a quiz and info etc…). This was one I was particularly excited about as it’s a tool I would love to use.
• An outreach/educational package based around cryptography. They coded a variety of ciphers in Python and also put together an excellent set of teaching resources with really well drawn characters etc… They even threw in my dog Auraya (the likeness of the drawing is pretty awesome :)).
• I ask my students to find an original way of showcasing their code. I don’t actually know the right answer to that ‘challenge’. Most students showcase the website and/or app, some will talk me through some code, but this year one group did something quite frankly awesome: code on cake. Here’s some of the code they wrote for their phone app (written with kivy):
• One group built a fully functioning and hosted web app (after taking a look at Django they decided that Flask was the way to go for this particular tool). Their app takes in a natural number and classifies it against a number of categories; go ahead and try it right now: Categorising Numbers
• One of the more fun presentations was for a poker simulation app that uses a prime number representation of a hand of poker to simulate all possible outcomes of a given state. This works remarkably fast and immediately spits out (with neat graphics of the cards) the probability of winning given the current cards.
As well as an impressive app, the students presented it very well and invited me to play a game of poker (I lost; their mark was not affected…). Here are a couple of screen shots of the app itself.

Home screen:

The input card screen:

I am missing out a bunch of great projects (including an impressive actual business that I will be delighted to talk about more when appropriate). I am very grateful for the efforts put in by all the students and wish them well during their exams.

## April 06, 2015

### Vince Knight

#### My 5 reasons why jekyll + github is a terrible teaching tool.

For the past year or so I have been using jekyll for all my courses. If you do not know, in a nutshell, jekyll is a ruby framework that lets you write templates for pages and build nice websites using static markdown files for your content. Here I will describe what I think of jekyll from a pedagogic point of view, in 5 main points.

## 1. Jekyll is terrible because the tutorial is too well written and easy to follow.

First of all, as an academic I enjoy when things are difficult to read and follow. The Jekyll tutorial can get you up and running with a jekyll site in less than 5 minutes. It is far too clear and easy to follow. This sort of clear and to the point explanation is very dangerous from a pedagogic point of view as students might stumble upon it and raise their expectations of the educational process they are going through.

In all seriousness, the tutorial is well written and clear; with a basic knowledge of the command line you can modify the base site and have a website deployed in less than 10 minutes.

## 2. Jekyll is terrible because it works too seamlessly with github.

First of all gh-pages takes care of the hosting. Not having to use a complicated server saves far too much time. As academics we have too much free time already; I do not like getting bored.

Github promotes the sharing and openness of code, resources and processes.
Using a jekyll site in conjunction with github means that others can easily see and comment on all the materials as well as potentially improve them. This openness is dangerous as it ensures that courses are living and breathing things as opposed to a set of notes/problem sheets that sit safely in a drawer somewhere.

The fact that jekyll uses markdown is also a problem. On github anyone can easily read and send a pull request (which improves things) without really knowing markdown (let alone git). This is very terrible indeed; here, for example, is a pull request sent to me by a student. The student in question found a mistake in a question sheet and asked me about it; right there in the lab I just said ‘go ahead and fix it :)’ (and they did).

Involving students in the process of fixing/improving their course materials has the potential for utter chaos. Furthermore, normalising mistakes is another big problem: all students should be terrified of making a mistake and/or trying things.

Finally, having a personal site as a github project gives you a site at the following url:

    username.github.io

By simply having a gh-pages branch for each class site, this will automatically be served at:

    username.github.io/class-site

This is far too sensible and flexible. Furthermore, the promotion of decentralisation of content is dangerous. If one of my class sites breaks, none of my others will be affected!!! How can I expect any free time with such a robust system? This is dangerously efficient.

## 3. Jekyll is terrible because it is too flexible.

You can (if you want to) include:

• A disqus.com board to a template for a page, which means that students can easily comment and talk to you about materials. Furthermore, you can also use this to add things to your materials in a discussion based way; for example, I have been able far too easily to add a picture of a whiteboard explaining something students have asked.
• Mathjax. With some escaping this works out of the box.
Being able to include nicely rendered mathematics misaligns students’ expectations as to what is on the web.

• Sage cells can be easily popped in to worksheets, allowing students to immediately use code to illustrate/explain a concept.

and various others: you can just include any html/javascript etc… This promotion of interactive and modern resources by Jekyll is truly terrible as it gets students away from what teaching materials should really be about: dusty notes in the bottom of a drawer (worked fine for me).

The flexibility of Jekyll is also really terrible as it makes me forget the restrictions imposed on me by whatever VLE we are supposed to use. This is making me weak and soft; when someone takes the choice away from me and I am forced to use the VLE, I most probably won’t be ready.

(A jekyll + github setup also implies that a wiki immediately exists for a page, and I am also experimenting with a gitter.im room for each class.)

## 4. Jekyll is terrible because it gives a responsive site out of the box.

Students should consume their materials exactly when and how we want them to. The base jekyll site comes with a basic responsive framework; here is a photo of one of my class sheets (which also again shows the disgustingly beautifully rendered mathematics):

This responsive framework works right out of the box (you can also obviously use further frameworks if you want to, see my point about flexibility) from the tutorial, and this encourages students to have access to the materials on whatever platform they want whenever they want. This cannot be a good thing.

## 5. Jekyll is terrible because it saves me too much time.

The main point that is truly worrying about jekyll is how much time it saves me. I have mentioned this before: as academics we need to constantly make sure we do not get bored. Jekyll does not help with this. I can edit my files using whatever system I want (I can even do this on github directly if I wanted to), I push, and the website is up to date.
In the past I would have a lot of time taken up by compiling a LaTeX document and uploading to our VLE. I would sit back and worry about being bored before realising (thankfully) that I had a typo and so needed to write, delete and upload again.

Furthermore, I can easily use the github issue tracker to keep on top of to do lists etc… (which I am actually beginning to do for more or less every aspect of my life). TAs can also easily fix/improve minor things without asking me to upload whatever it is they wrote.

Github + Jekyll works seamlessly and ensures that I have more time to respond to student queries and think. This time for reflection on teaching practice is dangerous: I might choose to do things differently than how they have been done for the past 100 years.

(In case my tone is unclear: I am such a huge jekyll fan and think it is a brilliant pedagogic tool. There might well be various other static site generators and other options so please do comment about them below :))

## March 25, 2015

### Vince Knight

#### A one week flipped learning environment to introduce Object Oriented Programming

This post describes a teaching activity that is run for the Cardiff MSc. programmes. The activity revolves around a two day hackathon that gets students to use Python and object oriented programming to solve a challenge. The activity is placed within a flipped learning environment and makes use of what I feel is a very nice form of assessment (we just get to know the students).

This year is the third installment of this exercise, which came as a result of the MSc advisory board requesting that object oriented programming be introduced to our MSc.
Before describing the activity itself, let me just put this simple diagram that describes the flipped learning environment here (if you would like more info about it be sure to talk to Robert Talbert, who has always been very helpful to me):

## Description of what happens

After 3 iterations and a number of discussions about the format with Paul Harper (the director of the MSc) I think the last iteration is pretty spot on, and it goes something like this:

### Monday: Transfer of content

We give a brief overview of Python (you can see the slides here) up until and including basic syntax for classes.

### Tuesday + Wednesday: Nothing

Students can, if they want to, read up about Python, look through videos at the website and elsewhere, look through past challenges etc… This is in effect when the knowledge transfer happens.

### Thursday: Flying solo followed by feedback

Students are handed a challenge of some sort (you can see the past two here). Students work in groups of 4 at attempting to solve the problem. On this day, the two postgrads (Jason and Geraint) and myself observe the groups. When we are asked questions we in general ask questions back. This sometimes leads to a fair bit of frustration but is the difficult process that makes the rest of the process worthwhile. Here is a photo of some of the groups getting to work:

At the very end of the day we sit down with each group (starting at 1600, for about 30 minutes per group). During this feedback session we go through the code written by each group in detail, highlighting things they are having difficulty with and agreeing on a course of action for the next day. This is the point at which the class ‘flips’ so to speak: transfer of content is done and difficulties are identified and conceptualised.
Here is a photo of Jason, Geraint and I at the end of a very long day after the feedback sessions:

The other point of this day is that we start our continuous assessment: taking notes and discussing how each group is doing:

• Where are they progress wise?
• What difficulties do we need to look out for?
• How are the groups approaching the problem and working together?

Here you can see a photo of Jason in front of the board that we fill up over the 2 days with notes and comments:

### Friday: Sprint finish with more assistance

On the second/last day students are given slightly more assistance from Jason, Geraint and I, but are still very much left to continue with their hard work. The main difference is that when students ask questions we sometimes answer them. Here is one group who managed to crack something quite difficult on the second day:

The final part of this day is to round all the students together and announce the marks, which brings us nicely to the assessment part of this activity.

## Assessment

I really enjoy assessing this activity. This is not something I say about assessment very often, but we are continuously assessing the students and are able to gain a true idea of how they do. The final piece of code is not what everything is marked on, as it is in essence not terribly important. Here is a photo of the team who did the best this year:

If I could sit with students over the 11 week period of the other courses I teach and get to know them and assess them that way, that is indeed how I would do it.

## Summary

Here is a summary of how I feel this activity fits in the original diagram I had:

As you can see, despite ‘being in contact’ with students for most of Thursday I would not consider this contact time in the usual sense, as most of that contact is part of the assessment. This is always a very fun (and exhausting) two days and I look forward to next year.
## March 24, 2015

### Vince Knight

#### Marrying toys and students

In class yesterday we took a look at matching games. These are sometimes referred to as stable marriage problems. To have some data for us to play with I asked for some volunteers to marry. Sadly, I apparently am not allowed to ask students to rank each other in class and I also do not have the authority to marry. So, like last year, I used some of my office toys and asked students to rank them.

I brought three toys to class:

• The best ninja turtle: Donatello
• A tech deck
• A foam football

I asked 3 students to come down and rank them and in turn I let the toys rank the students. We discussed possible matchings, with some great questions such as:

“Are we trying to make everyone as happy as possible?”

The answer to that is: no. We are simply trying to ensure that no one has an incentive to deviate from their current matching by breaking their match for someone they prefer and who also prefers them.

Here is the stable matching we found together:

Note that we can obtain a stable matching with the Gale-Shapley algorithm using Sage:

The 3 students got to hold on to the toys for the hour and I was half expecting the football to be thrown around, but sadly that did not happen. Perhaps next year.

## March 23, 2015

### Vince Knight

#### Cooperative basketball in class

Today in class we repeated the game we played last year. 3 teams of 3 students took part this year and here is a photo of the aftermath:

As a class we watched the three teams attempt to free-throw as many crumpled up pieces of paper in to the bin as possible. Based on the total number we then tried to come up with how many each subset/coalition of players would have gotten in. So for example, 2 out of 3 of the teams had one student crumple paper while the other 2 took shots. So whilst that individual did not get any in, they contributed an important part to the team effort.
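The fair-division concept used below, the Shapley value, averages each player's marginal contribution over all orders in which the grand coalition can assemble. Here is a minimal pure-Python sketch (the "glove game" numbers are a standard textbook example, not our class data):

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    # v maps frozensets of players to a worth, with v[frozenset()] == 0.
    # Each player's value is their marginal contribution, averaged over
    # all orders in which the grand coalition can form.
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            value[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: value[p] / n_fact for p in players}

# Textbook "glove game": L owns a left glove, R1 and R2 each own a right
# glove; a coalition is worth 1 exactly when it can form a pair.
players = ["L", "R1", "R2"]
subsets = [(), ("L",), ("R1",), ("R2",), ("L", "R1"), ("L", "R2"),
           ("R1", "R2"), ("L", "R1", "R2")]
v = {frozenset(s): (1 if "L" in s and len(s) > 1 else 0) for s in subsets}
phi = shapley_value(players, v)
# phi == {'L': 2/3, 'R1': 1/6, 'R2': 1/6} (up to floating point)
```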
Here are the characteristic functions that show what each team did:

Here is some Sage code that gives the Shapley value for each game (take a look at my class notes or at last year’s post to see how to calculate this). Let us define the first game:

If you click Evaluate above you see that the Shapley value is given by:

(This one we calculated in class.)

By changing the numbers above we get the following for the other two games.

• Game 2:
• Game 3:

This was a bit of fun and, most importantly from a class point of view, gave us some nice numbers to work from and calculate the Shapley value together. If anyone would like to read about the Shapley value a bit more, take a look at the Sage documentation, which not only shows how to calculate it using Sage but also goes over some of the mathematics (including another formulation).

## March 20, 2015

### Liang Ze

#### Character Theory Basics

This post illustrates some of SageMath’s character theory functionality, as well as some basic results about characters of finite groups.

## Basic Definitions and Properties

Given a representation $(V,\rho)$ of a group $G$, its character is a map $\chi: G \to \mathbb{C}$ that returns the trace of the matrices given by $\rho$:

A character $\chi$ is irreducible if the corresponding $(V,\rho)$ is irreducible.

Despite the simplicity of the definition, the (irreducible) characters of a group contain a surprising amount of information about the group. Some big theorems in group theory depend heavily on character theory.

Let’s calculate the character of the permutation representation of $D_4$. For each $g \in G$, we’ll display the pairs:

(The Sage cells in this post are linked, so things may not work if you don’t execute them in order.)

Many of the following properties of characters can be deduced from properties of the trace:

1. The dimension of a character is the dimension of $V$ in $(V,\rho)$. Since $\rho(\text{Id})$ is always the identity matrix, the dimension of $\chi$ is $\chi(\text{Id})$.
2.
Because the trace is invariant under similarity transformations, $\chi(hgh^{-1}) = \chi(g)$ for all $g,h \in G$. So characters are constant on conjugacy classes, and are thus class functions.
3. Let $\chi_V$ denote the character of $(V,\rho)$. Recalling the definitions of direct sums and tensor products, we see that

## The Character Table

Let’s ignore the representation $\rho$ for now, and just look at the character $\chi$:

This is succinct, but we can make it even shorter. From point 2 above, $\chi$ is constant on conjugacy classes of $G$, so we don’t lose any information by just looking at the values of $\chi$ on each conjugacy class:

Even shorter, let’s just display the values of $\chi$:

This single row of numbers represents the character of one representation of $G$. If we knew all the irreducible representations of $G$ and their corresponding characters, we could form a table with one row for each character. This is called the character table of $G$.

Remember how we had to define our representations by hand, one by one? We don’t have to do that for characters, because SageMath has the character tables of small groups built-in:

This just goes to show how important the character of a group is. We can also access individual characters as functions. Let’s say we want the last one:

Notice that the character we were playing with, $[4,2,0,0,0]$, is not in the table. This is because its representation $\rho$ is not irreducible. At the end of the post on decomposing representations, we saw that $\rho$ splits into two $1$-dimensional irreducible representations and one $2$-dimensional one. It’s not hard to see that the character of $\rho$ is the sum of rows 1, 4 and 5 in our character table:

Just as we could decompose every representation of $G$ into a sum of irreducible representations, we can express any character as a sum of irreducible characters.
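The permutation character $[4,2,0,0,0]$ can also be reproduced without Sage. Here is a plain Python sketch (my own encoding of $D_4$ as permutations of the square's vertices, not the post's Sage cells) that counts fixed points on each conjugacy class:

```python
from itertools import product

def compose(p, q):
    # (p o q)(i) = p[q[i]] for permutations given as tuples of images.
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

# D4 acting on the square's vertices 0, 1, 2, 3:
r = (1, 2, 3, 0)             # rotation by 90 degrees
s = (0, 3, 2, 1)             # reflection through the 0-2 diagonal
group = {(0, 1, 2, 3)}
frontier = {r, s}
while frontier:              # naive closure under composition
    group |= frontier
    frontier = {compose(a, b) for a, b in product(group, repeat=2)} - group

def chi(g):
    # Character of the permutation representation: number of fixed points.
    return sum(1 for i, j in enumerate(g) if i == j)

# Group the 8 elements into conjugacy classes; chi is constant on each.
classes, seen = [], set()
for g in sorted(group):
    if g in seen:
        continue
    cls = {compose(h, compose(g, inverse(h))) for h in group}
    seen |= cls
    classes.append(cls)

values = sorted((chi(next(iter(c))) for c in classes), reverse=True)
# values == [4, 2, 0, 0, 0], matching the character in the post
```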

The next post discusses how to do this easily, by making use of the Schur orthogonality relations. These are really cool relations among the rows and columns of the character table. Apart from decomposing representations into irreducibles, we’ll also be able to prove that the character table is always square!

## March 19, 2015

### Vince Knight

#### Playing stochastic games in class

The final blog post I am late in writing is about the stochastic game we played in class last week. The particular type of game I am referring to is also called a Markov game, where players play a series of normal form games, with the next game being picked from a random distribution whose nature depends on the strategy profiles. In other words, the choice of the players not only affects the utility they gain but also the probability of what the next game will be… I blogged about this last year so feel free to read about some of the details there.

The main idea is that one stage game corresponds to this normal form game (a prisoner’s dilemma):

at the other we play:

The probability distributions, of the form $(x,1-x)$ where $x$ is the probability with which we play the first game again are given by:

and the probability distribution for the second game was $(0,1)$. In essence the second game was an absorption game and so players would try and avoid it.

To deal with the potential for the game to last for ever we played this with a discounting factor $\delta=1/2$. Whilst in theory the discounting factor is interpreted as devaluing future payoffs, for the purposes of playing the game in class we used it as the probability with which the game continues.
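A quick sketch (plain Python, not the class materials) of why this works: continuing with probability $\delta$ makes the number of rounds geometric with mean $1/(1-\delta)$, so with $\delta=1/2$ a duel lasts 2 rounds on average.

```python
import random

def rounds_played(delta, rng):
    # Play until the continuation coin toss fails; return number of rounds.
    n = 1
    while rng.random() < delta:
        n += 1
    return n

rng = random.Random(42)      # fixed seed so the experiment is repeatable
delta = 0.5
samples = [rounds_played(delta, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# mean is close to 1 / (1 - delta) = 2
```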

You can see a photo of this all represented on the board:

We played this as a team game and you can see the results here:

As opposed to last year, no actual duel lasted more than one round: I had coded a die to sample at each step. The first roll of the die was to see if the game continued based on the $\delta$ parameter (this in effect ‘deals with infinity’). The second random sample was to find out which game we played next, and if we ever went to the absorption game things finished there.

The winner was team B who in fact defected after the initial cooperation in the first game (perhaps that was enough to convince other teams they would be cooperative).

After playing this, we calculated (using some basic algebra, examining each potential pure equilibrium) the Nash equilibria for this game and found that there were two pure equilibria: both players cooperating and both players defecting.

## March 17, 2015

### Vince Knight

#### Incomplete information games in class

Last week my class and I looked at the basics of games with incomplete information. The main idea is that players do not necessarily know where they are in an extensive form game. We repeated a game I played last year that you can read about here.

Here is a picture of the game we played (for details take a look at the post from last year):

We played a round robin where everyone played against everyone else and you can see the results in these two notebook pages that Jason kept track of:

We see that the winner was Reg, who, on both occasions of being the second player, went with the coin.

To find the Nash equilibria for this game we can translate it into a normal form game by doing the following two things:

1. Identify the strategy sets for the players
2. Average over the outcome probabilities

This gives the following strategies:

and

The strategies for the second player correspond to a 2-vector indexed by the information sets of the second player. In other words the first letter says what to do if the coin comes up as heads and the second letter says what to do if the coin comes up as tails:

1. $HH$: no matter what, play heads;
2. $HT$: if the coin comes up heads, play heads; if tails, play tails;
3. $TH$: if the coin comes up heads, play tails; if tails, play heads;
4. $TT$: no matter what, play tails.
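The strategies above can be represented directly as mappings from the second player's information set (the coin outcome) to an action. A small illustrative sketch (not code from the class):

```python
# Each second-player strategy maps the observed coin outcome to an action.
strategies = {
    "HH": {"heads": "H", "tails": "H"},  # always play heads
    "HT": {"heads": "H", "tails": "T"},  # agree with the coin
    "TH": {"heads": "T", "tails": "H"},  # disagree with the coin
    "TT": {"heads": "T", "tails": "T"},  # always play tails
}

def respond(strategy_name, coin):
    """The action a given strategy takes after observing the coin."""
    return strategies[strategy_name][coin]

print(respond("HT", "tails"))  # -> T
```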

Once we have done that and using the above ordering we can obtain the normal form game representation:

In class we obtained the Nash equilibria for this game by realising that the third column strategy ($TH$: always disagree with the coin) was dominated and then carrying out some indifference analysis.

Here let us just throw it at Sage (here is a video showing and explaining some of the code):

The equilibria returned confirm what we did in class: the first player can more or less randomly (with bounds on the distribution) pick heads or tails, but the second player should always agree with the coin.

#### Discussing the game theory of walking/driving in class

Last week, in game theory class we looked at pairwise contest games. To introduce this we began by looking at the particular game that one could use to model the situation of two individuals walking or driving towards each other:

The above models people walking/driving towards each other and choosing a side of the road. If they choose the same side they will not walk/drive into each other.

I got a couple of volunteers to simulate this and ‘walk’ towards each other having picked a side. We very quickly arrived at one of the pure stage Nash equilibria: both players choosing left or both choosing right.

We went on to compute the evolutionary stability of 3 potential stable equilibria:

1. Everyone driving on the left;
2. Everyone driving on the right;
3. Everyone randomly picking a side each time.

Note that the above corresponds to the three Nash equilibria of the game itself. You can see this using some Sage code immediately (here is a video I just put together showing how one can use Sage to obtain Nash equilibria) - just click on ‘Evaluate’:
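Assuming the usual payoffs for this coordination game (1 to each player when they pick the same side, 0 otherwise — the exact bi-matrix used in class may differ), a brute-force check over the pure strategy profiles recovers the two pure Nash equilibria:

```python
# Assumed payoff bi-matrix for the walking/driving game: coordinate -> 1, miss -> 0.
A = [[1, 0], [0, 1]]  # row player's payoffs (0 = Left, 1 = Right)
B = [[1, 0], [0, 1]]  # column player's payoffs (symmetric game)

def pure_nash_equilibria(A, B):
    """Return pure profiles where neither player gains by deviating unilaterally."""
    equilibria = []
    for r in range(2):
        for c in range(2):
            row_ok = all(A[r][c] >= A[r2][c] for r2 in range(2))
            col_ok = all(B[r][c] >= B[r][c2] for c2 in range(2))
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(A, B))  # -> [(0, 0), (1, 1)]
```

The third (mixed) equilibrium, both players picking a side uniformly at random, does not show up in a pure-profile scan; it is the one whose evolutionary stability fails below.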

We did these calculations in two ways:

1. From first principles, using the definitions of evolutionary stability (this took a while).
2. Using a clever theoretic result that in effect does the analysis for us once and for all.

Both gave us the same result: driving on a given side of the road is evolutionarily stable whereas everyone randomly picking a side is not (a nudge in any given direction would ensure people picked a side).

This kind of corresponds to the two (poorly drawn) pictures below:

To further demonstrate the instability of the ‘choose a random side’ situation here is a plot of the actual evolutionary process (here is a video that shows what is happening):

We see that if we start by walking randomly, the tiniest of mutations sends everyone to picking a side.
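This instability can be sketched in a few lines of replicator dynamics: if $x$ is the proportion choosing left, then $\dot{x} = x(f_L - \bar{f})$, and any tiny push away from $x=1/2$ grows. A rough Euler-step sketch, assuming the coordinate-or-miss payoffs (1 for matching, 0 otherwise):

```python
def replicator_trajectory(x0, steps=2000, dt=0.01):
    """Euler steps of x' = x * (f_L - fbar) for the coordination game
    with payoff 1 for matching sides and 0 otherwise."""
    x = x0
    for _ in range(steps):
        f_left = x           # expected payoff of choosing left vs the population
        f_right = 1 - x      # expected payoff of choosing right
        fbar = x * f_left + (1 - x) * f_right
        x += dt * x * (f_left - fbar)
    return x

print(replicator_trajectory(0.5))   # exactly 1/2 is a fixed point
print(replicator_trajectory(0.51))  # a tiny mutation drifts towards everyone-left
print(replicator_trajectory(0.49))  # ...or towards everyone-right
```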

## March 12, 2015

### Liang Ze

#### Animated GIFs

I really should be posting about character theory, but I got distracted making some aesthetic changes to this blog (new icon and favicon!) and creating animations like this:

I’m not putting this in a SageCell because this could take quite a while, especially if you increase the number of frames (by changing the parameters in srange), but feel free to try it out on your own copy of Sage. It saves an animated GIF that loops forever (iterations = 0) at the location specified by savefile.

## March 08, 2015

### Vince Knight

#### Playing an infinitely repeated game in class

Following the iterated prisoner’s dilemma tournament my class and I played last week, we went on to play a version of the game repeated infinitely many times. This post will briefly describe what we got up to.

As you can read in the post about this activity from last year, the way we play for an infinite amount of time (that would take a while) is to apply a discounting factor $\delta$ to the payoffs and to interpret this factor as the probability with which the game continues.

Before I go any further (and put up pictures with the team names) I need to explain something (for the readers who are not my students).

For every class I teach I insist on spending a fair while going over a mid module feedback form that is used at Cardiff University (asking students to detail 3 things they like and don’t like about the class). One student wrote (what is probably my favourite piece of feedback ever):

“Vince is a dick… but in a good way.”

Anyway, I mentioned that to the class during my feedback-feedback session and that explains one of the team names (which I found pretty amusing):

• Orange
• Where’s the gun
• We don’t know
• Vince is a good dick

Once we had the team names set up (and I stopped trying to stop laughing) I wrote some quick Python code that we could run after each iteration:

import random

continue_prob = .5

if random.random() < continue_prob:
    print('Game continues')
else:
    print('Game Over')

We started off by playing with $\delta=.5$ and here are the results:

You can see the various duels here:

As you can see, very little cooperation happened this way and in fact, because everyone could see what everyone else was doing, Orange took advantage of the last round to create a coalition and win. We also see one particular duel that cost two teams dearly (because the ‘dice rolls’ did not really help).

After this I suggested to the class that we play again but that no one got to see what was happening to the other teams (this was actually suggested to me by students the year before). We went ahead with this and used $\delta=.25$: so the game had less chance of carrying on.

You can see the result and duels here (this had to be squeezed on to a board that could be hidden):

Based on the theory we would expect more cooperation to be likely but as you can see this did not happen.

The tie at the end was settled with a game of Rock Paper Scissors Lizard Spock, which actually gave rise to a rematch of the Rock Paper Scissors Lizard Spock tournament we played earlier. Except this time Laura lost her crown :)

## February 26, 2015

### Sébastien Labbé

#### Arnoux-Rauzy-Poincaré sequences

In a recent article with Valérie Berthé [BL15], we provided a multidimensional continued fraction algorithm called Arnoux-Rauzy-Poincaré (ARP) to construct, given any vector $v\in\mathbb{R}_+^3$, an infinite word $w\in\{1,2,3\}^\mathbb{N}$ over a three-letter alphabet such that the frequencies of letters in $w$ exist and are equal to $v$, and such that the number of factors (i.e. finite blocks of consecutive letters) of length $n$ appearing in $w$ is linear and less than $\frac{5}{2}n+1$. We also conjecture that for almost all $v$ the constructed word describes a discrete path in the positive octant staying at a bounded distance from the Euclidean line of direction $v$.

In Sage, you can construct this word using the next version of my package slabbe-0.2 (not released yet, email me to press me to finish it). The one with frequencies of letters proportional to $(1, e, \pi)$ is:

sage: from itertools import repeat
sage: from slabbe.mcf import algo
sage: D = algo.arp.substitutions()
sage: it = algo.arp.coding_iterator((1,e,pi))
sage: w = words.s_adic(it, repeat(1), D)
sage: w
word: 1232323123233231232332312323123232312323...


The factor complexity is close to $2n+1$ and the balance is often less than or equal to three:

sage: w[:10000].number_of_factors(100)
202
sage: w[:100000].number_of_factors(1000)
2002
sage: w[:1000].balance()
3
sage: w[:2000].balance()
3


Note that staying at a bounded distance from the Euclidean line almost surely was proven in [DHS2013] for the Brun algorithm, another MCF algorithm.

Other approaches: Standard model and billiard sequences

Other approaches have been proposed to construct such discrete lines.

One of them is the standard model of Eric Andres [A03]. It is also equivalent to billiard sequences in the cube. It is well known that the factor complexity of billiard sequences is quadratic $p(n)=n^2+n+1$ [AMST94]. Experimentally, we can verify this. We first create a billiard word of some given direction:

sage: from slabbe import BilliardCube
sage: v = vector(RR, (1, e, pi))
sage: b = BilliardCube(v)
sage: b
Cubic billiard of direction (1.00000000000000, 2.71828182845905, 3.14159265358979)
sage: w = b.to_word()
sage: w
word: 3231232323123233213232321323231233232132...


We create some prefixes of $w$ that we represent internally as char*. The creation is slow because the implementation of billiard words in my optional package is in Python and is not that efficient:

sage: p3 = Word(w[:10^3], alphabet=[1,2,3], datatype='char')
sage: p4 = Word(w[:10^4], alphabet=[1,2,3], datatype='char') # takes 3s
sage: p5 = Word(w[:10^5], alphabet=[1,2,3], datatype='char') # takes 32s
sage: p6 = Word(w[:10^6], alphabet=[1,2,3], datatype='char') # takes 5min 20s


We see below that exactly $n^2+n+1$ factors of length $n$ appear in the prefix of length 1000000 of $w$, for each $n<30$:

sage: A = ['n'] + range(30)
sage: c3 = ['p_(w[:10^3])(n)'] + map(p3.number_of_factors, range(30))
sage: c4 = ['p_(w[:10^4])(n)'] + map(p4.number_of_factors, range(30))
sage: c5 = ['p_(w[:10^5])(n)'] + map(p5.number_of_factors, range(30)) # takes 4s
sage: c6 = ['p_(w[:10^6])(n)'] + map(p6.number_of_factors, range(30)) # takes 49s
sage: ref = ['n^2+n+1'] + [n^2+n+1 for n in range(30)]
sage: T = table(columns=[A,c3,c4,c5,c6,ref])
sage: T
n    p_(w[:10^3])(n)   p_(w[:10^4])(n)   p_(w[:10^5])(n)   p_(w[:10^6])(n)   n^2+n+1
+----+-----------------+-----------------+-----------------+-----------------+---------+
0    1                 1                 1                 1                 1
1    3                 3                 3                 3                 3
2    7                 7                 7                 7                 7
3    13                13                13                13                13
4    21                21                21                21                21
5    31                31                31                31                31
6    43                43                43                43                43
7    52                55                56                57                57
8    63                69                71                73                73
9    74                85                88                91                91
10   87                103               107               111               111
11   100               123               128               133               133
12   115               145               151               157               157
13   130               169               176               183               183
14   144               195               203               211               211
15   160               223               232               241               241
16   176               253               263               273               273
17   192               285               296               307               307
18   208               319               331               343               343
19   224               355               368               381               381
20   239               392               407               421               421
21   254               430               448               463               463
22   268               470               491               507               507
23   282               510               536               553               553
24   296               552               583               601               601
25   310               596               632               651               651
26   324               642               683               703               703
27   335               687               734               757               757
28   345               734               787               813               813
29   355               783               842               871               871
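The factor counting used in the table (`number_of_factors`) amounts to collecting the distinct length-$n$ blocks of the word. A naive pure-Python version (a sketch, much slower than the package implementation) is:

```python
def number_of_factors(word, n):
    """Count distinct blocks of n consecutive letters in `word` (a string or list)."""
    return len({tuple(word[i:i + n]) for i in range(len(word) - n + 1)})

# Small sanity checks on a toy word over {1, 2, 3}:
w = "123123"
print(number_of_factors(w, 1))  # -> 3  (factors: 1, 2, 3)
print(number_of_factors(w, 2))  # -> 3  (factors: 12, 23, 31)
```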


Billiard sequences generate paths that are at a bounded distance from a Euclidean line. This is equivalent to saying that the balance is finite. The balance is defined as the supremum, over all letters and all pairs of factors of the same length, of the difference in the number of occurrences of that letter. For billiard sequences, the balance is 2:

sage: p3.balance()
2
sage: p4.balance() # takes 2min 37s
2
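Spelling the definition out in code may help: for every length $n$, every pair of length-$n$ factors and every letter, take the largest difference in occurrence counts. A naive pure-Python sketch (exponentially slower than the package's `balance`):

```python
from itertools import combinations

def balance(word):
    """Max over n, letters a and pairs of length-n factors u, v of ||u|_a - |v|_a|."""
    letters = set(word)
    best = 0
    for n in range(1, len(word) + 1):
        factors = {word[i:i + n] for i in range(len(word) - n + 1)}
        for u, v in combinations(factors, 2):
            for a in letters:
                best = max(best, abs(u.count(a) - v.count(a)))
    return best

print(balance("aabb"))    # -> 2 (factors "aa" and "bb" differ by 2 in a's)
print(balance("ababab"))  # -> 1
```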


Other approaches: Melançon and Reutenauer

Melançon and Reutenauer [MR13] also suggested a method that generalizes Christoffel words in higher dimension. The construction is based on the application of two substitutions generalizing the construction of Sturmian sequences. Below we compute the factor complexity and the balance of some of their words over a three-letter alphabet.

On a three-letter alphabet, the two morphisms are:

sage: L = WordMorphism('1->1,2->13,3->2')
sage: R = WordMorphism('1->13,2->2,3->3')
sage: L
WordMorphism: 1->1, 2->13, 3->2
sage: R
WordMorphism: 1->13, 2->2, 3->3


Example 1: periodic case $LRLRLRLRLR\dots$. In this example, the factor complexity seems to be around $p(n)=2.76n$ and the balance is at least 28:

sage: from itertools import repeat, cycle
sage: W = words.s_adic(cycle((L,R)), repeat('1'))
sage: W
word: 1213122121313121312212212131221213131213...
sage: map(W[:10000].number_of_factors, [10,20,40,80])
[27, 54, 110, 221]
sage: [27/10., 54/20., 110/40., 221/80.]
[2.70000000000000, 2.70000000000000, 2.75000000000000, 2.76250000000000]
sage: W[:1000].balance()  # takes 1.6s
21
sage: W[:2000].balance()  # takes 6.4s
28


Example 2: $RLR^2LR^4LR^8LR^{16}LR^{32}LR^{64}LR^{128}\dots$ taken from the conclusion of their article. In this example, the factor complexity seems to be $p(n)=3n$ and balance at least as high (=bad) as $122$:

sage: W = words.s_adic([R,L,R,R,L,R,R,R,R,L]+[R]*8+[L]+[R]*16+[L]+[R]*32+[L]+[R]*64+[L]+[R]*128,'1')
sage: W.length()
330312
sage: map(W.number_of_factors, [10, 20, 100, 200, 300, 1000])
[29, 57, 295, 595, 895, 2981]
sage: [29/10., 57/20., 295/100., 595/200., 895/300., 2981/1000.]
[2.90000000000000,
2.85000000000000,
2.95000000000000,
2.97500000000000,
2.98333333333333,
2.98100000000000]
sage: W[:1000].balance()  # takes 1.6s
122
sage: W[:2000].balance()  # takes 6s
122


Example 3: some random ones. The complexity $p(n)/n$ oscillates between 2 and 3 for factors of length $n=1000$ in prefixes of length 100000:

sage: for _ in range(10):
....:     W = words.s_adic([choice((L,R)) for _ in range(50)],'1')
....:     print W[:100000].number_of_factors(1000)/1000.
2.02700000000000
2.23600000000000
2.74000000000000
2.21500000000000
2.78700000000000
2.52700000000000
2.85700000000000
2.33300000000000
2.65500000000000
2.51800000000000


For ten randomly generated words, the balance (of prefixes of length 2000) goes from 6 to 27, which is much more than what is obtained for billiard words or by our approach:

sage: for _ in range(10):
....:     W = words.s_adic([choice((L,R)) for _ in range(50)],'1')
....:     print W[:1000].balance(), W[:2000].balance()
12 15
8 24
14 14
5 11
17 17
14 14
6 6
19 27
9 16
12 12


# References

 [BL15] V. Berthé, S. Labbé, Factor Complexity of S-adic words generated by the Arnoux-Rauzy-Poincaré Algorithm, Advances in Applied Mathematics 63 (2015) 90-130. http://dx.doi.org/10.1016/j.aam.2014.11.001
 [DHS2013] Delecroix, Vincent, Tomás Hejda, and Wolfgang Steiner. “Balancedness of Arnoux-Rauzy and Brun Words.” In Combinatorics on Words, 119–31. Springer, 2013. http://link.springer.com/chapter/10.1007/978-3-642-40579-2_14.
 [A03] E. Andres, Discrete linear objects in dimension n: the standard model, Graphical Models 65 (2003) 92-111.
 [AMST94] P. Arnoux, C. Mauduit, I. Shiokawa, J. I. Tamura, Complexity of sequences defined by billiards in the cube, Bull. Soc. Math. France 122 (1994) 1-12.
 [MR13] G. Melançon, C. Reutenauer, On a class of Lyndon words extending Christoffel words and related to a multidimensional continued fraction algorithm. J. Integer Seq. 16, No. 9, Article 13.9.7, 30 p., electronic only (2013). https://cs.uwaterloo.ca/journals/JIS/VOL16/Reutenauer/reut3.html

### Vince Knight

#### This class teaches me to not trust my classmates: an iterated prisoner’s dilemma in class

On Monday, in class we played an iterated prisoner’s dilemma tournament. I have done this many times (both in outreach events with Paul Harper and in this class). This is always a lot of fun but none more so than last year when Paul’s son Thomas joined us. You can read about that one here.

The format of the game is as close to that of Axelrod’s original tournament as I think it can be. I split the class into 4 teams and we created a round robin where each team plays every other team over 8 consecutive rounds of the prisoner’s dilemma:

The utilities represent ‘years in prison’ and, over the 3 matches that each team plays (one against every other team), the goal is to minimise the total amount of time spent in prison.
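With illustrative ‘years in prison’ payoffs (the exact numbers used in class may differ), scoring an 8-round duel is only a few lines:

```python
# Illustrative 'years in prison' payoffs (to be minimised); the class's exact
# bi-matrix may differ. Entries: (my years, their years) given (my move, their move).
PAYOFFS = {
    ("C", "C"): (2, 2),  # mutual cooperation
    ("C", "D"): (5, 0),  # I am exploited
    ("D", "C"): (0, 5),  # I exploit
    ("D", "D"): (4, 4),  # mutual defection
}

def score_duel(moves_a, moves_b):
    """Total years in prison for each team over a sequence of rounds."""
    total_a = total_b = 0
    for a, b in zip(moves_a, moves_b):
        years_a, years_b = PAYOFFS[(a, b)]
        total_a += years_a
        total_b += years_b
    return total_a, total_b

# A single cooperation against constant defection over 8 rounds:
print(score_duel("CDDDDDDD", "DDDDDDDD"))  # -> (33, 28)
```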

Here are some photos from the game:

Here are the scores:

We see that ‘We will take the gun’ acquired the lowest total score and so they won the collection of cookies etc…

(The names followed a promise from me to let the team with the coolest name have a nerf gun… Can’t say this had the desired effect…)

At one point during the tournament, one team actually almost declared a strategy which was cool:

We will cooperate until you defect at which point we will reevaluate

This was pretty cool as I hadn’t discussed at all what a strategy means in a repeated game (i.e. I had not discussed the fact that a strategy in a repeated game takes account of both players’ play histories).
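To illustrate strategies as functions of both play histories, here are two classic examples written in that form (sketches only, not the code any team actually declared or submitted):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def grudger(my_history, their_history):
    """Cooperate until the opponent defects, then defect forever."""
    return "D" if "D" in their_history else "C"

print(tit_for_tat("CC", "CD"))  # -> D (copies the last defection)
print(grudger("CCC", "CDC"))    # -> D (never forgives)
```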

Here are all the actual duels:

You’ll also notice at the end that a coalition formed and one team agreed to defect so that they could share the prize. This happens about 50% of the time when we play this game but I never cease to be amused by it. Hopefully everyone found this fun and perhaps some even already agree with a bit of feedback I received on this course last year:

‘This class teaches me to not trust my classmates’

One of the other really cool things that happened after this class was H asking for a hand to submit a strategy to my Axelrod repository. She built a strategy called ‘Once Bitten’ that performs pretty well! You can see it here (click on ‘Blame’ and you can see the code that she wrote).

(Big thanks to Jason for keeping track of the scores and to Geraint for helping and grabbing some nice pictures)