September 21, 2014

Vince Knight

My thoughts on plotly ('the github for plots')

A while ago I saw plotly appear on my G+ stream. People seemed excited about it but I was too busy to look at it properly and just assumed: must be some sort of new matplotlib: ain’t nobody got time for that!

Then, one of the guys from plotly reached out saying I should take a look. I took a brief glance and realised that this was nothing like a new matplotlib and in fact looked pretty cool. So I dutifully put it on my to do list but very much near the bottom.

I’m writing this sitting in between sessions at PyCon UK 2014. One of the talks on the first day was by Chris from plotly. He gave a great talk (which I’ll share here once the video link is up) and I immediately threw ‘check out plotly’ to the top of my to do list.

How I got started

import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import numpy as np
import plotly.plotly as py

n = 50
x, y, z, s, ew = np.random.rand(5, n)
c, ec = np.random.rand(2, n, 4)
area_scale, width_scale = 500, 5

# Build a matplotlib scatter plot with the random sizes, colours and edge widths
fig, ax = plt.subplots()
sc = ax.scatter(x, y, c=c,
                s=np.square(s)*area_scale,
                edgecolor=ec,
                linewidth=ew*width_scale)
ax.grid()

# Convert the matplotlib figure into a plotly plot (this opens it in the browser)
plot_url = py.plot_mpl(fig)

That code automatically creates the following plotly plot (which you can edit, zoom in etc…):

By ‘automatically’ I mean: ‘opens up a web browser and your plot is there’!!

Doing something of my own

In my previous post I wrote about how to use Markov chains to obtain the expected wait in a tandem queue. Here’s a plot I put together comparing the analytical values to simulated values:

The code to obtain that particular plot is below:

# Libraries
import matplotlib.pyplot as plt
import csv
import plotly.plotly as py

# Get parameters

c1   = 12
N    = 9
c2   = 12
mu_1 = 1
mu_2 = .2
p    = .5

# Read analytical data
analytical_data = [[float(k) for k in row] for row in csv.reader(open('analytical.csv', 'r'))]

# Read simulation data
simulation_data = [[float(k) for k in row] for row in csv.reader(open('simulated.csv', 'r'))]

# Create the plot

fig = plt.figure()
ax = plt.subplot(111)
x_sim = [row[0] for row in simulation_data[::10]]  # The datasets have more data than I want to plot so skipping some values
y_sim = [row[1:] for row in simulation_data[::10]]
ax.boxplot(y_sim, positions=x_sim)
x_ana = [row[0] for row in analytical_data if row[0] <= max(x_sim)]
y_ana = [row[1] for row in analytical_data[:len(x_ana)]]
ax.plot(x_ana,y_ana)
plt.xticks(range(0,int(max(x_ana) + max(int(max(x_ana)) / 10,1)), max(int(max(x_ana)) / 10,1)))
ax.set_xlabel(r'$\Lambda$')
ax.set_ylabel('Mean expected wait')
title = r"$c_1=%s,\; N=%s,\; c_2=%s,\; \mu_1=%s,\; \mu_2=%s,\; p=%s$" % (c1, N, c2, mu_1, mu_2, p)

# Save the plot as a pdf file locally
plt.title(title)
plt.savefig("%s-%s-%s-%s-%s-%s.pdf" % (int(c1), int(N), int(c2), mu_1, mu_2, p), bbox_inches='tight')

# THIS IS THE ONLY LINE THAT I HAD TO ADD TO GET THIS UP TO plotly!
plot_url = py.plot_mpl(fig)

If you’d like to repeat the above you can download the analytical and simulated datafiles.

The result of that can be seen here:

Furthermore, that plot is just a ‘thing’ on my plotly profile, so you can see it at this url: https://plot.ly/~drvinceknight/2.

Getting other formats

On that page I can tweak the graph if I want to, and I can grab the plot in whatever format I want simply by adding the correct format extension to the url:
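For example, using the plot above (a quick sketch rather than anything official; I’m assuming the usual image extensions are supported, so adjust as needed):

# Grab the same plotly figure in a few formats by appending an extension to its url
import urllib
base_url = "https://plot.ly/~drvinceknight/2"
for extension in (".png", ".pdf", ".svg"):
    urllib.urlretrieve(base_url + extension, "expected_wait_plot" + extension)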

My overall thoughts

So right now I’m just kind of excited about the possibilities (too many ideas to coherently filter out the good ones). There are also packages for R, so I might try and get my students to play around with it in R when I teach it…

As a research tool, I think this will also be nice (it’s certainly the way to go). Recently I’ve been working remotely with two students, and being able to throw a png of a plot into a hangout chat is pretty cool (and mobile friendly), so maybe that’s something the plotly guys could think about…

At the end of the day: this is an awesome tool. Plotly ‘abstractifies’ plots so that people using different packages/languages can still talk to each other. One of the big things I’m forgetting to talk about in detail is that there’s a web tool that allows you to change colors, change titles, mess with the data etc. That’s also a very cool collaborative tool obviously as I can imagine throwing up a plot that a co-author who doesn’t like code could then tweak.

Similarly (if/when) publications start using smarter formats (than things that are restricted by the need to be printed on paper) you could even just embed the plots like I’ve done here (so people could zoom, grab the data etc…). Here’s another way I could put that:

Papers are where plots go to die, they can go to plotly to live…

Woops, I’ve started blurting out some ideas… Hopefully they’re good ones.

I look forward to playing around with this tool some more (I need to see how it behaves with a Sage plot…).

September 21, 2014 12:00 AM

September 19, 2014

Vince Knight

Calculating the expected wait in a tandem queue with blocking using Sage

In this post I’ll describe a particular mathematical model that I’ve been working on for the purpose of a research paper. This is actually part of some work that I’ve done with +James Campbell, an undergraduate who worked with me over the summer.

Consider the system shown in the picture below:

This is a system composed of ‘two queues in tandem’, each with a number of servers \(c_i\) and a service rate \(\mu_i\). It is assumed here (as is often the case in queueing theory) that the service times are exponentially distributed (so each service time is random with mean \(1/\mu_i\)).

Individuals arrive at the queue at rate \(\Lambda\) (the inter-arrival times are once again exponentially distributed).

There is room for up to \(N\) individuals to wait for a free server at the first station. After their service at the first station is complete, individuals leave the system with probability \(p\); if they don’t, and there is no free place in the next station (i.e. there are no fewer than \(c_2\) individuals in the second service center), then they become blocked.

There are a vast array of applications of queueing systems like the above (in particular in the paper we’re working on we’re using it to model a healthcare system).

In this post I’ll describe how to use Markov chains to be able to describe the system and in particular how to get the expected wait for an arrival at the queue.

I have discussed Markov chains before (mainly in posts on my old blog), so I won’t go into too much detail about the underlying theory (this is perhaps a slightly technical post, so it is mainly aimed at people familiar with queueing theory, but by all means ask in the comments if I can clarify anything).

The state space

One has to think about this carefully, as it’s important to keep track not only of where individuals are but also of whether or not they are blocked. Based on that, one might think of using a three-dimensional state space: \((i,j,k)\) where \(i\) would denote the number of people at the first station, \(j\) the number at the second and \(k\) the number at the first who are blocked. This wouldn’t be a terrible idea, but we can actually do things in a slightly neater and more compact way:

In the above \(i\) denotes the number of individuals in service or waiting for service at the first station and \(j\) denotes the number of individuals in service at the second station or blocked at the first station.

The continuous time transition rates between two states \((i_1,j_1)\) and \((i_2, j_2)\) are given by:

where \(\delta=(i_2,j_2)-(i_1,j_1)\).
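Explicitly (reconstructed to match the q method of the class defined below), the nonzero rates are:

\[
q\big((i_1,j_1),(i_2,j_2)\big)=
\begin{cases}
\Lambda & \text{if } \delta=(1,0),\\
\min(c_1-\max(j_1-c_2,0),\,i_1)\,\mu_1\,(1-p) & \text{if } \delta=(-1,1),\\
\min(c_1-\max(j_1-c_2,0),\,i_1)\,\mu_1\,p & \text{if } \delta=(-1,0),\\
\min(j_1,\,c_2)\,\mu_2 & \text{if } \delta=(0,-1),\\
0 & \text{otherwise.}
\end{cases}
\]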

Here’s a picture of the Markov Chain for \(N=c_1=c_2=2\):

The steady state probabilities

Using the above we can index our states and keep all the information in a matrix \(Q\). To obtain the steady state distribution of the chain (the probabilities of finding the queue in a given state) we then simply solve the matrix equation \(\pi Q = 0\)

subject to \(\sum \pi = 1\).

Here’s how we can do this using Sage:

class Tandem_Queue():
        """
        A class for an instance of the tandem_queue
        """
        def __init__(self, c_1, N, c_2, Lambda, mu_1, mu_2, p):
            self.c_1 = c_1
            self.c_2 = c_2
            self.N = N
            self.Lambda = Lambda
            self.mu_1 = mu_1
            self.mu_2 = mu_2
            self.p = p
            self.m = c_1 + c_2 + 1
            self.n = c_2 + 1
            self.state_space = [(i, j)  for j in range(c_1 + c_2 + 1) for i in range(self.c_1 + self.N - max(j - self.c_2, 0) + 1)]
            if p == 1:  # Reduces state space in particular case of p = 1
                self.state_space = [state for state in self.state_space if state[1] == 0]
            Q = [[self.q(state1, state2) for state2 in self.state_space] for state1 in self.state_space]
            for i in range(len(Q)):
                Q[i][i] = - sum(Q[i])
            self.Q = matrix(QQ, Q)
            self.expected_wait_cache = {}

        def q(self, state1, state2):
            """
            Returns the rate of transition between two given states.
            """
            delta = list(vector(state2) - vector(state1))
            if delta == [1, 0]:
                return self.Lambda
            if delta == [-1, 1]:
                return min(self.c_1 - max(state1[1] - self.c_2, 0), state1[0]) * self.mu_1 * (1 - self.p)
            if delta == [-1, 0]:
                return min(self.c_1 - max(state1[1] - self.c_2, 0), state1[0]) * self.mu_1 * self.p
            if delta == [0, -1]:
                return min(state1[1], self.c_2) * self.mu_2
            return 0

        def pi(self):
            """
            Solves linear system.
            """
            A = transpose(self.Q).stack(vector([1 for state in self.state_space]))
            return A.solve_right(vector([0 for state in self.state_space] + [1]))

Most of the above is glorified bookkeeping, but here’s a quick example showing what the above does and how it can be used. First let’s create an instance of our problem with \(N=c_1=c_2=2\), \(5\mu_2=2p=\mu_1=1\) and \(\Lambda=5\).

sage: small_example = Tandem_Queue(2,2,2,5,1,1/5,1/2)
sage: small_example.Q
22 x 22 dense matrix over Rational Field (use the '.str()' method to see the entries)

We see that if we return \(Q\) we get a 22 by 22 matrix which, if you recall the picture above, corresponds to the 22 states in that picture.

We can see the states by just typing:

sage: small_example.state_space
[(0, 0),
 (1, 0),
 (2, 0),
 (3, 0),
 (4, 0),
 (0, 1),
 (1, 1),
 (2, 1),
 (3, 1),
 (4, 1),
 (0, 2),
 (1, 2),
 (2, 2),
 (3, 2),
 (4, 2),
 (0, 3),
 (1, 3),
 (2, 3),
 (3, 3),
 (0, 4),
 (1, 4),
 (2, 4)]

If you check carefully they all correspond to the states of the picture above.

Now what we would like to know is the probability of being in any given state. To do this we need to solve the matrix equation \(\pi Q = 0\) such that \(\sum \pi=1\).

This is done in Sage (for any matrix Q) using the following:

sage: A = transpose(Q).stack(vector([1 for state in state_space]))
sage: A.solve_right(vector([0 for state in state_space] + [1]))

There we build a matrix A by stacking a row of 1s onto the transpose of \(Q\) and then solve the corresponding equation using solve_right. If you look at the class definition this was all defined earlier, so we can in fact just run:

sage: small_example.pi()
(974420487508335328592/55207801002325145206717477, 6717141852060739142960/55207801002325145206717477, 25263720112088475982400/55207801002325145206717477, 107693117265184715581000/55207801002325145206717477, 499825193288571759140000/55207801002325145206717477, 7567657557556535357400/55207801002325145206717477, 50835142813671411162000/55207801002325145206717477, 177836071295654602905000/55207801002325145206717477, 638540135036394350075000/55207801002325145206717477, 2305924001256099701875000/55207801002325145206717477, 26439192416069771765000/55207801002325145206717477, 185599515623092483825000/55207801002325145206717477, 700026867396942548625000/55207801002325145206717477, 2256398553097737112500000/55207801002325145206717477, 4700830318953618984375000/55207801002325145206717477, 61385774570987050093750/55207801002325145206717477, 444444998037114715312500/55207801002325145206717477, 3393156381219452445312500/55207801002325145206717477, 15476151589322058007812500/55207801002325145206717477, 41152314633066177343750/55207801002325145206717477, 352285141439825390625000/55207801002325145206717477, 1826827211896183837890625/4246753923255780400516729)

That’s not super helpful displayed like that (all the arithmetic is being done exactly) so here’s a quick plot of the probabilities:

sage: p = list_plot(small_example.pi(), axes_labels = ['State', 'Probability'])
sage: p

That plot could be made a lot prettier and more informative (for example by using the names of the states as xticks) but it will do for now. We can see from there, for example, that the most probable state of our queue (with the parameters we picked) is to be in the last state (see list above), which is \((2,4)\).

Out of interest, here’s a plot when we change to \(\Lambda=1/2\) (a tenth of what we used above):

We see that now the most probable state is the sixth state (Python/Sage indexing starts at 0), which corresponds to \((0,1)\).

Here’s an animated plot of the steady state distribution for a larger system as \(\Lambda\) increases, displayed in a more informative way (the two dimensions corresponding to \(i\) and \(j\)):

All of that is very nice and interesting but where things get very useful is when trying to calculate the mean time one would expect to wait in a queue.

Mean expected wait in the queue

If we consider all states \((i,j)\in S\) only a subset of them will actually imply a wait:

  • If there are fewer than \(c_1\) individuals in the first station then anyone who arrives has direct access to a server;
  • If there are \(N+c_1\) or more individuals in the first station (counting any blocked individuals) then anyone who arrives will be lost to the system.

With a little bit of thought (recalling what the \(i\)s and \(j\)s represent) we see that the states that incur a wait are given by:
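Written out explicitly (matching the condition used in the mean_expected_wait method below), this set is

\[
S_A = \left\{(i,j)\in S \,:\, c_1 \le i + \max(j-c_2,\,0) < c_1 + N\right\}.
\]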

Now if we know the expected wait when arriving in any state \((i,j) \in S_A\) we can get the mean wait as:

where \(w(i,j)\) denotes the expected time spent in any given state \((i,j)\). We are, in essence, summing over arrivals that will have a wait and dividing by the probability of an individual not being lost to the system, which happens when \(i + \max(j - c_2, 0) < c_1 + N\).
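Explicitly (writing \(\pi_{(i,j)}\) for the steady state probability of state \((i,j)\), and matching the mean_expected_wait method below), the mean wait is

\[
W = \frac{\sum_{(i,j)\in S_A} w(i,j)\,\pi_{(i,j)}}{\sum_{(i,j)\,:\, i+\max(j-c_2,0)<c_1+N} \pi_{(i,j)}}.
\]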

Obtaining \(w(i,j)\) is relatively simple. We consider the ‘almost’ same Markov chain as above (except that we ignore arrivals). In this instance a jump from one state to another will only occur if a service occurs at the first station (with rate \(\min(c_1-\max(j-c_2,0),i)\mu_1\)) or if a service occurs at the second station (with rate \(\min(c_2, j)\mu_2\)).

So the mean time spent in state \((i,j)\) is the inverse of the total exit rate:
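Explicitly (matching the mean_time_in_state method below), the mean time spent in state \((i,j)\) is

\[
\frac{1}{\min(c_1-\max(j-c_2,\,0),\,i)\,\mu_1 + \min(c_2,\,j)\,\mu_2}.
\]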

Using that notion we are in effect discretizing the ‘ignore arrivals’ Markov chain. Once a transition occurs we can obtain the probability of where the service occurs:

  • The probability of the service being at the first station:
  • The probability of the service being at the second station:
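Writing \(P_1(i,j)\) and \(P_2(i,j)\) for these two probabilities (the expressions below match the p_service_1 and p_service_2 methods), we have

\[
P_1(i,j) = \frac{\min(c_1-\max(j-c_2,0),\,i)\,\mu_1}{\min(c_1-\max(j-c_2,0),\,i)\,\mu_1 + \min(c_2,\,j)\,\mu_2},
\qquad
P_2(i,j) = \frac{\min(c_2,\,j)\,\mu_2}{\min(c_1-\max(j-c_2,0),\,i)\,\mu_1 + \min(c_2,\,j)\,\mu_2}.
\]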

We can use all of the above to put together a nice recursive formula for the expected wait \(w(i,j)\) in terms of the expected wait of states that are in effect getting closer and closer to having no wait:

where \(A\) is the set of states with no wait: those with \(i+\max(j-c_2,0) < c_1\).
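In that notation (and writing \(T(i,j)\) for the mean time in state given above), the recursion matching the expected_wait method below reads

\[
w(i,j) =
\begin{cases}
0 & \text{if } (i,j)\in A,\\
T(i,j) + p\,P_1(i,j)\,w(i-1,j) + (1-p)\,P_1(i,j)\,w(i-1,j+1) + P_2(i,j)\,w(i,j-1) & \text{otherwise,}
\end{cases}
\]

where, as in the code, a term is simply dropped whenever the corresponding state falls outside the state space.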

The recursive formula is actually very easy to implement in code (we use a dictionary to cache calculated values to make sure we don’t waste any time).

Here is a reduced version of the methods that need to be added to the class above to get this to work in Sage (note that the __init__ method above already sets up the self.expected_wait_cache = {} dictionary used for the caching):

    def _pi_dict(self):
        """
        Obtain a dictionary which indexes the states.
        """
        self.pi_list = self.pi()
        return {state:self.pi_list[index] for index, state in enumerate(self.state_space)}

    def p_service_1(self, state):
        """
        Returns the discretized probability of a service occurring at first station
        """
        if self.p == 1:
            return 1
        return min(self.c_1 - max(state[1]- self.c_2, 0), state[0]) * self.mu_1 / (min(self.c_1 - max(state[1]- self.c_2, 0), state[0]) * self.mu_1 + min(self.c_2, state[1]) * self.mu_2)

    def p_service_2(self, state):
        """
        Returns the discretized probability of a service occurring at second station
        """
        if self.p == 1:
            return 0
        return  min(self.c_2, state[1]) * self.mu_2 / (min(self.c_1 - max(state[1]- self.c_2, 0), state[0]) * self.mu_1 + min(self.c_2, state[1]) * self.mu_2)

    def mean_time_in_state(self, state):
        """
        Returns the mean time in any given state before a transition occurs
        """
        return  1 / (min(self.c_1 - max(state[1]- self.c_2, 0), state[0]) * self.mu_1 + min(self.c_2, state[1]) * self.mu_2)

    def expected_wait(self, state):
        """
        Function that returns the expected time till absorption for a given state
        """
        if state in self.expected_wait_cache:
            return self.expected_wait_cache[state]
        if state not in self.state_space:  # If state outside of boundary. (Might not need this after new conditions below)
            return 0
        if state[0] + max(state[1] - self.c_2, 0) < self.c_1:  # If absorbing state
            self.expected_wait_cache[state] = 0
            return 0
        self.expected_wait_cache[state] =  (self.mean_time_in_state(state) + self.p * self.p_service_1(state) * self.expected_wait((state[0] - 1, state[1])))
        if (state[0] - 1, state[1] + 1) in self.state_space:
            self.expected_wait_cache[state] += (1-self.p) * self.p_service_1(state) * self.expected_wait((state[0] - 1, state[1] + 1))
        if (state[0], state[1] - 1) in self.state_space:
            self.expected_wait_cache[state] += self.p_service_2(state) * self.expected_wait((state[0], state[1] - 1))
        return self.expected_wait_cache[state]

    def mean_expected_wait(self):
        """
        Returns the mean wait
        """
        self.pi_dict = self._pi_dict()
        accepting_states = [state for state in self.state_space if state[0] + max(state[1] - self.c_2, 0) < self.c_1 + self.N]
        prob_of_accepting = sum([self.pi_dict[state] for state in accepting_states])
        return sum([self.expected_wait(state) * self.pi_dict[state] for state in accepting_states]) / prob_of_accepting

Using all of the above we can get the expected wait for our system:

sage: small_example = Tandem_Queue(2,2,2,1/2,1,1/5,1/2)
sage: small_example.mean_expected_wait()
60279471210745371/50645610005072978
sage: n(_)
1.19022105182873

Below is a plot showing the effect on the mean wait as demand increases for a large system:

What that plot shows is the calculated values (the solid blue line) going through box plots of the simulated values. Perhaps in another blog post some day I’ll write about how to simulate the above but I think that’s probably sufficient for now.

If it’s of interest all of the above code can be downloaded here or at this gist.

September 19, 2014 12:00 AM

August 27, 2014

Sébastien Labbé

slabbe-0.1.spkg released

This is a summary of the functionalities present in the slabbe-0.1 optional Sage package. It depends on version 6.3 of Sage because it uses RecursivelyEnumeratedSet code that was merged in 6.3. It contains modules on digital geometry, combinatorics on words and more.

Install the optional spkg (depends on sage-6.3):

sage -i http://www.liafa.univ-paris-diderot.fr/~labbe/Sage/slabbe-0.1.spkg

In each of the example below, you first have to import the module once and for all:

sage: from slabbe import *

To construct the image below, make sure to add the tikz package to the preamble so that view is able to compile tikz code when called:

sage: latex.add_to_preamble("\\usepackage{tikz}")
sage: latex.extra_preamble()
'\\usepackage{tikz}'

Draw the part of a discrete plane

sage: p = DiscretePlane([1,pi,7], 1+pi+7, mu=0)
sage: d = DiscreteTube([-5,5],[-5,5])
sage: I = p & d
sage: I
Intersection of the following objects:
Set of points x in ZZ^3 satisfying: 0 <= (1, pi, 7) . x + 0 < pi + 8
DiscreteTube: Preimage of [-5, 5] x [-5, 5] by a 2 by 3 matrix
sage: clip = d.clip()
sage: tikz = I.tikz(clip=clip)
sage: view(tikz, tightpage=True)
/~labbe/Files/2014/discreteplane1pi7.png

Draw the part of a discrete line

sage: L = DiscreteLine([-2,3], 5)
sage: b = DiscreteBox([0,10], [0,10])
sage: I = L & b
sage: I
Intersection of the following objects:
Set of points x in ZZ^2 satisfying: 0 <= (-2, 3) . x + 0 < 5
[0, 10] x [0, 10]
sage: I.plot()
/~labbe/Files/2014/discreteline23.png

Double square tiles

This module was developed for the article on the combinatorial properties of double square tiles written with Ariane Garon and Alexandre Blondin Massé [BGL2012]. The original version of the code was written with Alexandre.

sage: D = DoubleSquare((34,21,34,21))
sage: D
Double Square Tile
  w0 = 3032321232303010303230301012101030   w4 = 1210103010121232121012123230323212
  w1 = 323030103032321232303                w5 = 101212321210103010121
  w2 = 2321210121232303232123230301030323   w6 = 0103032303010121010301012123212101
  w3 = 212323032321210121232                w7 = 030101210103032303010
(|w0|, |w1|, |w2|, |w3|) = (34, 21, 34, 21)
(d0, d1, d2, d3)         = (42, 68, 42, 68)
(n0, n1, n2, n3)         = (0, 0, 0, 0)
sage: D.plot()
/~labbe/Files/2014/fibo2.png
sage: D.extend(0).extend(1).plot()
/~labbe/Files/2014/fibo2extend0extend1.png

We have shown that, using two invertible operations (called SWAP and TRIM), every double square tile can be reduced to the unit square:

sage: D.plot_reduction()
/~labbe/Files/2014/fibo2reduction.png

The reduction operations are:

sage: D.reduction()
['SWAP_1', 'TRIM_1', 'TRIM_3', 'SWAP_1', 'TRIM_1', 'TRIM_3', 'TRIM_0', 'TRIM_2']

The result of the reduction is the unit square:

sage: unit_square = D.apply(D.reduction())
sage: unit_square
Double Square Tile
  w0 =     w4 =
  w1 = 3   w5 = 1
  w2 =     w6 =
  w3 = 2   w7 = 0
(|w0|, |w1|, |w2|, |w3|) = (0, 1, 0, 1)
(d0, d1, d2, d3)         = (2, 0, 2, 0)
(n0, n1, n2, n3)         = (0, NaN, 0, NaN)
sage: unit_square.plot()
/~labbe/Files/2014/unit_square.png

Since SWAP and TRIM are invertible operations, we can recover every double square from the unit square:

sage: E = unit_square.extend(2).extend(0).extend(3).extend(1).swap(1).extend(3).extend(1).swap(1)
sage: D == E
True

Christoffel graphs

This module was developed for the article on a d-dimensional extension of Christoffel Words written with Christophe Reutenauer [LR2014].

sage: G = ChristoffelGraph((6,10,15))
sage: G
Christoffel set of edges for normal vector v=(6, 10, 15)
sage: tikz = G.tikz_kernel()
sage: view(tikz, tightpage=True)
/~labbe/Files/2014/christoffelgraph6_10_15.png

Bispecial extension types

This module was developed for the article on the factor complexity of infinite sequences generated by substitutions written with Valérie Berthé [BL2014].

The extension type of an ordinary bispecial factor:

sage: L = [(1,3), (2,3), (3,1), (3,2), (3,3)]
sage: E = ExtensionType1to1(L, alphabet=(1,2,3))
sage: E
  E(w)   1   2   3
   1             X
   2             X
   3     X   X   X
 m(w)=0, ordinary
sage: E.is_ordinaire()
True

Creation of a strong-weak pair of bispecial words from a neutral not ordinaire word:

sage: p23 = WordMorphism({1:[1,2,3],2:[2,3],3:[3]})
sage: e = ExtensionType1to1([(1,2),(2,3),(3,1),(3,2),(3,3)], [1,2,3])
sage: e
  E(w)   1   2   3
   1         X
   2             X
   3     X   X   X
 m(w)=0, not ord.
sage: A,B = e.apply(p23)
sage: A
  E(3w)   1   2   3
    1
    2         X   X
    3     X   X   X
 m(w)=1, not ord.
sage: B
  E(23w)   1   2   3
    1          X
    2
    3              X
 m(w)=-1, not ord.

Fast Kolakoski word

This module was written for fun. It uses a Cython implementation inspired by the 10 lines of C code written by Dominique Bernardi and shared during Sage Days 28 in Orsay, France, in January 2011.

sage: K = KolakoskiWord()
sage: K
word: 1221121221221121122121121221121121221221...
sage: %time K[10^5]
CPU times: user 1.56 ms, sys: 7 µs, total: 1.57 ms
Wall time: 1.57 ms
1
sage: %time K[10^6]
CPU times: user 15.8 ms, sys: 30 µs, total: 15.8 ms
Wall time: 15.9 ms
2
sage: %time K[10^8]
CPU times: user 1.58 s, sys: 2.28 ms, total: 1.58 s
Wall time: 1.59 s
1
sage: %time K[10^9]
CPU times: user 15.8 s, sys: 12.4 ms, total: 15.9 s
Wall time: 15.9 s
1

This is much faster than the Python implementation available in Sage:

sage: K = words.KolakoskiWord()
sage: %time K[10^5]
CPU times: user 779 ms, sys: 25.9 ms, total: 805 ms
Wall time: 802 ms
1

References

[BGL2012] A. Blondin Massé, A. Garon, S. Labbé, Combinatorial properties of double square tiles, Theoretical Computer Science 502 (2013) 98-117. doi:10.1016/j.tcs.2012.10.040
[LR2014] S. Labbé, C. Reutenauer, A d-dimensional Extension of Christoffel Words, arXiv:1404.4021 (April 15, 2014).
[BL2014] V. Berthé, S. Labbé, Factor Complexity of S-adic sequences generated by the Arnoux-Rauzy-Poincaré Algorithm, arXiv:1404.4189 (April 2014).

by Sébastien Labbé at August 27, 2014 04:53 PM

Releasing slabbe, my own Sage package

Over the last two years I have written thousands of lines of private code for my own research, each module having between 500 and 2000 lines of code. The cleanest code corresponds to code written in conjunction with research articles. People who know me know that I systematically put docstrings and doctests in my code to facilitate reuse of the code by myself, but also with the idea of sharing it and eventually making it public.

I did not put that code into Sage because it was not mature enough. Also, when I tried to make a complete module go into Sage (see #13069 and #13346), the monstrous, never-evolving #12224 became a dependency of the first, and the second was unofficially reviewed with a request that I split it into smaller chunks to make the review process easier. I never did it because I had already spent too much time on it (making a module 100% doctested takes time). Also, the module corresponded to a published article and I wanted to leave it just like that.

Getting new modules into Sage is hard

In general, the introduction of a complete new module into Sage is hard, especially for beginners. Here are two examples I feel responsible for: #10519 is 4 years old and counting, and the author now has a new job and new responsibilities; in #12996, the author was discouraged by the amount of work asked for by the reviewers. There are a lot of things a beginner has to consider to obtain a positive review. And even for a more advanced developer, other difficulties arise. Indeed, a module introduces a lot of new functions and it may also introduce a lot of new bugs... and Sage developers are sometimes reluctant to give it a positive review. And even if it finally gets a positive review, it is not easily available to normal users of Sage until the next release of Sage.

Releasing my own Sage package

Still I felt the need around me to make my code public. But how? There are people (a few of course, but I know there are some) who are interested in reproducing computations and images done in my articles. This is why I came to the idea of releasing my own Sage package containing my public research code. This way both developers and colleagues who are users of Sage but not developers will be able to install and use my code. This will make people more aware of whether there is something useful in a module for them. And if one day somebody tells me: "this should be in Sage", then I will be able to say: "I agree! Do you want to review it?".

Old style Sage package vs new style git Sage package

Then I had to choose between the old and the new style for Sage packages. I did not like the new style, because

  • I wanted the history of my package to be independent of the history of Sage,
  • I wanted it to be as easy to install as sage -i slabbe,
  • I wanted it to work on any recent enough version of Sage,
  • I wanted to be able to release a new version and give it to a colleague who could install it right away without changing their own Sage installation (i.e., without updating the checksums).

Therefore, I chose the old style. I based my work on other optional Sage packages, namely the SageManifolds spkg and the ore_algebra spkg.

Content of the initial version

The initial version of the slabbe Sage package has modules concerning four topics: Digital geometry, Combinatorics on words, Combinatorics and Python class inheritance.

/~labbe/Files/2014/slabbe_content.png

For installation or for release notes of the initial version of the spkg, consult the slabbe spkg section of the Sage page of this website.

by Sébastien Labbé at August 27, 2014 04:48 PM

William Stein

What is SageMathCloud: let's clear some things up

[PDF version of this blog post]
"You will have to close source and commercialize Sage. It's inevitable." -- Michael Monagan, cofounder of Maple, told me this in 2006.
SageMathCloud (SMC) is a website that I first launched in April 2013, through which you can use Sage and all other open source math software online, edit LaTeX documents, IPython notebooks, Sage worksheets and many other types of documents, and track your todo items. You can write, compile, and run code in most programming languages, and use a color command line terminal. There is realtime collaboration on everything through shared projects, terminals, etc. Each project comes with a default quota of 5GB disk space and 8GB of RAM.

SMC is fun to use, pretty to look at, frequently backs up your work in many ways, is fault tolerant, encourages collaboration, and provides a web-based way to use standard command-line tools.

The Relationship with the SageMath Software

The goal of the SageMath software project, which I founded in 2005, is to create a viable free open source alternative to Magma, Mathematica, Maple, and Matlab. SMC is not mathematics software -- instead, SMC is best viewed by analogy as a browser-based version of a Linux desktop environment like KDE or Gnome. The vast majority of the code we write for SMC involves text editor issues (problems similar to those confronted by Emacs or Vim), personal information management, support for editing LaTeX documents, terminals, file management, etc. There is almost no mathematics involved at all.

That said, the main software I use is Sage, so of course support for Sage is a primary focus. SMC is a software environment that is being optimized for its users, who are mostly college students and teachers who use Sage (or Python) in conjunction with their courses. A big motivation for the existence of SMC is to make Sage much more accessible, since growth of Sage has stagnated since 2011, with the number one show-stopper obstruction being the difficulty of students installing Sage.

Sage is Failing

Measured by the mission statement, Sage has overall failed. The core goal is to provide similar functionality to Magma (and the other Ma's) across the board, and the Sage development model and community have failed to do this, since after 9 years, based on our current progress, we will never get there. There are numerous core areas of research mathematics that I'm personally familiar with (in arithmetic geometry) where Sage has barely moved in years and Sage does only a few percent of what Magma does. Unless there is a viable plan for these areas to all be systematically addressed in a reasonable timeframe (not just arithmetic geometry in Magma, but everything in Mathematica, Maple, etc.), we are definitely failing at the main goal I have for the Sage math software project.

I have absolutely no doubt that money combined with good planning and management would make it possible to achieve our mission statement. I've seen this hundreds of times over at a small scale at Sage Days workshops during the last decade. And let's not forget that with very substantial funding, Linux now provides a viable free open source alternative to Microsoft Windows. Just providing Sage developers with travel expenses (and 0 salary) is enough to get a huge amount done, when possible. But all my attempts with foundations and other clients to get any significant funding, at even the level of 1% of the funding that Mathematica gets each year, have failed. For the life of the Sage project, we've never got more than maybe 0.1% of what Mathematica gets in revenue. It's just a fact that the mathematics community provides Mathematica $50+ million a year, enough to fund over 600 fulltime positions, and they won't provide enough to fund one single Sage developer fulltime.

But the Sage mission statement remains, and even if everybody else in the world gives up on it, I HAVE NOT. SMC is my last ditch strategy to provide resources and visibility so we can succeed at this goal and give the world a viable free open source alternative to the Ma's. I wish I were writing interesting mathematical software, but I'm not, because I'm sucking it up and playing the long game.

The Users of SMC

During the last academic year (e.g., April 2014) there were about 20K "monthly active users" (as defined by Google Analytics), 6K weekly active users, and usually around 300 simultaneous connected users. The summer months have been slower, due to less teaching.

Numerically most users are undergraduate students in courses, who are asked to use SMC in conjunction with a course. There's also quite a bit of usage of SMC by people doing research in mathematics, statistics, economics, etc. -- pretty much all computational sciences. Very roughly, people create Sage worksheets, IPython notebooks, and Latex documents in somewhat equal proportions.

What SMC runs on

Technically, SMC is a multi-datacenter web application without specific dependencies on particular cloud provider functionality. In particular, we use the Cassandra database, and custom backend services written in Node.js (about 15,000 lines of backend code). We also use Amazon's Route 53 service for geographically aware DNS. There are two racks containing dedicated computers on opposites sides of campus at University of Washington with 19 total machines, each with about 1TB SSD, 4TB+ HDD, and 96GB RAM. We also have dozens of VM's running at 2 Google data centers to the east.

A substantial fraction of the work in implementing SMC has been in designing and implementing (and reimplementing many times, in response to real usage) a robust replicated backend infrastructure for projects, with regular snapshots and automatic failover across data centers. As I write this, users have created 66677 projects; each project is a self-contained Linux account whose files are replicated across several data centers.

The Source Code of SMC

The underlying source of SMC, both the backend server and frontend client, is mostly written in CoffeeScript. The frontend (which is nearly 20,000 lines of code) is implemented using the "progressive refinement" approach to HTML5/CSS/Javascript web development. We do not use any Javascript single page app frameworks, though we make heavy use of Bootstrap3 and jQuery. All of the library dependencies of SMC (e.g., CodeMirror, Bootstrap, jQuery) are licensed under very permissive licenses such as BSD/MIT. In particular, absolutely nothing in the Javascript software stack is GPL or AGPL licensed. The plan is that any SMC source code that will be open sourced will be released under the BSD license. Some of the SMC source code is not publicly available, and is owned by University of Washington. But other code, e.g., the realtime sync code, is already available.
Some of the functionality of SMC, for example Sage worksheets, communicate with a separate process via a TCP connection. That separate process is in some cases a GPL'd program such as Sage, R, or Octave, so the viral nature of the GPL does not apply to SMC. Also, of course the virtual machines are running the Linux operating system, which is mostly GPL licensed. (There is absolutely no AGPL-licensed code anywhere in the picture.)

Note that since none of the SMC server and client code links (even at an interpreter level) with any GPL'd software, that code can be legally distributed under any license (e.g., from BSD to commercial).
Also we plan to create a fully open source version of the Sage worksheet server part of SMC for inclusion with Sage. This is not our top priority, since there are several absolutely critical tasks that still must be finished first on SMC, e.g., basic course management.

The SMC Business Model

The University of Washington Center for Commercialization (C4C) has been very involved and supportive since the start of the project. There are no financial investors or separate company; instead, funding comes from UW, some unspent grant funds that were about to expire, and a substantial Google "Academic Education Grant" ($60K). Our first customer is the "US Army Engineer Research and Development Center", which just started a support/license agreement to run their own SMC internally. We don't offer a SaaS product for sale yet -- the options for what can be sold by UW are constrained, since UW is a not-for-profit state university. Currently users receive enhancements to their projects (e.g., increased RAM or disk space) in exchange for explaining to me the interesting research or teaching they are doing with SMC.

The longterm plan is to start a separate for-profit company if we build a sufficient customer base. If this company is successful, it would also support fulltime development of Sage (e.g., via teaching buyouts for faculty, support of students, etc.), similar to how Magma (and Mathematica, etc.) development is funded.

In conclusion, in response to Michael Monagan, you are wrong. And you are right.

by William Stein (noreply@blogger.com) at August 27, 2014 07:55 AM

You don't really think that Sage has failed, do you?

I just received an email from a postdoc in Europe, and a very longtime contributor to the Sage project. He's asking for a letter of recommendation, since he has to leave the world of mathematical software development (after a decade of training and experience), so that he can take a job at a hedge fund. He ends his request with the question:

> P.S. You don't _really_ think that Sage has failed, do you?

After almost exactly 10 years of working on the Sage project, I absolutely do think it has failed to accomplish the stated goal of the mission statement: "Create a viable free open source alternative to Magma, Maple, Mathematica and Matlab.".     When it was only a few years into the project, it was really hard to evaluate progress against such a lofty mission statement.  However, after 10 years, it's clear to me that not only have we not got there, we are not going to ever get there before I retire.   And that's definitely a failure.   

Here's a very rough quote I overheard at lunch today at Sage Days 61, from John Voight, who wrote much quaternion algebra code in Magma: "I'm making a list of what is missing from Sage that Magma has for working with quaternion algebras.  However, it's so incredibly daunting, that I don't want to put in everything.  I've been working on Magma's quaternion algebras for over 10 years, as have several other people.  It's truly daunting how much functionality Magma has compared to Sage."

The only possible way Sage will not fail at the stated mission is if I can get several million dollars a year in money to support developers to work fulltime on implementing interesting core mathematical algorithms.  This is something that Magma has had for over 20 years, and that Maple, Matlab, and Mathematica also have.   That I don't have such funding is probably why you are about to take a job at a hedge fund.    If I had the money, I would try to hire a few of the absolute best people (rather than a bunch of amateurs), people like you, Robert Bradshaw, etc. -- we know who is good. (And clearly I mean serious salaries, not grad student wages!)

So yes, I think the current approach to Sage has failed. I am going to try another approach, namely SageMathCloud. If it works, maybe the world will get a free open source alternative to Magma, Mathematica, etc. Otherwise, maybe the world never ever will. If you care like I do about having such a thing, and you're teaching a course, or whatever, maybe try using SageMathCloud. If enough people use SageMathCloud for college teaching, then maybe a business model will emerge, and Sage will get proper funding.

by William Stein (noreply@blogger.com) at August 27, 2014 07:52 AM

Vince Knight

A Sneak Preview of Game Theory in Sage (2/3): Matching Games

In my previous post here I described some of the Sage development that +James Campbell and I spent a lot of time this summer working on. In that post I described some work that has subsequently been accepted and included in the latest release of Sage (here’s the latest changelog): code to calculate the Shapley value.

In this post I’ll talk about the second of 3 tickets that James and I worked on: matching games. This has not actually been reviewed yet so please do help us get this code into Sage by taking a look at the ticket: 16331.

What is a matching game?

One of the best explanations of a matching game (also called the stable marriage problem) can be found in this video. That video really is awesome but it might be a bit long (it’s 25 minutes), so this very short video I threw together for a class I teach might be of interest (it is nowhere near as good as the previous one but it’s 3 minutes long).

Basically a matching game attempts to create links between two groups of people (referred to as suitors and reviewers) in such a way that no one wants to break their link:

In the above picture we see the preferences of the suitors and the reviewers. So \(c\) prefers \(B\) to \(A\), and \(A\) to \(C\).

Here is the actual definition of a stable matching that I give my students:

A matching game of size \(N\) is defined by two disjoint sets \(S\) and \(R\) of suitors and reviewers of size \(N\). Associated to each element of \(S\) and \(R\) is a preference list:
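(One way to write these lists, in terms of the functions \(f\) and \(g\) referred to later, is \(f:S\to R^N\) and \(g:R\to S^N\), where \(f(s)\) is the ordered tuple of reviewers ranked by suitor \(s\) and \(g(r)\) the ordered tuple of suitors ranked by reviewer \(r\).)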

A matching \(M\) is any bijection between \(S\) and \(R\). If \(s\in S\) and \(r\in R\) are matched by \(M\) we denote:

The above image defines a matching game, one possible matching could be given below:

It’s immediate to note however that \(B\) and \(c\) prefer each other to their current matching: so the above matching is unstable. In that example \((B,c)\) is called a ‘blocking pair’.

Luckily Gale and Shapley obtained an algorithm that guarantees a stable matching and this is what James and I put together in to Sage.

First, let’s define a matching game:

sage: suitr_pref = {'a': ('B', 'A', 'C'),
....:               'b': ('B', 'C', 'A'),
....:               'c': ('A', 'B', 'C')}
sage: reviewr_pref = {'A': ('a', 'b', 'c'),
....:                 'B': ('a', 'c', 'b'),
....:                 'C': ('b', 'c', 'a')}
sage: m = MatchingGame([suitr_pref, reviewr_pref])

You can see that python dictionaries are used for the functions \(f\) and \(g\) described above (the suitor and reviewer preferences).

If you tab complete after typing m. you can see some of the methods and attributes associated with the MatchingGame class:

sage: m.
m.add_reviewer  m.bi_partite    m.db            m.dumps         m.rename        m.reviewers     m.solve         m.version
m.add_suitor    m.category      m.dump          m.plot          m.reset_name    m.save          m.suitors

I won’t go into much of the detail here, but you can get some help on any one of those by typing ? after one of them (below you can see some of the output):

sage: m.solve?

Type:            instancemethod
File:            /Users/vince/sage/local/lib/python2.7/site-packages/sage/game_theory/matching_game.py
Definition:      m.solve(self, invert=False)
Docstring:
   Computes a stable matching for the game using the Gale-Shapley
   algorithm.

...

Let’s give that method a spin (as you can see it’ll use the Gale-Shapley algorithm).

sage: m.solve()
{'C': ['b'], 'a': ['B'], 'b': ['C'], 'A': ['c'], 'c': ['A'], 'B': ['a']}

We see that a matching has been obtained. You can see the corresponding matching here:

Another nice method that we implemented is to use the awesome graph theory stuff that’s in Sage so you can obtain the corresponding bi-partite graph:

sage: p = m.bi_partite()
Bipartite graph on 6 vertices
sage: p.show()

You can see the corresponding plot here:

All of the above has not been reviewed yet so if you do have any comments they’d be very gratefully received. If you actually went over to trac and took a look at it there that would be great but otherwise just commenting here would be awesome.

This is the second in a series of 3 posts that I’ll get around to writing, in the next one I’ll cover ticket 16333: Normal Form Game. This is the biggest contribution by James as it involved interfacing with two other packages and also coding up a bespoke support enumeration algorithm.

August 27, 2014 12:00 AM

August 23, 2014

Nikhil Peter

GSoC: An End, And A New Beginning

Well, it’s officially done. As per my proposal, the project has been officially completed. It’s been a rollercoaster ride of new experiences, a ton of code(by my count its somewhere around 20k lines or so, but GitHub shows a much larger number) and some unforgettable memories. The app is nowhere near perfect, however, and I […]

by hav3n at August 23, 2014 09:15 AM

August 22, 2014

Simon Spicer

GSoC: Wrapping Up

Today marks the last day of my Google Summer of Code 2014 project. Evaluations are due midday Friday PDT, and code submissions for successfully passed projects start soon thereafter. The end result of my project can be found at Sage Trac Ticket 16773. In total it's just over 2000 lines of Python and Cython code, to be added to the next major Sage release (6.4) if/when it gets a final thumbs-up review.

When I write just the number of lines of code it doesn't sound like very much at all - I'm sure there are GSoC projects this year that produced at least 10k lines of code! However, given that what I wrote is complex mathematical code that needed a) significant optimization, and b) to be mathematically sound in the first place, I'd say that isn't too bad. Especially since the code does what the project description aimed for it to do: compute analytic ranks of rational elliptic curves very quickly with a very low rate of failure. Hopefully this functionality can and will be used to produce numerical data for number theory research in the years to come.

The Google Summer of Code has been a thoroughly rewarding experience for me. It's a great way to sharpen one's coding skills and get paid in the process. I recommend it for any university student who's looking to go into any industry that requires programming skills; I'd apply to do it again next year if I wasn't planning to graduate then!

by Simon Spicer (noreply@blogger.com) at August 22, 2014 10:04 AM

August 19, 2014

Amit Jamadagni

Edits, dowker notation, HOMFLY …

Hello everyone, This week we have been working on editing the code to reach the required standards, alongside running the tests. A decent amount of time has been spent on documentation. Plot methods and 3D coordinates seem to be … Continue reading

by esornep at August 19, 2014 08:31 PM

August 18, 2014

Harald Schilly

New combinatorial designs in Sage - by Nathann Cohen

This is a guest post by Nathann Cohen. 


New combinatorial designs in Sage

Below, these graphs are a decomposition of a $K_{13}$ (i.e. the complete graph on 13 points) into copies of $K_4$. Pick two vertices you like: they appear exactly once together in one of the $K_4$.


The second graph shows a decomposition of a $K_{4,4,4}$ (i.e. the complete multipartite graph on $4\times 3$ points) into copies of $K_3$. Pick two vertices you like from different groups: they appear exactly once together in one of the $K_3$.




Sage has gotten quite good at building such decompositions (a specific kind of combinatorial designs) when they exist. This post is about them.

The first object belongs to a family called Balanced Incomplete Block Designs (or $(n,k)$-BIBD), which are defined as "a collection $\mathcal S$ of sets, all of them with size $k$ (here $k=4$), such that any pair of points of a set $X$ with $|X|=n$ (here $n=13$) appears in exactly one set of $\mathcal S$".

The second belongs to the family of Transversal Designs (or $TD(k,n)$) which have a similar definition: consider a set $X$ containing $k$ groups (here $k=3$) of $n$ vertices (here $n=4$). A collection $\mathcal S$ of sets, each of which contains one point from each group, is a $TD(k,n)$ if any two points from different groups appear together in exactly one set of $\mathcal S$.

The main problem with combinatorial designs is to know whether they exist. And that is not obvious. Sage does what it can about that:
  • If you want it to build a $(14,4)$-BIBD, it will tell you that none exists.
  • If you want it to build a $(16,4)$-BIBD, it will tell you that one exists.
  • If you want it to build a $(51,6)$-BIBD, it will tell you that it just does not know whether one exists (and nobody knows better at the moment)
Examples here:

sage: designs.balanced_incomplete_block_design(14,4,existence=True)
False
sage: designs.balanced_incomplete_block_design(16,4,existence=True)
True
sage: designs.balanced_incomplete_block_design(51,6,existence=True)
Unknown

For a developer (and design lover), the game consists in teaching Sage how to build all combinatorial designs that appear in some research paper. This is what has been happening for BIBDs as well as for Transversal Designs, on which a LOT of sweat was spent these last months.

For Transversal designs the game is a bit different, as we know that a $TD(k-1,n)$ exists whenever a $TD(k,n)$ exists. Thus, the game consists in finding the largest integer $k_n$ such that a $TD(k_n,n)$ exists. This game is hardly new, and hardly straightforward: In the Handbook of Combinatorial Designs, one can find the table of such $k$ up to $n=10000$ (see here).
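For example (assuming designs.transversal_design accepts the same existence=True flag as the BIBD constructor above; since at most $n-1$ MOLS of order $n$ can exist, one expects $k_n \leq n+1$):

sage: designs.transversal_design(6, 5, existence=True)
True
sage: designs.transversal_design(7, 5, existence=True)
False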

The good thing about Sage is that it does not just claim that such a design exists: it also builds it, and there is no better existence proof than that (it is very very quick to check that a combinatorial design is valid). The other good thing is that there is no common database for such data (the Handbook is not updated/printed every night), and that by teaching Sage all new designs found by researchers we build such a database. And it already contains designs that were not known when the Handbook was printed.

Finally, the other other good thing about Sage is that it will soon be able to tell you where those designs come from. Indeed, the most powerful results in the field of Transversal Designs are of the shape "If there exists a $TD(k_1,n_1)$, and a $TD(k_2,n_2)$, ..., and a $TD(k_c,n_c)$, then you can combine them all to obtain a $TD(k,n)$ with $k=k(k_1,...,k_c)$ and $n=n(n_1,...,n_c)$". And it is never very clear how to invert these functions: if you want to build a $TD(k,n)$, which integers $k_1,...,k_c,n_1,...,n_c$ should you pick?

Sage knows. It must know it, in order to build these designs anyway. And you can find that data inside. And soon, we will teach it to give you the bibliographical references of the papers in which you can find the right construction to produce the $TD(k,n)$ that you want. And we will provide the right parameters. And the world will be at peace.

A couple of things before we part:
  • Transversal Designs (TD), Orthogonal Arrays (OA), and Mutually Orthogonal Latin Squares (MOLS) are all equivalent objects.
  • We write a LOT of Transversal Designs code these days, so expect all this to improve very fast.
  • You can learn what Sage knows of combinatorial designs right here.
Finally, there are far too many combinatorial designs for one man to learn. If you love combinatorial designs, come join us: Vincent Delecroix, you, and I have code to write together. And if you know of a related mathematical result that Sage ignores, come tell us: we could not have gone so far without the mathematical knowledge of Julian Abel. And Sage does not know everything yet.

Have fuuuuuuuuuuuuuuuuuuun !

Nathann

by Harald Schilly (noreply@blogger.com) at August 18, 2014 09:32 PM

August 15, 2014

Simon Spicer

How big should Delta be?

Let's take a look at the central formula in my GSoC rank estimation code. Let $E$ be a rational elliptic curve with analytic rank $r$. Then
$$ r < \sum_{\gamma} \text{sinc}^2(\Delta\gamma) =  \frac{1}{\pi\Delta} \left[ C_0 + \frac{1}{2\pi\Delta}\left(\frac{\pi^2}{6}-\text{Li}_2\left(e^{-2\pi\Delta}\right) \right) + \sum_{n=1}^{e^{2\pi\Delta}} c_n\left(1 - \frac{\log n}{2\pi\Delta}\right) \right] $$
where

  • $\gamma$ ranges over the nontrivial zeros of the $L$-function attached to $E$
  • $\Delta$ is a positive parameter
  • $C_0 = -\eta + \log\left(\frac{\sqrt{N}}{2\pi}\right)$; $\eta$ is the Euler-Mascheroni constant $=0.5772\ldots$ and $N$ is the conductor of $E$
  • $\text{Li}_2(x)$ is the dilogarithm function, defined by $\text{Li}_2(x) = \sum_{k=1}^{\infty} \frac{x^k}{k^2}$
  • $c_n$ is the $n$th coefficient of the logarithmic derivative of the $L$-function of $E$.
The thing I want to look at in this post is that parameter $\Delta$. The larger you make it, the closer the sum on the left hand side over the zeros is to the analytic rank, so when trying to determine the rank of $E$ we want to pick as large a $\Delta$ as we can. However, the bigger this parameter is the more terms we have to compute in the sum over the $c_n$ on the right hand side; moreover the number of terms - and thus the total computation time - scales exponentially with $\Delta$. This severely constrains how big we can make $\Delta$; generally a value of $\Delta=2$ may take a second or two for a single curve on SageMathCloud, while $\Delta=3$ may take upwards of an hour. For the average rank project I'm working on I ran the code on 12 million curves using $\Delta=1.3$; the total computation time was about 4 days on SMC.

However, it should be clear that using too large a $\Delta$ is overkill: if you run the code on a curve with $\Delta=1$ and get a bound of zero out, you know that curve's rank is exactly zero (since it's at most zero, and rank is a non-negative integer). Thus using larger $\Delta$ values on that curve will do nothing except provide you the same bound while taking much longer to do so.

This raises the question: just how big a $\Delta$ value is good enough? Can we, given some data defining an elliptic curve, decide a priori what size $\Delta$ to use so that a) the computation returns a bound that is likely to be the true rank of the curve, and b) it will do so in as little time as possible?

The relevant invariant to look at here is the conductor $N$ of the elliptic curve; go back to the formula above and you'll see that the zero sum includes a term which is $O\left(\frac{\log(N)}{2\pi\Delta}\right)$ (coming from the $\frac{1}{\pi \Delta} C_0$ term). This means that the size of the returned estimate will scale with $\log(N)$: for a given $\Delta$, the bound returned on a curve with a 10-digit conductor will be about double that which is returned for a 5-digit conductor curve, for example. However, we can compensate for this by increasing $\Delta$ accordingly.

To put it all more concretely we can pose the following questions:
  • Given an elliptic curve $E$ with conductor $N$, how large does $\Delta$ need to be in terms of $N$ so that the returned bound is guaranteed to be the true analytic rank of the curve?
  • Given a conductor size $N$ and a proportionality constant $P \in [0,1]$, how big does $\Delta$ have to be in terms of $N$ and $P$ so that at least $P\cdot 100$ percent of all elliptic curves with conductor of size about $N$ will, when the rank estimation code is run on them with that $\Delta$ value, yield returned bounds that are equal to their true analytic rank?
[Note: in both of the above questions we are making the implicit assumption that the returned rank bound is monotonically decreasing for increasing $\Delta$. This is not necessarily the case: the function $y = \text{sinc}^2(x)$ is not a decreasing function in $x$. However, in practice any upwards fluctuation we see in the zero sum is small enough to be eliminated when we take the floor function to get an integer rank bound.]

A $\Delta$ CHOICE GOOD ENOUGH FOR MOST CURVES


The first question is easier to phrase, but more difficult to answer, so we will defer it for now. To answer the second question, it is useful to mention what we know about the distribution and density of nontrivial zeros of an elliptic curve $L$-function.

Using some complex analysis we can show that, for the $L$-function of an elliptic curve with conductor $N$, the expected number of zeros in the critical strip with imaginary part at most $T$ is $O(T\log N + T\log T)$. That is, the expected zero density has two distinct components: a part that scales linearly with the log of the conductor, and a part that doesn't scale with the conductor (but does scale slightly faster than linearly with how far up the critical line you go).

Consider the following: if we let 
$$\Delta(N) = \frac{C_0}{\pi} = \frac{1}{\pi}\left(-\eta+\log\left(\frac{\sqrt{N}}{2\pi}\right)\right)$$
then the first term on the right hand side of the zero sum formula is precisely 1 - this is the term that comes from the $\log N$ part of the zero density. The next term - the one involving $\frac{\pi^2}{6}-\text{Li}_2\left(e^{-2\pi\Delta}\right)$ - comes from the part of the density independent of $N$; because it is divided by $\Delta^2$, and this choice of $\Delta$ grows with $N$, it goes to zero as the curve's conductor increases. The third term contains the $c_n$ coefficients which (per Sato-Tate) will be positive half the time and negative half the time, so the entire sum could be positive or negative; we therefore expect its contribution to be small on average when we look at a large number of elliptic curves.

It thus stands to reason that for this value of $\Delta$, and when the conductor $N$ is sufficiently large, the zero sum will be about 1, plus or minus a smallish amount coming from the $c_n$ sum. This argument is by no means rigorous, but it suggests the zero sum should be within 2 of the actual analytic rank most of the time. Couple that with knowledge of the root number and you get a rank upper bound which is equal to the actual analytic rank in all but a few pathological cases.

Empirical evidence bears this out. I ran my rank estimation code with this choice of $\Delta$ scaling on the whole Cremona database, which contains all elliptic curves up to conductor 350000:
The figure shows, for each conductor value $N$ on the $x$-axis, the proportion of all curves up to that conductor for which the computed rank bound was strictly greater than the true rank (assuming BSD and GRH as always).
As you can see, the percentage of curves for which the rank bound is strictly greater than the rank holds fairly constant at about 0.25%. That's minuscule: what this is saying is that if you type in a random Weierstrass equation, there is only about a 1 in 400 chance that the rank bounding code with $\Delta = \frac{C_0}{\pi}$ will return a value that isn't the actual analytic rank. Moreover, the code runs considerably faster than traditional analytic rank techniques, so if you wanted to, for example, compute the ranks of a database of millions of elliptic curves, this would be a good first port of call.

Of course, this $\Delta$ scaling approach is by no means problem-free. Some more detailed analysis will show that, with the scaling stated above, the runtime of the code will actually be $O(N)$ (omitting log factors); i.e. the asymptotic scaling is actually worse than that of traditional analytic rank methods, which rely on evaluating the $L$-function directly and are thus $O(\sqrt{N})$. It's just that this code has some very small constants sitting in front, so the crossover point is at conductor values large enough that neither method is feasible anyway.

This choice of $\Delta$ scaling works for conductor ranges up to about $10^9$; that corresponds to $\Delta \approx 2.5$, which will give you a total runtime of about 10-20 seconds for a single curve on SMC. Increase the conductor by a factor of 10 and your runtime will also go up tenfold.

For curves of larger conductor, instead of setting $\Delta = \frac{C_0}{\pi}$ we can choose to set $\Delta$ to be $\alpha\cdot \frac{C_0}{\pi}$ for any $\alpha \in [0,1]$; the resulting asymptotic runtime will then be $O(N^{\alpha})$, at the expense of having a reduced proportion of elliptic curves where rank bound is equal to true rank.
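To make the scaling concrete, here is a tiny helper (an illustration only - delta_for_conductor is a name I've made up, not something in the actual code) that computes $\Delta = \alpha\cdot\frac{C_0}{\pi}$ for a given conductor:

from math import log, pi, sqrt

eta = 0.5772156649  # the Euler-Mascheroni constant

def delta_for_conductor(N, alpha=1.0):
    """Return Delta = alpha * C_0 / pi, where C_0 = -eta + log(sqrt(N)/(2*pi))."""
    return alpha * (-eta + log(sqrt(N) / (2 * pi))) / pi

print(delta_for_conductor(350000))  # about 1.26, at the top of the Cremona database
print(delta_for_conductor(10**9))   # about 2.53, matching the ~2.5 quoted above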

HOW LARGE DOES $\Delta$ HAVE TO BE TO GUARANTEE TRUE RANK?


When we use $\Delta = \frac{C_0}{\pi}$, the curves for which the returned rank estimate is strictly larger than the true rank are precisely those which have unusually low-lying zeros. For example, the rank 0 curve with Cremona label 256944c1 has a zero with imaginary part at 0.0256 (see here for a plot), compared to an expected value of 0.824. Using $\Delta = \frac{C_0}{\pi}$ on this curve means $\Delta \approx 1.214$; if we compute the corresponding zero sum with this value of $\Delta$ we get a value of 2.07803. The smallest value of $\Delta$ for which we get a zero sum value of less than 2 is empirically about 2.813; at this point taking the floor and invoking parity tells us that the curve's rank is zero.

The above example demonstrates that if we want to guarantee that the returned rank bound is the true analytic rank, we are forced to increase the size of $\Delta$ to something larger than $\frac{C_0}{\pi}$. Do we need to increase $\Delta$ by a fixed value independent of $N$? Do we need to increase it by some constant factor? Or does it need to scale faster than $\log N$? These are hard questions to answer; it comes down to determining how close to the central point the lowest nontrivial zero can be as a function of the conductor $N$ (or some other invariants of $E$), which in turn is intimately related to estimating the size of the leading coefficient of the $L$-series of $E$ at the central point. This is already the topic of a previous post: it is a question that I hope to make progress in answering in my PhD dissertation.

by Simon Spicer (noreply@blogger.com) at August 15, 2014 10:44 AM

August 14, 2014

Simon Spicer

Things are Better in Parallel

A recent improvement I've implemented in my GSoC code is to allow for parallelized computation. The world has rapidly moved to multi-core as a default, so it makes sense to write code that can exploit this. And it turns out that the zero sum rank estimation method that I've been working on can be parallelized in a very natural way.

THE @parallel DECORATOR IN SAGE


But first: how does one compute in parallel in Sage? Suppose I have written a function in a Sage environment (e.g. a SageMathCloud worksheet, .sage file, Sage console etc.) which takes in some input and returns some output. The simple example f below takes in a number n and returns the square of that number.

sage: def f(n):
....:     return n*n
....: 
sage: f(2),f(3),f(5)
(4, 9, 25)

This is a fairly straightforward beast; put in a value, get a value out. But what if we have some computation that requires evaluating that function on a large number of inputs? For example, say we want to compute the sum of the first 10 million squares. If you only have one processor to tap, then you're limited to calling f over and over again in series:

sage: def g(N):
....:     y = 0
....:     for n in range(N+1):
....:         y += f(n)
....:     return y
....: 
sage: %time g(10000000)
CPU times: user 17.5 s, sys: 214 ms, total: 17.7 s
Wall time: 17.6 s
333333383333335000000

In this example you could of course invoke the formula for the sum of the first $n$ squares and just write down the answer without having to add anything up, but in general you won't be so lucky. You can optimize the heck out of f, but in general when you're limited to a single processor you're confined to iterating sequentially over all the values you need to consider.

However, if you have 2 processors available you could try to write code that splits the work into two roughly equal parts that can run relatively independently. For example, for our function we could compute the sum of all the even squares up to a given bound in one process and the sum of all the odd squares in another, and then add the two results together to get the sum of all squares up to our bound.

Sage has a readily available mechanism to do exactly this: the @parallel decorator. To enable parallel computation in your function, put @parallel above your function definition (the decorator can take some parameters; below ncpus=2 tells it that we want to use 2 processors). Note that we also have to modify the function: now it no longer only takes the bound up to which we must add squares, but also a flag indicating whether we should consider even or odd squares.

sage: @parallel(ncpus=2)
....: def h(N,parity):
....:     y = 0
....:     if parity=="even":
....:         nums = range(0,N+1,2)
....:     elif parity=="odd":
....:         nums = range(1,N+1,2)
....:     for n in nums:
....:         y += f(n)
....:     return y

Instead of calling h with its standard sequence of parameters, we can pass it a list of tuples, where each tuple is a valid sequence of inputs. Sage then sends each tuple of inputs off to an available processor and evaluates the function on them there. What's returned is a generator object that can iterate over all the outputs; we can always see the output directly by calling list() on this returned generator:

sage: for tup in list(h([(1000,"even"),(1000,"odd")])):
....:     print(tup)
....: 
(((1000, 'even'), {}), 167167000)
(((1000, 'odd'), {}), 166666500)

Note that the tuple of inputs is also returned. Because we're doing things in parallel, we need to know which output corresponds to which input, especially since processes may finish at different times and return order is not necessarily the same as the order of the input list.

Finally, we can write a wrapper function which calls our parallelized function and combines the returned results:

sage: def i(N):
....:     y = 0
....:     for output in h([(N,"even"),(N,"odd")]):
....:         y += output[1]
....:     return y
....: 
sage: %time i(10000000)
CPU times: user 1.76 ms, sys: 33.2 ms, total: 35 ms
Wall time: 9.26 s
333333383333335000000

Note that i(10000000) produces the same output as g(10000000), but in about half the time, as the work is split between two processes instead of one. This is the basic gist of parallel computation: write code that can be partitioned into parts that can operate (relatively) independently; run those parts on different processors simultaneously; and then collect the returned outputs and combine them to produce the desired result.

PARALLELIZING THE ZERO SUM COMPUTATION


Let's take a look at the rank estimating zero formula again. Let $E$ be a rational elliptic curve with analytic rank $r$. Then

\begin{align*}
r < \sum_{\gamma} \text{sinc}^2(\Delta\gamma) &=  \frac{1}{\pi \Delta}\left(-\eta+\log\left(\frac{\sqrt{N}}{2\pi}\right)\right) \\
&+ \frac{1}{2\pi^2\Delta^2}\left(\frac{\pi^2}{6}-\text{Li}_2\left(e^{-2\pi\Delta}\right) \right) \\
&+ \frac{1}{\pi \Delta}\sum_{\substack{n = p^k \\ n < e^{2\pi\Delta}}} c_n\left(1 - \frac{\log n}{2\pi\Delta}\right)
\end{align*}
where
  • $\gamma$ ranges over the nontrivial zeros of the $L$-function attached to $E$
  • $\Delta$ is a positive parameter
  • $\eta$ is the Euler-Mascheroni constant $=0.5772\ldots$
  • $N$ is the conductor of $E$
  • $\text{Li}_2(x)$ is the dilogarithm function, defined by $\text{Li}_2(x) = \sum_{k=1}^{\infty} \frac{x^k}{k^2}$
  • $c_n$ is the $n$th coefficient of the logarithmic derivative of the $L$-function of $E$, which is zero when $n$ is not a perfect prime power.
The right hand side of the formula, which is what we actually compute, can be broken up into three parts: the first term involving the curve's conductor $N$; the second term involving the dilogarithm function $\text{Li}_2(x)$; and the sum over prime powers. The first two parts are quick to compute: evaluating them can basically be done in constant time regardless of the magnitude of $N$ or $\Delta$.
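As a quick illustration (in Sage, with arbitrary example values for $N$ and $\Delta$; this isn't a snippet from the actual code), those two terms are just a couple of arithmetic evaluations:

sage: N, Delta = 389, 2.0
sage: first_term = ((-euler_gamma + log(sqrt(N)/(2*pi))) / (pi*Delta)).n()
sage: second_term = ((pi^2/6 - dilog(exp(-2*pi*Delta))) / (2*pi^2*Delta^2)).n()
sage: first_term, second_term
(0.0901..., 0.0208...)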

It is therefore not worth considering parallelizing these two components, since the prime power sum dominates computation time for all but the smallest $\Delta$ values. Instead, what I've done is rewritten the zero sum code so that the prime power sum can be evaluated using multiple cores.

As mentioned in this post, we can turn this sum into one indexed by the primes (and not the prime powers); this actually makes parallelization quite straightforward. Recall that all primes except $2$ are odd, and all primes except $2$ and $3$ have remainder $1$ or $5$ when divided by $6$. One can scale this up: given a list of small primes $[p_1,p_2,\ldots,p_n]$, all other primes fall into one of a relatively small number of residue classes modulo $p_1 p_2\cdots p_n$. For example, all primes beyond $2$, $3$, $5$ and $7$ have one of the following 48 remainders when you divide them by $210 = 2\cdot 3\cdot 5 \cdot 7$:
\begin{align*}
&1, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79,\\
&83, 89, 97, 101, 103, 107, 109, 113, 121, 127, 131, 137, 139, 143, 149, \\
&151, 157, 163, 167, 169, 173, 179, 181, 187, 191, 193, 197, 199, 209,
\end{align*}
and no other.

If we had 48 processors available, the natural thing to do would be to get each of them to iterate over all integers in a particular residue class up to $e^{2\pi\Delta}$, evaluating the summand whenever that integer is prime, and returning the sum thereof. For example, if the bound was 1 million, then processor 1 would compute and return $\sum_{n \equiv 1 (\text{mod } 210)}^{1000000} c_n\left(1 - \frac{\log n}{2\pi\Delta}\right)$. Processor 2 would do the same with all integers that are $11$ modulo $210$, etcetera.

In reality, we have to figure out a) how many processors are available, and b) partition the work relatively equally among those processors. Thankfully sage.parallel.ncpus.ncpus() succinctly addresses the former, and the latter is achieved by splitting the residue classes into $n$ chunks of approximately equal size (where $n$ is the number of available CPUs) and then getting a given processor to evaluate the sum over those residues in a single chunk.

Here is the method I wrote that computes the $\text{sinc}^2$ zero sum with (the option of) multiple processors:
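The embedded listing of that method hasn't survived the formatting here, but purely as an illustration of the structure (with a stand-in summand instead of the real $c_n$ coefficients, and made-up function names), a residue-class-parallelized prime sum might look roughly like this:

from sage.all import parallel, gcd, is_prime
import sage.parallel.ncpus

NCPUS = sage.parallel.ncpus.ncpus()

def summand(p):
    # stand-in for the real term c_p*(1 - log(p)/(2*pi*Delta))
    return 1.0 / p

@parallel(ncpus=NCPUS)
def partial_sum(residues, bound):
    # sum summand(n) over the primes n < bound lying in the given residue classes mod 210
    y = 0
    for r in residues:
        for n in range(r, bound, 210):
            if n > 1 and is_prime(n):
                y += summand(n)
    return y

def parallel_prime_sum(bound):
    residues = [r for r in range(1, 210) if gcd(r, 210) == 1]   # the 48 classes
    chunks = [residues[i::NCPUS] for i in range(NCPUS)]          # one chunk per CPU
    total = sum(summand(p) for p in (2, 3, 5, 7) if p < bound)   # small primes done directly
    for (_, value) in partial_sum([(chunk, bound) for chunk in chunks]):
        total += value
    return total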

Note that I've defined a subfunction to compute the prime sum over a given subset of residue classes; this is the part that is parallelized. Obtaining the residue chunks and computing the actual summand at each prime are both farmed out to external methods.

Let's see some timings. The machine I'm running Sage on has 12 available processors:

sage: sage.parallel.ncpus.ncpus()
12
sage: E = EllipticCurve([12838,-51298])
sage: Z = LFunctionZeroSum(E)
sage: %time Z._zerosum_sincsquared_fast(Delta=2.6)
CPU times: user 36.7 s, sys: 62.9 ms, total: 36.8 s
Wall time: 36.8 s
2.8283629046
sage: %time Z._zerosum_sincsquared_parallel(Delta=2.6)
CPU times: user 7.87 ms, sys: 133 ms, total: 141 ms
Wall time: 4.06 s
2.8283629046

Same answer in a ninth of the time! Note that the parallelized method has some extra overhead, so even with 12 processors we're unlikely to get a full factor of 12 speedup. Nevertheless, the speed boost will allow us to run the zero sum code with larger $\Delta$ values, allowing us to investigate elliptic curves with even larger conductors.

by Simon Spicer (noreply@blogger.com) at August 14, 2014 01:26 PM

August 01, 2014

Vince Knight

A Sneak Preview of Game Theory in Sage (1/3): Cooperative Game Theory

+James Campbell and I spent a lot of time this Summer working on implementing some Game Theory into Sage.

In this post I’ll (very briefly) describe some of the process involved in contributing to Sage and give a sneak peek at some of the Game Theory code that will (hopefully) be in Sage soon.

This has been a really great experience as it was my first time really contributing to an open source project.

It involved a lot of coding, documentation writing and also being very thankful for all the help we got from the Sage community (a huge thanks to Karl).

It all starts with opening ‘tickets’ on the trac server. We opened 3 tickets:

  • 16331: Build capacity to solve matching games in to Sage.
  • 16332: Build capacity to calculate Shapley value of cooperative games.
  • 16333: Build class for normal form games as well as ability to obtain Nash equilibria

After doing that James and I quickly realised that we needed to have our own common repository for Game Theory development so that’s up on github here.

In this post I’ll talk briefly about some of the stuff that you will be able to do thanks to ticket 16332. This particular ticket has in fact been reviewed and given the all clear so should hopefully make its way in to a future release of Sage! (Which is very exciting indeed!).

If you would like to follow along with some of the stuff written here you’ll need to grab the 16332 branch from the github repository: https://github.com/theref/sage-game-theory/tree/16332 or go directly to the trac server and grab the 16332 ticket.

What is a cooperative game?

Here is the definition that I give my students:

A characteristic function game \(G\) is given by a pair \((N,v)\) where \(N\) is the number of players and \(v:2^{[N]}\to\mathbb{R}\) is a characteristic function which maps every coalition of players to a payoff.

Here is something else that I describe to my students:

Let’s assume that Alice, Bob and Celine all share a taxi. They all live in a straight line (with regards to the trajectory of the taxi) and the costs associated with their trip is Alice: £5, Bob: £20, Celine: £39.

What is the fairest way of sharing out the total taxi fare (which would be £39)?

From Alice’s point of view she needs to pay less than £5 (or there would be no point in her sharing the taxi). Similarly for Bob and Celine; however, we also want the amount paid by Alice and Bob together to be less than if they had shared a taxi without Celine, etc…

To solve our problem we can use cooperative game theory and in particular use a characteristic function game:

\[ v(c) = \begin{cases} 0 &\text{if } c = \emptyset, \\ 5 &\text{if } c = \{A\}, \\ 20 &\text{if } c = \{B\}, \\ 39 &\text{if } c = \{C\}, \\ 20 &\text{if } c = \{A,B\}, \\ 39 &\text{if } c = \{A,C\}, \\ 39 &\text{if } c = \{B,C\}, \\ 39 &\text{if } c = \{A,B,C\}. \\ \end{cases} \]

This function maps each coalition of players to a value (in particular to their taxi fare). So we see that if Alice and Bob shared a taxi without Celine then their taxi fare would be £20.

It can be shown (I won’t cover that here as I want to get to the Sage code) that the ‘fair’ way of sharing the cost of the taxi is called the Shapley value \(\phi\) which is a vector given by:

\[ \phi_i(G) = \frac{1}{N!} \sum_{\pi\in\Pi_n} \Delta_{\pi}^G(i) \]

where:

\[ \Delta_{\pi}^G(i) = v\bigl( S_{\pi}(i) \cup \{i\} \bigr) - v\bigl( S_{\pi}(i) \bigr) \]

where \(S_{\pi}(i)\) is the set of predecessors of \(i\) in some permutation of the players \(\pi\), i.e. \(S_{\pi}(i) = \{ j \mid \pi(i) > \pi(j) \}\).

I’ve got a video that describes all this if it’s helpful: https://www.youtube.com/watch?v=aThG4YAFErw. Here however I want to give a sneak preview of how to figure out what Alice, Bob and Celine should pay using a future release of Sage:

First of all we need to define the characteristic function:

sage: v = {(): 0,
....:      ('A',): 5,
....:      ('B',): 20,
....:      ('C',): 39,
....:      ('A', 'B'): 20,
....:      ('A', 'C'): 39,
....:      ('B', 'C'): 39,
....:      ('A', 'B', 'C'): 39}

As you can see we do this using a Python dictionary which allows us to map tuples (or indeed coalitions) to values (which is exactly what a characteristic function is).

We then create an instance of the CooperativeGame class (which is what James and I put together):

sage: taxi_game = CooperativeGame(v)

If you tab complete after typing taxi_game. you can see some of the methods and attributes associated with the CooperativeGame class:

sage: taxi_game.
taxi_game.category          taxi_game.dump              taxi_game.is_monotone       taxi_game.nullplayer        taxi_game.rename            taxi_game.shapley_value
taxi_game.ch_f              taxi_game.dumps             taxi_game.is_superadditive  taxi_game.number_players    taxi_game.reset_name        taxi_game.version
taxi_game.db                taxi_game.is_efficient      taxi_game.is_symmetric      taxi_game.player_list       taxi_game.save

I won’t go into much of the detail of those here, but you can get some help on any one of them by typing ? after it (below you can see some of the output):

sage: taxi_game.is_symmetric?
Type:       instancemethod
String Form:<bound method CooperativeGame.is_symmetric of A 3 player co-operative game>
File:       /Users/vince/sage/local/lib/python2.7/site-packages/sage/game_theory/cooperative_game.py
Definition: taxi_game.is_symmetric(self, payoff_vector)
Docstring:
   Return "True" if "payoff_vector" possesses the symmetry property.

   A payoff vector possesses the symmetry property if v(C cup i) =
   v(C cup j) for all C in 2^{Omega} setminus {i,j}, then x_i =
   x_j.

   INPUT:

   * "payoff_vector" -- a dictionary where the key is the player and
     the value is their payoff
...

But what we really want to know is how much should Alice, Bob and Celine pay the taxi?

To calculate this we simply get Sage to tell us the Shapley value:

sage: taxi_game.shapley_value()
{'A': 5/3, 'B': 55/6, 'C': 169/6}

This shows that in this particular case Alice should pay £1.67, Bob £9.17 and Celine £28.17 (rounding has obviously caused us to gain a penny along the way but you get the idea) :)
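As a sanity check, the same fractions drop out of a direct brute-force implementation of the permutation formula above; here's a quick plain-Python version (just an illustration, not the code that went into Sage):

from fractions import Fraction
from itertools import permutations
from math import factorial

# the taxi game's characteristic function, keyed by frozensets of players
v = {frozenset(): 0,
     frozenset('A'): 5, frozenset('B'): 20, frozenset('C'): 39,
     frozenset('AB'): 20, frozenset('AC'): 39, frozenset('BC'): 39,
     frozenset('ABC'): 39}

players = ['A', 'B', 'C']
shapley = {p: Fraction(0) for p in players}
for order in permutations(players):
    predecessors = frozenset()
    for p in order:
        # marginal contribution of p given its predecessors in this ordering
        shapley[p] += Fraction(v[predecessors | {p}] - v[predecessors])
        predecessors = predecessors | {p}
shapley = {p: total / factorial(len(players)) for p, total in shapley.items()}
print(shapley)  # Fractions equal to 5/3, 55/6 and 169/6 respectively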

This is the first in a series of 3 posts that I’ll get around to writing, in the next one I’ll cover ticket 16331 which takes care of matching games :)

To go back to the process of contributing to an open source project, I really think this is something everyone with any interests in code should have a go at doing as it has a large number of benefits. None more so than improving the standard of code that one writes. When you’re writing because you hope that someone will look at and review your code you make sure it’s well written (or at least try to!).

August 01, 2014 12:00 AM

July 31, 2014

William Stein

SageMathCloud -- history and status

2005: I made the first release of the SageMath software project, with the goal of creating a viable free open source alternative to Mathematica, Magma, Maple and Matlab.

2006: First web-based notebook interface for using Sage, called "sagenb". It was a cutting edge "AJAX" application at the time, though aimed at a small number of users.

2007-2009: Much work on sagenb. But it's still not scalable. Doesn't matter since we don't have that many users.

2011-: Sage becomes "self sustaining" from my point of view -- I have more time to work on other things, since the community has really stepped up.

2012: I'm inspired by the Simons Foundation's (and especially Jim Simons') "cluelessness" about open source software to create a new online scalable web application to (1) make it easier for people to get access to Sage, especially on Windows, and (2) generate a more long-term sustainable revenue stream to support Sage development. (I was invited to a day-long meeting in NYC at Simons' headquarters.)

2012-2013: Spent much of 2012 and early 2013 researching options, building prototypes, some time talking with Craig Citro and Robert Bradshaw (both at Google), and launched SageMathCloud in April 2013. SMC got some high-profile use, e.g., by UCLA's 400+ student calculus course.

2014: Much development over the last 1.5 years. Usage has also grown. There is some growth information here. I also have useful google analytics data from the whole time, which shows around 4000 unique users per week, with an average session duration of 97 minutes (see attached). Number of users has actually dropped off during the summer, since there is much less teaching going on.

SMC itself is written mostly in CoffeeScript using Node.js on the backend. There's a small amount of Python as well.

It's a highly distributed multi-data center application. The database is Cassandra. The backend server processes are mostly Node.js processes, and also nginx+haproxy+stunnel.

A copy of user data is stored in every data center, and is snapshotted every few minutes, both via:
  • ZFS -- for rolling snapshots that vanish after a month -- and via
  • bup -- for snapshots that remain forever, and are consistent across data centers.
These snapshots are critical for making it possible to trust collaborators on projects to not (accidentally) destroy your work. It is not possible for users to delete the bup snapshots, by design.
Here's what it does: realtime collaborative editing of LaTeX docs, IPython notebooks and Sage worksheets; use of the command line terminal; and having several people collaborate on a project (= a Linux account).
The main applications seem to be:
  • teaching courses with a programming or math software component -- where you want all your students to be able to use something, e.g., IPython, Julia, etc, and don't want to have to deal with trying to get them to install said software themselves. Also, you want to easily be able to share files with students, see their work in realtime, etc. It's much, much easier for people to get going than with naked VMs they have to configure -- and also I provide cross-data center replication.
  • collaborative research mathematics -- all co-authors of a paper work together in an SMC project, both writing the paper there and doing computations.
Active development work right now:
  • course management for homework (etc)
  • administration functionality (mainly motivated by self-hosting and better moderation)
  • easy history slider to see all pasts states of a document
  • switching from bootstrap2 to bootstrap3.

by William Stein (noreply@blogger.com) at July 31, 2014 11:17 PM

Simon Spicer

The average rank of elliptic curves

It's about time I should demonstrate the utility of the code I've written - the aim of the game for my GSoC project, after all, is to provide a new suite of tools with which to conduct mathematical research.

First some background. Given an elliptic curve $E$ specified by equation $y^2 = x^3 + Ax + B$ for integers $A$ and $B$, one of the questions we can ask is: what is the average rank of $E$ as $A$ and $B$ are allowed to vary? Because there are an infinite number of choices of $A$ and $B$, we need to formulate this question a bit more carefully. To this end, let us define the height of $E$ to be
$$ h(E) = \max\{|A|^3,|B|^2\} $$
[Aside: The height essentially measures the size of the coefficients of $E$ and is thus a fairly decent measure of the arithmetic complexity of the curve. We need the 3rd and 2nd powers in order to make the height function scale appropriately with the curve's discriminant.]
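To make the ordering concrete, here is a small Sage sketch (not the actual database-generation code used in the project) that enumerates the curves of height at most $X$:

from sage.all import EllipticCurve

def curves_up_to_height(X):
    """Yield the curves y^2 = x^3 + A*x + B with max(|A|^3, |B|^2) <= X."""
    A_max, B_max = 0, 0
    while (A_max + 1) ** 3 <= X:   # largest |A| with |A|^3 <= X
        A_max += 1
    while (B_max + 1) ** 2 <= X:   # largest |B| with |B|^2 <= X
        B_max += 1
    for A in range(-A_max, A_max + 1):
        for B in range(-B_max, B_max + 1):
            if 4 * A**3 + 27 * B**2 != 0:   # skip singular equations
                yield EllipticCurve([A, B])

# This enumerates the (A, B) pairs themselves; for X = 10000 it should
# recover the 8638 curves tabulated below.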

We can then ask: what is the limiting value of the average rank of all curves up to height $X$, as $X$ gets bigger and bigger? Is it infinity? Is it 0? Is it some non-trivial positive value? Does the limit even exist? It's possible that the average rank, as a function of $X$, could oscillate about and never stabilize.

There are strong heuristic arguments suggesting that the answer should be exactly $\frac{1}{2}$. Specifically, as the height bound $X$ gets very large, half of all curves should have rank 0, half should have rank 1, and a negligible proportion should have rank 2 or greater.

Even as recently as five years ago we knew nothing concrete unconditionally about average curve rank. There are results by Brumer, Heath-Brown and Young providing successively better upper bounds on the average rank of curves ordered by height (2.3, 2 and $\frac{25}{14}$ respectively), but these results are contingent on the Riemann Hypothesis.

However, starting in 2011 Manjul Bhargava, together with Christopher Skinner and Arul Shankar, published a series of landmark papers (see here for a good expository slideshow, and here and here for two of the latest publications) proving unconditionally that average rank - that is, the limiting value of the average rank of all elliptic curves up to height $X$ - is bounded above by 0.885. A consequence of their work too is that at least 66% of all elliptic curves satisfy the rank part of the Birch and Swinnerton-Dyer Conjecture.

To a computationally-minded number theorist, the obvious question to ask is: Does the data support these results? I am by no means the first person to ask this question. Extensive databases of elliptic curves under various orderings have already been compiled, most notably those by Cremona (ordered by conductor) and Stein-Watkins (ordered essentially by discriminant). However, as yet no extensive tabulation of height-ordered elliptic curves has been carried out.

Here is a summarized table of elliptic curves with height at most 10000 - a total of 8638 curves, and the ranks thereof (all computations done in Sage, of course):

Rank      # Curves    %
0         2988        34.59%
1         4307        49.86%
2         1286        14.89%
3         57          0.66%
$\ge$4    0           0.00%
Total     8638

Thus the average rank of elliptic curves is 0.816 when the height bound is 10000. This is worrisome: the average is significantly different from the value of 0.5 we're hoping to see.

The situation gets even worse when we go up to height bound 100000:

Rank      # Curves    %
0         19492       33.11%
1         28818       48.96%
2         9747        16.56%
3         801         1.36%
$\ge$4    4           0.01%
Total     58862

This yields an average rank of 0.862 for height bound 100000. Bizarrely, the average rank is getting bigger, not smaller!

[Note: the fact that 0.862 is close to Bhargava's asymptotic bound of 0.885 is coincidental. Run the numbers for height 1 million, for example, and you get an average rank of 0.8854, which is bigger than the above asymptotic bound. Observationally, we see the average rank continue to increase as we push out to even larger height bounds beyond this.]

So what's the issue here? It turns out that a lot of the asymptotic statements we can make about elliptic curves are precisely that: asymptotic, and we don't yet have a good understanding of the associated rates of convergence. Elliptic curves, ornery beasts that they are, can seem quite different from their limiting behaviour when one only considers curves with small coefficients. We expect (hope?) that the average will eventually turn around and start to decrease back down to 0.5, but the exact point at which that happens is as yet unknown.

This is where I come in. One of the projects I've been working on (with Wei Ho, Jen Balakrishnan, Jamie Weigandt, Nathan Kaplan and William Stein) is to compute the average rank of elliptic curves for as large a height bound as possible, in the hope that we will get results a bit more reassuring than the above. The main steps of the project are thus:

  1. Generate an ordered-by-height database of all elliptic curves up to some very large  height bound (currently 100 million; about 18.5 million curves);
  2. Use every trick in the book to compute the ranks of said elliptic curves;
  3. Compute the average of said ranks.
Steps 1 and 3 are easy. Step 2 is not. Determining the rank of an elliptic curve is a notoriously hard problem - no unconditional algorithm with known complexity currently exists - especially when you want to do it for millions of curves in a reasonable amount of time. Sage, for example, already has a rank() method attached to its EllipticCurve class; if you pass the right parameters to it, the method will utilize an array of approaches to get a value out that is (assuming standard conjectures) the curve's rank. However, its runtime for curves of height near 100 million is on the order of 20 seconds; set it loose on 18.5 million such curves and you're looking at a total computation time of about 10 CPU years.

Enter GSoC project stage left. At the expense of assuming the Generalized Riemann Hypothesis and the Birch and Swinnerton-Dyer Conjecture, we can use the zero sum rank bounding code I've been working on to quickly compute concrete upper bounds on an elliptic curve's rank. This approach has a couple of major advantages to it:
  • It's fast. In the time it's taken me to write this post, I've computed rank bounds on 2.5 million curves.
  • Runtime is essentially constant for any curve in the database; we don't have to worry about how the method scales with height or conductor. If we want to go up to larger height bounds at a later date, no problem.
As always, some Terms and Conditions apply. The rank bounding code only gives you an upper bound on the rank: if, for example, you run the code on a curve and get the number 4 back, there's no way to determine with this method if the curve's rank is 4, or if it is really some non-negative integer less than 4. 

Note: there is an invariant attached to an elliptic curve called the root number which is efficiently computable, even for curves with large conductor (it took less than 20 minutes to compute the root number for all 18.5 million curves in our database). The root number is one of two values: -1 or +1; if it's -1 the curve has odd analytic rank, and if it's +1 the curve has even analytic rank. Assuming BSD we can therefore always easily tell if the curve's rank is even or odd. My GSoC rank estimation code takes the root number into account, so in the example above, a returned value of 4 tells us that the curve's true rank is one of three values: 0, 2 or 4.
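For example, here's a quick check in Sage on two well-known small-conductor curves (chosen purely as illustrations):

sage: EllipticCurve('37a1').root_number()    # a rank 1 curve, so odd analytic rank
-1
sage: EllipticCurve('389a1').root_number()   # a rank 2 curve, so even analytic rank
1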

Even better, if the returned value is 0 or 1, we know this must be the actual algebraic rank of the curve: firstly, there's no ambiguity as to what the analytic rank is - it has to be the returned 0 or 1; secondly, the BSD conjecture has been proven in the rank 0 & 1 cases. Thus even though we are a priori only computing analytic rank upper bounds, for some proportion of curves we've found the actual algebraic rank.

[Note that the rank bounding code I've written is predicated on knowing that all nontrivial zeros of an elliptic curve $L$-function lie on the critical line, so we still have to assume the Generalized Riemann Hypothesis.]

Thus running the rank bound code on the entire database of curves is a very sensible first step, and it's what I'm currently doing. It's computationally cheap to do - on SageMathCloud, using a Delta value of 1.0, the runtime for a single curve is about 4 milliseconds. Moreover, for some non-negligible percentage of curves the bounds will be observably sharp - based on some trial runs I'm expecting about 20-30% of the computed bounds to be 0 or 1.

That's about 4 million curves for which we won't have to resort to much more expensive rank finding methods. Huge savings!

by Simon Spicer (noreply@blogger.com) at July 31, 2014 08:37 AM

July 27, 2014

Vince Knight

Game Theory and Pavement Etiquette

Last week the BBC published an article entitled: ‘Advice for foreigners on how Britons walk’. The piece basically discusses the fact that in Britain there doesn’t seem to be any etiquette over which side of the pavement one walks on:

The British have little sense of pavement etiquette, preferring a slalom approach to pedestrian progress. When two strangers approach each other, it often results in the performance of a little gavotte as they double-guess in which direction the other will turn.

Telling people how to walk is simply not British.

But on the street? No, we don’t walk on the left or the right. We are British and wander where we will.

I thought this was a really nice piece and more importantly it is very closely linked to an exercise in game theoretical modelling I’ve used in class.

Let’s consider two people walking along a street. We’ll call one of them Alexandre and the other one Bernard.

Alexandre and Bernard have two options available to them. In game theory we call these strategies and write: \(S=\{L,R\}\) where \(L\) denotes walk on the left and \(R\) denotes walk on the right.

We imagine that Alexandre and Bernard close their eyes and simply walk towards each other, making a choice from \(S\). To analyse the outcome of these choices we’ll attribute a value of \(1\) to someone who doesn’t bump into the other person and \(-1\) if they do.

Thus we write:

\[u_{A}(L,L)=u_{B}(L,L)=u_{A}(R,R)=u_{B}(R,R)=1\]

and

\[u_{A}(L,R)=u_{B}(L,R)=u_{A}(R,L)=u_{B}(R,L)=-1\]

We usually represent this situation using two matrices, one showing the utility of each player:

\[ A = \begin{pmatrix} 1&-1\\
-1&1 \end{pmatrix} \] \[ B = \begin{pmatrix} 1&-1\\
-1&1 \end{pmatrix} \]

From these matrices it is easy to read off the outcome of any strategy pair. If Alexandre plays \(L\) and Bernard plays \(L\) then they both get a utility of \(1\). If both are at that strategy pair then neither has a reason to ‘deviate’ from their strategy: this is called a Nash Equilibrium.

Of course though (as alluded to in the BBC article), some people might not always do the same thing. Perhaps Bernard would randomly choose from \(S\), in which case it makes sense to refer to what Bernard does as the mixed strategy \(\sigma_{B}=(x,1-x)\) for \(0\leq x\leq 1\).

If we know that Alexandre is playing \(L\) then Bernard’s utility becomes:

\[u_{B}(L,\sigma)=x-(1-x)=2x-1\]

Similarly:

\[u_{B}(R,\sigma)=-x+(1-x)=1-2x\]

Here is a plot of both these utilities:

With a little bit of work that I’ll omit from this post we can show that there exists another Nash equilibrium which is when both Alexandre and Bernard play \(\sigma=(1/2,1/2)\). At this equilibrium the utility to both players is in fact \(u_{A}(\sigma,\sigma)=u_{B}(\sigma,\sigma)=0\).

This Nash equilibrium is in fact much worse than the other Nash equilibria. In a situation with central control (which, if you recall the BBC article, is not something that happens on British pavements) we would be operating with either everyone walking on the left or everyone walking on the right, so that the utility would be 1. As this is also a Nash equilibrium, there would be no reason for anyone to change. Sadly it is possible that we get stuck at the other Nash equilibrium, where everyone randomly walks on the right or the left (again: at this point no one has an incentive to move). This idea of comparing the worst case Nash equilibrium to the best possible outcome is referred to as the Price of Anarchy and has a lot to do with my personal research. If it is of interest here is a short video on the subject and here’s a publication that looked at the effect of selfish behaviour in public service systems (sadly that is behind a paywall but if anyone would like to read it please do get in touch).

There are some major assumptions being made in all of the above. In particular, two people walking with their eyes closed towards each other is probably not the best way to think of this. In fact, as all the people on the pavements of Britain constantly walk around, you’d expect them to learn and evolve how they decide to walk.

This in fact fits into a fascinating area of game theory called evolutionary game theory. The main idea is to consider multiple agents randomly ‘bumping into each other’ and playing the game above.

Below are two plots showing a simulation of this (using Python code I describe in this video). Here is the script that makes use of this small agent based simulation script:

import Abm
number_of_agents = 1000  # Size of the population
generations = 100  # Number of generations of players
rounds_per_generation = 5  # How many times everyone from a given generation will `play`
death_rate = .1  # How many weak players we get rid of
mutation_rate = .2  # The chance of a player doing something new
row_matrix = [[1, -1], [-1, 1]]  # The utilities
col_matrix = row_matrix
initial_distribution = [{0: 100, 1: 0}, {0:100, 1:0}]  # The initial distribution which in this case has everyone walking on the left

g = Abm.ABM(number_of_agents, generations, rounds_per_generation, death_rate, mutation_rate, row_matrix, col_matrix, initial_distribution)
g.simulate(plot=True)

We see that if we start with everyone walking on the left side of the pavement then things are pretty stable (using a little bit of algebra this can be shown to be a so-called ‘evolutionarily stable strategy’):

However, if we start with everyone playing the worst possible Nash equilibrium (randomly choosing a side of the pavement) then we see that this is not stable, and in fact we slowly converge towards the population picking a side of the pavement (in other words this mixed equilibrium is not an evolutionarily stable strategy):

So perhaps there is a chance yet for the British to automagically choose a side of the pavement…

July 27, 2014 12:00 AM

July 21, 2014

Simon Spicer

How to efficiently enumerate prime numbers

There's a part in my code that requires me to evaluate a certain sum
$$ \sum_{p\text{ prime }< \,M} h_E(p) $$
where $h_E$ is a function related to a specified elliptic curve that can be evaluated efficiently, and $M$ is a given bound that I know. That is, I need to evaluate the function h_E at all the prime numbers less than $M$, and then add all those values up.

The question I hope to address in this post is: how can we do this efficiently as $M$ gets bigger and bigger? Specifically, what is the best way to compute a sum over all prime numbers up to a given bound when that bound can be very large?

[For those who have read my previous posts (you can skip this paragraph if you haven't - it's not the main thrust of this post), what I want to compute is, for an elliptic curve $E/\mathbb{Q}$, the analytic rank bounding sum $ \sum_{\gamma} \text{sinc}^2(\Delta \gamma) $ over the zeros of $L_E(s)$ for positive parameter $\Delta$; this requires us to evaluate the sum $ \sum_{n < \exp(2\pi\Delta)} c_n\cdot(2\pi\Delta-\log n)$. Here the $c_n$  are the logarithmic derivative coefficients of the completed $L$-function of $E$. Crucially $c_n = 0$ whenever $n$ isn't a prime power, and we can lump together all the terms coming from the same prime; we can therefore express the sum in the form you see in the first paragraph.]

As with so many things in mathematical programming, there is a simple but inefficient way to do this, and then there are more complicated and ugly ways that will be much faster. And as has been the case with other aspects of my code, I've initially gone with the first option to make sure that my code is mathematically correct, and then gone back later and reworked the relevant methods in an attempt to speed things up.

METHOD 1: SUCCINCT BUT STUPID


Here's a Python function that will evaluate the sum over primes. The function takes two inputs: a function $h_E$ and an integer $M$, and returns a value equal to the sum of $h_E(p)$ for all primes less than $M$. We're assuming here that the primality testing function is_prime() is predefined.
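The embedded listing hasn't survived the formatting here; a reconstruction consistent with the description (and with the prime_sum_v1 name used in the timings further down) is:

def prime_sum_v1(h_E, M):
    y = 0
    for n in range(2, M):
        if is_prime(n):
            y += h_E(n)
    return y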


As you can see, we can achieve the desired outcome in a whopping six lines of code. Nothing mysterious going on here: we simply iterate over all integers less than our bound and test each one for primality; if that integer is prime, then we evaluate the function h_E at that integer and add the result to y. The variable y is then returned at the end.

Why is this a bad way to evaluate the sum? Because there are far more composite integers than there are primes. According to the prime number theorem, the proportion of integers up to $M$ that are prime is approximately $\frac{1}{\log M}$. For my code I want to compute with bounds on the order of $M = e^{8\pi} \sim 10^{11}$; the proportion of integers up to this bound that are prime is correspondingly about $\frac{1}{8\pi} \sim 0.04$. That is, 96% of the integers we iterate over aren't prime, and the work spent testing them is simply thrown away.

Just how inefficient this method is of course depends on how quickly we can evaluate the primality testing function is_prime(). The best known deterministic primality testing algorithm has running time that scales with (at most) the 6th power of $\log n$, where $n$ is the number being tested. This places primality testing in the class of polynomial-time algorithms, which means the runtime of the function scales relatively well with the size of the input. However, what kills us here is the sheer number of times we have to call is_prime() - on all integers up to our bound $M$ - so even if it ran in constant time the prime_sum() function's running time would still scale with the magnitude of $M$.

METHOD 2: SKIP THOSE $n$ WE KNOW ARE COMPOSITE


We can speed things up considerably by noting that apart from 2, all prime numbers are odd. We are therefore wasting a huge amount of time running primality tests on integers that we know a priori are composite. Assuming is_prime() takes a similar time to execute as our coefficient function h_E(), we could therefore roughly halve the runtime of the prime sum function by skipping the even integers and just checking odd numbers for primality.

We can go further. Apart from 2 and 3, all primes yield a remainder of 1 or 5 when you divide them by 6 (because all primes except for 2 are 1 (modulo 2) and all primes except for 3 are 1 or 2 (modulo 3)). We can therefore skip all integers that are 0, 2, 3 or 4 modulo 6; this means we only have to check for primality on only one third of all the integers less than $M$.

Here's a second version of the prime_sum() function that does this:
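Again the original listing is missing; a version along the lines described (matching prime_sum_v2 in the timings below) might look like:

def prime_sum_v2(h_E, M):
    # handle the primes 2 and 3 separately
    y = h_E(2) if M > 2 else 0
    if M > 3:
        y += h_E(3)
    # every other prime is congruent to 1 or 5 modulo 6
    n = 5
    while n < M:
        if is_prime(n):
            y += h_E(n)
        if n + 2 < M and is_prime(n + 2):
            y += h_E(n + 2)
        n += 6
    return y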


Of course we could go even further with the technique by looking at remainders modulo $p$ for more primes $p$ and combining the results: for example, all primes outside of 2, 3 and 5 can only have a remainder of 7, 11, 13, 17, 19, 23 or 29 modulo 30. However, the further you go the more special cases you need to consider, and the uglier your code becomes; as you can see, just looking at cases modulo 6 requires us to write a function about three times as long as previously. This method therefore will only be able to take us so far before the code we'd need to write would become too unwieldy for practicality.

METHOD 3: PRIME SIEVING...


This second prime_sum() version is a rudimentary example of a technique called prime sieving. The idea is to use quick computations to eliminate a large percentage of integers from consideration in a way that doesn't involve direct primality testing, since this is computationally expensive. Sieving techniques are an entire field of research in their own right, so I thought I'd just give as an example one of the most famous methods: the Sieve of Eratosthenes (named after the ancient Greek mathematician who is thought to have first come up with the idea). This takes as input a positive bound $M$ and returns a list of all prime numbers less than $M$. The method goes as follows:
  1. Start with a list of boolean flags indexed by the numbers 2 through $M$, and set all of them to True. 
  2. Let $i$ be the index of the first True entry that hasn't yet been processed. Set the entries at all indices that are multiples of $i$ (other than $i$ itself) to False.
  3. Repeat step 2 until the next unprocessed True entry is at an index $> \sqrt{M}$.
  4. Return a list of all integers $i$ such that the entry at index $i$ is True.
This is definitely a case where a (moving) picture is worth a thousand words:
A good graphic representation of the Sieve of Eratosthenes being used to generate all primes less than 121. Courtesy Wikipedia: "Sieve of Eratosthenes animation". Licensed under CC BY-SA 3.0 via Wikimedia Commons
How and why does this work? By induction, the first unprocessed True entry at each iteration always sits at a prime index. However any multiple thereof is by definition composite, so we can walk along the list and flag those entries as not prime. Wash, rinse, repeat. We can stop at $\sqrt{M}$, since all composite numbers at most $M$ in magnitude must have at least one prime factor at most $\sqrt{M}$ in size.

Here is a third version of our prime_sum() function that utilizes the Sieve of Eratosthenes:
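Once more the listing is missing; a reconstruction of the sieve-based version along the lines just described (matching prime_sum_v3 below) is:

def prime_sum_v3(h_E, M):
    # Sieve of Eratosthenes: sieve[n] ends up True exactly when n is prime
    sieve = [True] * M
    sieve[0] = sieve[1] = False
    for i in range(2, int(sqrt(M)) + 1):
        if sieve[i]:
            for j in range(i * i, M, i):
                sieve[j] = False
    return sum(h_E(n) for n in range(2, M) if sieve[n])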

Let's see how the three versions stack up against each other time-wise in the Sage terminal. I've saved the three functions in a file called prime_sum_functions.py, which I then import up front (if you want to do the same yourself, you'll need to import or define appropriate is_prime() and sqrt() functions at the top of the file). I've also defined a sample toy function h_E() and bound M:

sage: from prime_sum_functions import *
sage: def h_E(n): return sin(float(n))/float(n)
sage: M = 10000
sage: prime_sum_v1(h_E,M)
0.19365326958140347
sage: prime_sum_v2(h_E,M)
0.19365326958140347
sage: prime_sum_v3(h_E,M)
0.19365326958140347
sage: %timeit prime_sum_v1(h_E,M)
1 loops, best of 3: 363 ms per loop
sage: %timeit prime_sum_v2(h_E,M)
1 loops, best of 3: 206 ms per loop
sage: %timeit prime_sum_v3(h_E,M)
10 loops, best of 3: 86.8 ms per loop

Good news! All three functions (thankfully) produce the same result. And we see version 2 is about 1.8 times faster than version 1, while version 3 is four times as fast. These ratios remained roughly the same when I timed the functions on larger bounds, which indicates that the three versions have the same or similar asymptotic scaling - this should be expected, since no matter what we do we will always have to check something at each integer up to the bound.

METHOD 4: ...AND BEYOND


It should be noted, however, that the Sieve of Eratosthenes as implemented above would be a terrible choice for my GSoC code. This is because in order to enumerate the primes up to $M$ we need to create a list in memory of size $M$. This isn't an issue when $M$ is small, but for my code I need $M \sim 10^{11}$; an array of booleans that size would take up about 12 gigabytes in memory, and any speedups we get from not having to check for primality would be completely obliterated by read/write slowdowns due to working with an array that size. In other words, while the Sieve of Eratosthenes has great time complexity, it has abysmal space complexity.

Thankfully, more memory-efficient sieving methods exist that drastically cut down the space requirements. The best of these - for example, the Sieve of Atkin - need about $\sqrt{M}$ space. For $M \sim 10^{11}$ this translates to only about 40 kilobytes; much more manageable.

Of course, there's always a downside: bleeding edge prime enumeration methods are finicky and intricate, and there are a plethora of ways to get it wrong when implementing them. At some point squeezing an extra epsilon of speedup from your code is no longer worth it in terms of the time and effort it will take to get there. For now, I've implemented a more optimized version of the second prime_sum() function in my code (where we skip over all integers that are obviously not prime), since that is my happy middle ground. If I have time at the end of the project I will revisit the issue of efficient prime enumeration and try to implement a more optimized sieving method, but that is a tomorrow problem.

by Simon Spicer (noreply@blogger.com) at July 21, 2014 04:00 PM

July 19, 2014

Vince Knight

Using Github pages and Python to distribute my conference talks

I’m very conscious about wanting to share as much of what I do as a researcher/instructor in as easy a way as possible. One example of this is the slides/decks that I use for talks I give. 3 or 4 years ago I was doing this with a Dropbox link. More recently this has led to me putting everything on github, but this wasn’t ideal as I’ve started to accumulate a large number of repositories for the various talks I give.

One of the great things about using github though is that for html presentations (reveal.js is the one I’ve used a couple of times), you can use the github deployment branch gh-pages to serve the files directly. A while back I put a video together showing how to do this:

So there are positives and negatives. After moving to a jekyll framework for my blog I started thinking of a way of getting all the positives without any negatives…

  • I want to use a git+github workflow;
  • I don’t want to have a bunch of separate repositories anymore;
  • I want to be able to just write my talks and they automatically appear online.

My immediate thought was to just use jekyll but as far as I can tell I’d have to hack it a bit to get blog posts to be actual .pdf files (for my beamer slides) (please tell me if I’m wrong!). I could of course write a short blog post with a link to the file, but this was one extra step on top of just writing my talk that I didn’t want to have to do. Instead of hacking jekyll a bit I decided to write a very simple Python script. You can see it here but I’ll use this blog post to just walk through how it works.

I have a Talks repository:

~$ cd Talks
Talks$ ls

2014-07-07-Using-a-flipped-classroom-in-a-large-programming-course-for-mathematicians
2014-07-09-Embedding-entrepreneurial-learning-through-the-teaching-of-programming-in-a-large-flipped-classroom
2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions
README.md
favicon.ico
footer.html
head.html
header.html
index.html
main.css
reveal.js
serve.py

What you can see in there are 3 directories (each starting with a date) and various other files. In each of the talk directories I just have normal files for those talks:

Talks$ cd 2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions/
...$ ls

2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions.nav    2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions.snm
2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions.pdf    2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions.tex

I can work on those slides just as I normally would. Once I’m ready I go back to the root of my Talks directory and run the serve.py script:

Talks$ python serve.py

This script automatically goes through my sub-directories, reading the date from the directory names and identifying .html or .pdf files as talks. It then creates the index.html file, which is an index of all my talks (sorted by date) with a link to the right file. To get the site online you simply need to push it to the gh-pages branch of your github repository.

You can see the site here: drvinceknight.github.io/Talks. Clicking on relevant links brings up the live version of my talk, or at least as live as my latest push to the github gh-pages branch.

The python script itself is just:

 1 #!/usr/bin/env python
 2 """
 3 Script to index the talks in this repository and create an index.html file.
 4 """
 5 import os
 6 import glob
 7 import re
 8 
 9 root = "./"
10 directories = sorted([name for name in os.listdir(root) if os.path.isdir(os.path.join(root, name))], reverse=True)
11 talk_formats = ['.pdf', '.html']
12 
13 
14 index = open('index.html', 'w')
15 index.write(open('head.html', 'r').read())
16 index.write(open('header.html', 'r').read())
17 
18 index.write("""
19             <body>
20             <div class="page-content">
21             <div class="wrap">
22             <div class="home">
23             <ul class='posts'>""")
24 
25 for dir in directories:
26     if dir not in ['.git', 'reveal.js']:
27         talk = [f for f in glob.glob(root + dir + "/" + dir[:10] + '*') if  os.path.splitext(f)[1] in talk_formats][0]
28         date = talk[len(root+dir) + 1: len(root + dir) + 11]
29         title, extension =  os.path.splitext(talk[len(root+dir)+11:].replace("-", " "))
30         index.write("""
31                     <li>
32                     <span class="post-date">%s [%s]</span>
33                     <a class="post-link" href="%s">%s</a>
34                     <p>
35                     <a href="%s">(Source files)</a>
36                     </p>
37                     </li>
38                     """ % (date, extension[1:], talk, title, 'https://github.com/drvinceknight/Talks/tree/gh-pages/' + dir ))
39 index.write("""
40             </ul>
41             </div>
42             </div>
43             </div>
44             """)
45 index.write(open('footer.html', 'r').read())
46 index.write("</body>")
47 index.close()

The main lines that do anything are lines 25-38, everything else just reads in the relevant header and footer files.

So now getting my talks written and online is hardly an effort at all:

# Write awesome talk
Talks$ git commit -am 'Finished talk on proof of finite number of primes'  # This I would do anyway
Talks$ python serve.py  # This is the 1 extra thing I need to do
Talks$ git push origin  # This I would do anyway

There are various things that could be done to improve this (including pushing via serve.py as an option) and I’m not completely convinced I can’t just use jekyll for this, but it was quicker to write that script than to figure it out (or at least that was my conclusion after googling twice).

If anyone has any fixes/improvements (including: “You idiot just run jekyll with the build-academic-conference-talk-site flag”) that’d be super appreciated and if you want to see the Talk repository (with python script, css files, header.html etc…) it’s here: github.com/drvinceknight/Talks.

Now to finish writing my talk for ORAHS2014…

July 19, 2014 12:00 AM

July 15, 2014

Vince Knight

Three Days: Two Higher Ed Conferences

Last week I took part in two conferences on the subject of higher education and so here’s a blog post with some thoughts and reflections.

Monday and Tuesday: ‘Workshop on Innovations in University Mathematics Teaching’

The first conference was organised by +Paul Harper, Rob Wilson and myself. The conference website can be found here. The main subject of this was active learning pedagogic techniques, in particular:

  • The flipped classroom;
  • Inquiry Based Learning (IBL).

The plan for these two days included an almost full day of talks on the Monday and an interactive IBL session on the Tuesday.

Here’s a couple of snippets from each session:

  • Robert Talbert gave the opening talk describing the flipped learning environment. You can see his slides here.



    I was quite nervous about Robert’s talk as he’s an expert in flipping classrooms and I was scheduled to talk after him. He gave a great talk (in fairness every single talk on the day was awesome) and here are a couple of things I noted down:

    A flipped classroom does not imply a flipped learning environment!

    A traditional classroom encourages the dependency of a student on the instructor.

    Flipped learning is not just videos out of class and homework in class.

    My favourite:

    The most important part of the flipped classroom is not what happens outside of the classroom (videos etc…) but what happens inside the classroom.

  • I spoke next and you can find my slides here.



    I mainly spoke about the programming course that I teach using a flipped class to our first years. I didn’t want to go into any details about what a flipped learning environment is, as I would most certainly not have been able to do it justice after Robert’s talk, so I just gave an exemplar of it in practice. I might blog about the particular approach I used another time.

  • Toby Bailey then gave an excellent talk about the flipped classroom / peer instruction that he has in place for a large class.



    One of the highlights was certainly a video of his class in which we saw students respond to a question via in-class clickers, then break into groups of 2 to discuss the particular problem, and finally answer the question one more time. Responses were then put up for all to see and it was great to see that students were indeed improving (you could see the distributions of clicker answers improve after the peer instruction).

    Here are a couple of other things I noted down during the talk:

    It’s not all about the lecturer.

    The importance of getting out of the way.

    Tell the class why you are doing it.

  • Stephen Rutherford then spoke about his flipped classroom in biosciences.



    This was a great talk and it was very neat to have a non mathematical point of view. My first highlight from Steve’s talk can be seen in the photo above. I think that that fundamental question (‘why am I better than a book’) could in fact help improve the instruction of many.

    A flipped classroom allows some control to be put in the hands of the students.

    The reason students are at university is to get an education and not necessarily a degree.

  • We then moved on to a relaxed panel discussion about the flipped classroom; one of the big highlights of that, I think, was the importance of involving students in the pedagogic reasoning behind whatever approach is used in a class.

  • The final ‘talk’ of the day was by Chris Sangwin who talked about his Moore Method class.



    This was a fascinating talk as Chris clearly described the success he has had with implementing a Moore Method class.

    In particular he highlighted the importance of the role of the instructor in this framework, where students are given a set of problems to work through and present to their peers (there is no lecturing in a Moore method class).

    Some highlights:

    In 2007, after his class finished students found the book from which his problems originated and continued to work through them on their own.

    In 2008, students set up a reading group and started to read complex mathematical topics.

  • The rest of the conference was a natural continuation from Chris’s talk as Dana Ernst and TJ Hitchman spoke about Inquiry Based Learning (a pedagogic term that encompasses the Moore method - I’m sure someone can correct me if I got that subtlety wrong).




    This was a really great interactive session that ran over to Tuesday. There was far too much that happened in this session and it was hard to take notes as we were very much involved, but here are some of the things that stuck with me.

    1. TJ ran a great first little session that basically got us to think about what we want to be as educators. One of the main things that came out of the ‘what do you want your students to remember in 20 years time’ question was that very few of us (I’m not sure if anyone did) mentioned the actual content of the courses we teach.
    2. The importance of creating a safe environment in which students can fail (in order to learn). Productive failure.
    3. The various difficulties associated with implementing an IBL approach due to class size (this was a recurring theme with regards to UK vs US class sizes).

    Another important point was the criteria that defines an IBL approach:

    Students are in charge of not only generating the content but also critiquing the content.

    You can find all of Dana and TJ’s content on their github repository.

After this session I enjoyed a good chat with TJ who helped me figure out how to make my R/SAS course better. After that my project student who will be working with me to evaluate my flipped classroom had a great talk with Robert, TJ and Dana who gave some really helpful advice. One of the highlights that came out of this was Robert putting very simply what I believe defines an effective pedagogic approach. Hopefully Robert will either correct me or forgive me for paraphrasing (EDIT: He has corrected me in the comments):

Whatever the approach: flipped classroom, IBL, interpretive dance, as long as the system allows you to empower your students and monitor how they are learning it is worth doing.

I’m probably forgetting quite a few details about the workshop (including the 6+ course conference dinner which was pretty awesome). Now to describe the next conference, which was the Cardiff University Annual Learning and Teaching Conference.

Wednesday: Cardiff University Annual Learning and Teaching Conference

This was my first time attending this conference and I was lucky enough to have my abstract accepted so I was able to give a talk.

You can find my slides here.

In all honesty I was kind of tired so I didn’t take as detailed notes as I would like and/or as many photos but here are some highlights:

  • I enjoyed Stephen Rutherford discussing the plans of the Biosciences school to bring in peer assessment:

    Assessment for learning and not of learning

  • I liked Anne Cunningham reminding everyone, when talking about their use of scoop.it, that students obtain content from a variety of sources:

    The curators are not just the staff. The prime curators are the students.

  • Rob Wilson and Nathan Roberts gave an overview of the tutoring system that we as a university will be heading towards.

A great three days

This brings my attendance at education conferences to a total of 3, and I must say that I can’t wait to go to the next one (incidentally my abstract for the CETL-MSOR conference got accepted today). I really enjoy the vibe at education conferences as there is a slight sense of urgency in case anyone says anything that someone might be able to use/adapt/steal so as to improve their teaching and have a real impact on our students’ lives.

Two final links:

  • The #innovcardiff hashtag if you would like to see what was being said online about the innovation conference (big applause to Calvin Smith who did a tremendous job on there!);
  • The #cardiffedu hashtag if you would like to see the same for the Cardiff University education conference.

If anyone who attended the conferences has anything to add it would be great to hear from you and if anyone couldn’t make it but would like to know more: please get in touch :)

July 15, 2014 12:00 AM

July 14, 2014

Simon Spicer

Cythonize!

I'm at the stage where my code essentially works: it does everything I initially set out to have it do, including computing the aforementioned zero sums for elliptic curve $L$-functions. However, the code is written in pure Python, and it is therefore not as fast as it could be.

This is often not a problem; Python is designed to be easy to read and maintain, and I'm hoping that the Python code I wrote is both of those. If we were just planning to run it on elliptic curves with small coefficients - for example, the curve represented by the equation $y^2=x^3-16x+16$ - this wouldn't be an issue. Curves with small coefficients have small conductors and thus few low-lying zeros near the central point, which allows us to run the zero sum code on them with small Delta parameter values. A small Delta value means the computation will finish very quickly regardless of how efficiently it's implemented, so it probably wouldn't be worth my while trying to optimize the code in that case.

To illustrate this point, here is the first most high-level, generic version of the method that computes the sum $\sum_{\gamma} \text{sinc}^2(\Delta \gamma)$ over the zeros of a given elliptic curve $L$-function (minus documentation):
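
In outline it goes something like the following sketch. This is written as a standalone function rather than the actual class method: cn stands in for the coefficient method, C0 for the conductor-dependent constant term of the explicit formula, and the truncated loop over k stands in for the SciPy spence() evaluation mentioned in the next paragraph.

def zerosum_sincsquared(cn, C0, Delta=1):
    """
    Sketch: evaluate the sum of sinc^2(Delta*gamma) over the zeros of an
    elliptic curve L-function via the explicit formula.  Here cn is a callable
    returning the logarithmic derivative coefficient c_n (zero unless n is a
    prime power), and C0 is the constant -euler_gamma + log(sqrt(N)/(2*pi))
    for a curve of conductor N.
    """
    from math import exp, log, pi
    two_pi_delta = 2 * pi * Delta
    bound = exp(two_pi_delta)   # we need c_n for every n below this bound
    result = C0
    # piece coming from the digamma function, truncated here for simplicity
    result += sum((1 - exp(-two_pi_delta * k)) / k ** 2
                  for k in range(1, 1000)) / two_pi_delta
    # iterate over every integer n below the bound, calling cn() each time
    for n in range(2, int(bound) + 1):
        c = cn(n)
        if c:
            result += c * (1 - log(n) / two_pi_delta)
    return result / (pi * Delta)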

[Of course, there's plenty going on in the background here. I have a separate method, self.cn() which computes the logarithmic derivative coefficients, and I call the SciPy function spence() to compute the part of the sum that comes from the Fourier transform of the digamma function $\frac{\Gamma^{\prime}}{\Gamma}$. Nevertheless, the code is simple and straightforward, and (hopefully) it's easy to follow the logic therein.]

However, the whole point of my GSoC project is to produce code that can be used for mathematical research; ultimately we want to push computations as far as we can and run the code on elliptic curves with large conductor, since curves with small conductor are already well-understood. Ergo, it's time I thought about going back over what I've written and seeing how I can speed it up.

There are two distinct ways to achieve speedups. The first is to rewrite the code more cleverly to eliminate unnecessary loops, coercions, function calls etc. Here is a second version I have written of the same function (still in Python):

The major change I've made between the two versions is improving how the sum involving the logarithmic derivative coefficients is computed - captured in the variable y. In the first version, I simply iterated over all integers $n$ up to the bound $t$, calling the method self.cn() each time. However, the logarithmic derivative coefficient $c_n$ is zero whenever $n$ is not a prime power, and knowing its value at a prime $n=p$ allows us to efficiently compute its value at any power $p^k$ of that prime. It therefore makes sense to do everything "in-house": eliminate the method call to self.cn(), iterate only over primes, and compute the logarithmic derivative coefficients for all powers of a given prime together.
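
In rough outline the prime-by-prime bookkeeping looks something like the sketch below. This is only a sketch: it uses Sage's prime_range() and E.ap(), it skips the primes dividing the conductor (which need separate treatment), and it leaves the overall sign convention for the $c_n$, as well as the other terms of the formula, to the surrounding method.

def cn_sum_over_primes(E, Delta=1):
    """
    Sketch: accumulate the prime power contributions (alpha^m + beta^m)*log(p)/p^m,
    weighted by (1 - log(p^m)/(2*pi*Delta)), over all prime powers p^m below
    e^(2*pi*Delta), for primes p of good reduction for the elliptic curve E.
    """
    from math import exp, log, pi
    two_pi_delta = 2 * pi * Delta
    bound = exp(two_pi_delta)
    N = E.conductor()
    total = 0.0
    for p in prime_range(int(bound) + 1):   # prime_range is a Sage builtin
        if N % p == 0:
            continue                        # bad primes: handled elsewhere
        ap, pf, logp = float(E.ap(p)), float(p), log(p)
        # alpha^m + beta^m for the Frobenius eigenvalues alpha, beta at p:
        # s_0 = 2, s_1 = a_p, s_m = a_p*s_{m-1} - p*s_{m-2}
        s_prev, s_curr = 2.0, ap
        pm = pf                             # the current prime power p^m
        while pm < bound:
            total += (s_curr * logp / pm) * (1 - log(pm) / two_pi_delta)
            s_prev, s_curr = s_curr, ap * s_curr - pf * s_prev
            pm *= pf
    return total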

Let's see how the methods match up in terms of speed. Below we run the two versions of the zero sum method on the elliptic curve $E: y^2=x^3-16x+16$, which is a rank 1 curve of conductor 37:

sage: import sage.lfunctions.zero_sums as zero_sums
sage: ZS = zero_sums.LFunctionZeroSum(EllipticCurve([-16,16]))
sage: ZS._zerosum_sincsquared(Delta=1)
1.01038406984
sage: ZS._zerosum_sincsquared_fast(Delta=1)
1.01038406984
sage: %timeit ZS._zerosum_sincsquared(Delta=1)
10 loops, best of 3: 20.5 ms per loop
sage: %timeit ZS._zerosum_sincsquared_fast(Delta=1)
100 loops, best of 3: 3.46 ms per loop

That's about a sixfold speedup we've achieved, just by reworking the section of the code that computes the $c_n$ sum.

The downside, of course, is that the code in method version 2 is more complicated - and thus less readable - than that in version 1. This is often the case in software development: you can write code that is elegant and easy to read but slow, or you can write code that is fast but horribly complicated and difficult to maintain. And when it comes to mathematical programming, unfortunately, sometimes the necessity for speed trumps readability.

The second major way to achieve speedups is to abandon pure Python and switch to a more low-level language. I could theoretically take my code and rewrite it in C, for example; if done relatively intelligently I'm sure the resulting code would blow what I have here out the water in terms of speed. However, I have no experience writing C code, and even if I did getting the code to interface with the rest of the Sage codebase would be a major headache.

Thankfully there is a happy middle ground: Cython. Cython is a programming language - technically a superset of Python - that allows direct interfacing with C and C++ data types and structures. Code written in Cython can be as fast as C code and nearly as readable as pure Python. Crucially, because it's so similar to Python it doesn't require rewriting all my code from scratch. And Sage already knows how to deal with Cython, so there aren't any compatibility issues there.

I am therefore in the process of doing exactly that: rewriting my code in Cython. Mostly this is just a cut-and-paste job and is pleasingly hassle-free; however, in order to achieve the desired speedups, the bottleneck methods - such as our $\text{sinc}^2$ zero sum method above - must be modified to make use of C data types.

Here is the third, most recent version of the _zerosum_sincsquared() method for our zero sum class, this time written in Cython:

Definitely longer and uglier. I now must declare my (C) variables up front; previously Python just created them on the fly, which is nice but slower than allocating memory space for the variables a priori. I've eliminated the use of complex arithmetic, so that everything can be done using C integer and float types. I still iterate over all primes up to the bound $t$; however, now I deal with those primes that divide the conductor of $E$ (for which the associated summand is calculated slightly differently) beforehand, so that in the main loop I don't have to check at each point if my prime $p$ divides the conductor or not [This last check is expensive: the conductor $N$ can be very large - too large even to cast as a C long long - so we would have to use slower Python or Sage data types to represent it. Better to get rid of the check altogether].

Let's see how this version holds up in a speed test. The Cython code has already been built into Sage and the class loaded into the global namespace, so I can just call it without having to attach or load any file:

sage: ZS = LFunctionZeroSum(EllipticCurve([-16,16]))
sage: ZS._zerosum_sincsquared_fast(Delta=1)
1.01038406984
sage: %timeit ZS._zerosum_sincsquared_fast(Delta=1)
1000 loops, best of 3: 1.72 ms per loop

The good news: the Cythonized version of the method produces the same output as the Python versions, and it's definitely faster. The bad news: the speedup is only about a factor of 2, which isn't hugely impressive given how much uglier the code is.

Why is this? Crucially, we still iterate over all integers up to the bound $t$, testing for primality at each step. This is very inefficient: most integers are not prime (in fact, asymptotically 0 percent of all positive integers are prime); we should be using sieving methods to eliminate primality testing at those integers which we already know to be composite. For example, we should at the very least only ever iterate over the odd numbers beyond 3. That immediately halves the number of primality tests we have to do, and we should therefore get a comparable speedup if primality testing is what is dominating the runtime in this method.

This is therefore what I hope to implement next: rework the zero sum code yet again to incorporate prime sieving. This has applicability beyond just the $\text{sinc}^2$ method: all explicit formula-type sums at some point invoke summing over primes or prime powers, so having access to code that can do this quickly would be a huge boon.

by Simon Spicer (noreply@blogger.com) at July 14, 2014 12:37 PM

July 11, 2014

Nikhil Peter

Sage Android – UI Update

It’s been a busy week so far in the land of UI improvements. Disclaimer: I’m pretty bad at UI, so any and all suggestions are welcome. Firstly, last week’s problems have been resolved viz. Everything is saved nicely on orientation change, including the interacts, which required quite a bit of effort. In short, it involved […]

by hav3n at July 11, 2014 05:59 PM

July 08, 2014

Simon Spicer

The Birch and Swinnerton-Dyer Conjecture and computing analytic rank

Let $E$ be an elliptic curve with $L$-function $L_E(s)$. Recall that my Google Summer of Code project is to implement in Sage a method that allows us to compute $\sum_{\gamma} f(\Delta \gamma)$, where $\gamma$ ranges over the imaginary parts of the nontrivial zeros of $L_E$, $\Delta$ is a given positive parameter, and $f(x)$ is a specified symmetric continuous integrable function that is 1 at the origin. The value of this sum then bounds the analytic rank of $E$ - the number of zeros at the central point - from above, since we are summing $1$ with multiplicity $r_{an}$ in the sum, along with some other nonzero positive terms (that are hopefully small). See this post for more info on the method.

One immediate glaring issue here is that zeros that are close to the central point will contribute values that are close to 1 in the sum, so the curve will then appear to have larger analytic rank than it actually does. An obvious question, then, is to ask: how close can the noncentral zeros get to the central point? Is there some way to show that they cannot be too close? If so, then we could figure out just how large of a $\Delta$ we would need to use in order to get a rank bound that is actually tight.

The rank zero curve $y^2 = x^3 - 7460362000712 x - 7842981500851012704$ has an extremely low-lying zero at $\gamma = 0.0256$ (and thus another one at $-0.0256$); as a result the zero sum looks like it's converging towards a value of 2 instead of the actual analytic rank of zero. In order to actually get a sum value out that's less than one we would have to use a $\Delta$ value of about 20; this is far beyond what's feasible due to the exponential dependence of the zero sum method on $\Delta$.

The good news is that there is hope in this regard; the nature of low-lying zeros for elliptic curve $L$-functions is actually the topic of my PhD dissertation (which I'm still working on, so I can't provide a link thereto just yet!). In order to understand how close the lowest zero can get to the central point we will need to talk a bit about the BSD Conjecture.

The Birch and Swinnerton-Dyer Conjecture is one of the two Clay Math Millennium Problems related to $L$-functions. The conjecture is comprised of two parts; the first part I mentioned briefly in this previous post. However, we can use the second part to gain insight into how good our zero sum based estimate for analytic rank will be.

Even though I've stated the first part of the BSD Conjecture before, for completeness I'll reproduce the full conjecture here. Let $E$ be an elliptic curve defined over the rational numbers, e.g. a curve represented by the equation $y^2 = x^3 + Ax + B$ for some integers $A$ and $B$ such that $4A^3+27B^2 \ne 0$. Let $E(\mathbb{Q})$ be the group of rational points on the elliptic curve, and let $r_{al}$ be the algebraic rank of $E(\mathbb{Q})$. Let $L_E(s)$ be the $L$-function attached to $E$, and let $L_E(1+s) = s^{r_{an}}\left[a_0 + a_1 s + \ldots\right]$ be the Taylor expansion of $L_E(s)$ about $s=1$ such that the leading coefficient $a_0$ is nonzero; $r_{an}$ is called the analytic rank of $E$ (see here for more details on all of the above objects). The first part of the BSD conjecture asserts that $r_{al}=r_{an}$; that is, the order of vanishing of the $L$-function about the central point is exactly equal to the number of free generators in the group of rational points on $E$.

The second part of the conjecture asserts that we actually know the exact value of that leading coefficient $a_0$ in terms of other invariants of $E$. Specifically:
$$ a_0 = \frac{\Omega_E\cdot\text{Reg}_E\cdot\prod_p c_p\cdot\#\text{Ш}(E/\mathbb{Q})}{(\#E_{\text{Tor}}(\mathbb{Q}))^2}. $$

Fear not if you have no idea what any of these quantities are. They are all things that we know how to compute - or at least estimate in size - and there is a short Sage sketch after the list below showing how to get at each of them. I provide below brief descriptions of each of these quantities; however, feel free to skip this part. It suffices to know that we have a formula for computing the exact value of that leading coefficient $a_0$.
  1. $\#E_{\text{Tor}}(\mathbb{Q})$ is the number of rational torsion points on $E$. Remember that the solutions $(x,y)$ to the elliptic curve equation $y^2 = x^3 + Ax+B$, where $x$ and $y$ are both rational numbers, form a group. Recall also that the group of rational points $E(\mathbb{Q})$ may be finite or infinite, depending on whether the group has algebraic rank zero, or greater than zero. However, it turns out that there are only ever finitely many torsion points - those which can be added to themselves some finite number of times to get the group identity element. These points of finite order form a subgroup, denoted $E_{\text{Tor}}(\mathbb{Q})$, and the quantity in question is just the size of this finite group (squared in the formula). In fact, it's been proven that the size of $E_{\text{Tor}}(\mathbb{Q})$ is at most 16.
  2. $\Omega_E$ is the real period of $E$. This is perhaps a bit more tricky to define, but it essentially is a number that measures the size of the set of real points of $E$. If you plot the graph of the equation representing $E: y^2 = x^3 + Ax + B$ on the cartesian plane, you get something that looks like one of the following:

    The plots of the real solutions to four elliptic curves, and their associated real periods.

    There is a way to assign an intrinsic "size" to these graphs, which we denote the real period $\Omega_E$. The technical definition is that $\Omega_E$ is equal to the integral of the absolute value of the differential $\omega = \frac{dx}{2y}$ along the part of the real graph of $E$ that is connected to infinity (that or twice that depending on whether the cubic equation $x^3 + Ax + B$ has one or three real roots respectively).
  3. $\text{Reg}_E$ is the regulator of $E$. This is a number that measures the "density" of rational points on $E$. Recall that $E(\mathbb{Q}) \approx T \times \mathbb{Z}^{r_{an}}$, i.e. the free part of $E(\mathbb{Q})$ is isomorphic to $r_{an}$ copies of the integers. There is a canonical way to embed the free part of $E(\mathbb{Q})$ in $\mathbb{R}^{r_{an}}$ as a lattice; the regulator $\text{Reg}_E$ is the volume of the fundamental domain of this lattice. The thing to take away here is that elliptic curves with small regulators have lots of rational points whose coefficients have small numerators and denominators, while curves with large regulators have few such points.
  4. $\prod_p c_p$ is the Tamagawa product for $E$. For each prime $p$, one can consider the points on $E$ over the $p$-adic numbers $\mathbb{Q}_p$. The Tamagawa number $c_p$ is the ratio of the size of the full group of $p$-adic points on $E$ to the subgroup of $p$-adic points that are connected to the identity. This is always a positive integer, and crucially, in all but a finite number of cases the ratio is 1. Thus we can consider the product of the $c_p$ as we range over all prime numbers, and this is precisely the definition of the Tamagawa product.
  5. $\#\text{Ш}(E/\mathbb{Q})$ is the order of the Tate-Shafarevich group of $E$ over the rational numbers. The Tate-Shafarevich group of $E$ is probably the most mysterious part of the BSD formula; it is defined as the subgroup of the Weil–Châtelet group $H^1(G_{\mathbb{Q}},E)$ that becomes trivial when passing to any completion of $\mathbb{Q}$. If you're like me then this definition will be completely opaque; however, we can think of $\text{Ш}(E/\mathbb{Q})$ as measuring how much $E$ violates the local-global principle: that one should be able to find rational solutions to an algebraic equation by finding solutions modulo a prime number $p$ for each $p$, and then piecing this information together with the Chinese Remainder Theorem to get a rational solution. Curves with nontrivial $\text{Ш}$ have homogeneous spaces that have solutions modulo $p$ for all $p$, but no rational points. The main thing here is that $\text{Ш}$ is conjectured to be finite, in which case $\#\text{Ш}(E/\mathbb{Q})$ is just a positive integer (in fact, it can be shown for elliptic curves that if $\text{Ш}$ is indeed finite, then its size is a perfect square).
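
All five of these quantities - or, in the case of Ш, its conjectural analytic order - can be computed directly in Sage. A quick sketch, using the rank 2 curve 389a purely as an example:

E = EllipticCurve('389a')               # example curve: rank 2, conductor 389
torsion  = E.torsion_order()            # size of E_Tor(Q)
omega    = E.period_lattice().omega()   # real period Omega_E
reg      = E.regulator()                # regulator Reg_E
tamagawa = E.tamagawa_product()         # product of the Tamagawa numbers c_p
sha      = E.sha().an()                 # analytic order of Sha (conjecturally its actual order)

# the right hand side of the BSD formula above:
a0_conjectural = omega * reg * tamagawa * sha / torsion ** 2
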
Why is the BSD Conjecture relevant to rank estimation? Because it helps us overcome the crucial obstacle to computing analytic rank exactly: without extra knowledge, it's impossible to decide using numerical techniques whether the $n$th derivative of the $L$-function at the central point is exactly zero, or just so small that it looks like it is zero to the precision that we're using. If we can use the BSD formula to show a priori that $a_0$ must be at least $M_E$ in magnitude, where $M_E$ is some number that depends only on some easily computable data attached to the elliptic curve $E$, then all we need to do is evaluate successive derivatives of $L_E$ at the central point to enough precision to decide if that derivative is less than $M_E$ or not; this is readily doable on a finite precision machine. Keep going until we hit a derivative which is then definitely greater than $M_E$ in magnitude, at which point we can halt and declare that the analytic rank is precisely the order of that derivative.

In the context of the explicit formula-based zero sum rank estimation method implemented in our GSoC project, the size of the leading coefficient also controls how close the lowest noncentral zero can be to the central point. Specifically, we have the following result: Let $\gamma_0$ be the lowest-lying noncentral zero for $L_E$ (i.e. the zero closest to the central point that is not actually at the central point); let $\Lambda_E(s) = N^{s/2}(2\pi)^{-s}\Gamma(s)L_E(s)$ be the completed $L$-function for $E$, and let $\Lambda_E(1+s) = s^{r_{an}}\left[b_0 + b_1 s + b_2 s^2 + \ldots\right]$ be the Taylor expansion of $\Lambda_E$ about the central point. Then:
$$ \gamma_0 > \sqrt{\frac{b_0}{b_2}}. $$
Thankfully, $b_0$ is easily relatable back to the constant $a_0$, and we have known computable upper bounds on the magnitude of $b_2$, so knowing how small $a_0$ is tells us how close the lowest zero can be to the central point. Thus bounding $a_0$ from below in terms of some function of $E$ tells us how large of a $\Delta$ value we need to use in our zero sum, and thus how long the computation will take as a function of $E$.

If this perhaps sounds a bit handwavy at this point it's because this is all still a work in progress, to be published as part of my dissertation. Nevertheless, the bottom line is that bounding the leading $L$-series coefficient from below gives us a way to directly analyze the computational complexity of analytic rank methods. 

I hope to go into more detail in a future post about what we can do to bound from below the leading coefficient of $L_E$ at the central point, and why this is a far from straightforward endeavour.

by Simon Spicer (noreply@blogger.com) at July 08, 2014 06:04 PM

July 06, 2014

David Horgan (quantumtetrahedron)

Sagemath21: Knot theory calculations

This week I have been working on a variety of topics, but I’d like to post about some sagemath knot theory investigations I’ve been doing. This is connected to one of my longer term aims, which is to perform SU(2) recoupling calculations based on Temperley-Lieb algebras using Sage Mathematics Software. Essentially I would like to write a set […]

by David Horgan at July 06, 2014 07:23 PM

Vince Knight

My CSV python video gets 10000 views

So the other day I got the following notification on my phone:

This both terrified me and made me smile. It’s nice to think that a bunch of people have (hopefully) been helped by what I put together, but it’s also slightly worrying as I think I would be able to put together a much better video if I did it now.

Here it is:

The basic code I use in the video is:

import csv

out = open('data.csv', 'r')  # Open a file in read mode
data = csv.reader(out)  # Initiate a csv reader object which will parse the data
data = [[row[0], eval(row[1]), eval(row[2])] for row in data]  # Read in the data and convert certain entries of each row
out.close()  # Close the file

new_data = [[row[0], row[1] + row[2]] for row in data]  # Create some new data

out = open('new_data.csv', 'w')  # Open a file in write mode
output = csv.writer(out)  # Initiate a csv writer object which will write the data in the correct format (csv)

for row in new_data:  # Loop through the data
    output.writerow(row)  # Write the row to file

out.close()  # Close the file

There are a couple of other videos that are getting near that landmark; this is possibly my favourite of them:

The above video shows how to simulate a basic queue in Python. My Summer student James Campbell and I are working on something related at the moment.

July 06, 2014 12:00 AM

July 02, 2014

Vince Knight

A gitignore file for Python (Sage) and LaTeX

The main tools I use on a day to day basis are Python/Sage and LaTeX. Since learning to love git and even more so github my usual workflow when starting a new project is something like this:

$ git init
$ vim proof_that_there_are_a_finite_number_of_primes.py
...
$ vim application_for_fields_medal.tex
$ latexmk --xelatex application_for_fields_medal.tex
$ git status

Once I run the git status that’s when I realise that I’ve now got a bunch of files I don’t want to commit: .aux, .pyc etc…

I then normally throw whatever those files are into a .gitignore file. Despite teaching my students to never do anything twice and to always script whatever can be scripted, I have never actually put a .gitignore file together that has everything in it.

So here’s my first attempt.

This is basically a modification of the Python github .gitignore and a slight tweak of this gist (there doesn’t seem to be a bespoke LaTeX .gitignore as an option when you create a repository).

# LaTeX Files
*.aux
*.glo
*.idx
*.log
*.toc
*.ist
*.acn
*.acr
*.alg
*.bbl
*.blg
*.dvi
*.glg
*.gls
*.ilg
*.ind
*.lof
*.lot
*.maf
*.mtc
*.mtc1
*.out
*.synctex.gz
*.fdb_latexmk
*.fls
*.nav
*.snm

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]

# C extensions
*.so

# Distribution / packaging
.Python
env/
bin/
build/
develop-eggs/
dist/
eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml

# Translations
*.mo

# Mr Developer
.mr.developer.cfg
.project
.pydevproject

# Rope
.ropeproject

# Django stuff:
*.log
*.pot

# Sphinx documentation
docs/_build/

You can find this at this gist.

If anyone can suggest anything I’m missing it’d be great to hear it :)

July 02, 2014 12:00 AM

June 28, 2014

Nikhil Peter

GSoC – Post Mid-Term Update

Well, Mid-Terms have just finished(I passed!), and I’ve been neglecting this blog so here are some updates. The app is now in beta! More info can be found at the sage-devel post here Most of the work done is internal, but there are a few external fixes and enhancements as well. Agendas for the next […]

by hav3n at June 28, 2014 05:41 PM

June 27, 2014

Vince Knight

Pointing at a blog post about Sage and a treasure hunt

It looks like I might be moving... I've been playing with jekyll + github and really love the workflow so it looks like I might be moving this blog over there.

It's not official yet as I still need to think about a couple of things but in the meantime here's a post I just wrote over there about how the team I was in used Sage in a mathematical treasure hunt: http://goo.gl/iLgGnL

by Vincent Knight (noreply@blogger.com) at June 27, 2014 12:10 PM

Vince Knight

Using Sage in a Treasure hunt

Here is my first post using jekyll, I’ll see how this goes and potentially migrate my blog completely to here.

This past year I’ve had a very minor role helping to organise the EURO Summer Institute for Healthcare. This took place from June 11 to 20 and it was really cool with 20 (ish) young academics (from first year PhD student to early years of Post Doc) coming together to present a paper they’re working on and get bespoke feedback from peers and an expert in the field.

The academic activities are really valuable but arguably of more value is the fact that everyone comes together and has a good time. One such ‘good time’ was a treasure hunt that, because some people dropped out, I was lucky enough to take part in. This was really great fun and in this post I’ll describe the problems as well as the Sage code my team used to almost win!

Here’s a photo that Paul Harper took of Pieter, Omar, Jenny and I huddled around Sage:

The treasure hunt involved us running around the small village to find an envelope with a particular mathematical problem. Once we’d solved that problem we would be given an indication as to where our next problem was.

Here are the various challenges (again: thanks to Paul for taking the photos!):

A clinical pathway problem (we did this one by hand):

A queueing problem:

To solve the above we wrote down the Markov chain (that’s actually the picture that Pieter is working on in the first photo of this blog post). Here’s the transition matrix corresponding to the Markov Chain:

\[\begin{pmatrix} -\frac{4}{45} & \frac{4}{45} & 0 & 0 & 0 \\
\frac{1}{20} & -\frac{5}{36} & \frac{1}{45} & \frac{1}{15} & 0 \\
0 & \frac{1}{10} & -\frac{1}{6} & 0 & \frac{1}{15} \\
0 & \frac{1}{20} & 0 & -\frac{13}{180} & \frac{1}{45} \\
0 & 0 & 0 & \frac{1}{10} & -\frac{1}{10} \end{pmatrix}\]

At the very beginning of the treasure hunt the organisers encouraged everyone to go and get their laptops. We originally thought that we’d manage without but at this point I ran to my room to go get my laptop :)

Here’s how we obtained the steady state probability \(\pi\):

Q = matrix([[-4/45,4/45,0,0,0,1],
            [1/20,-(1/20+3/45+1/45),1/45,3/45,0,1],
            [0,2/20,-(2/20+3/45),0,3/45,1],
            [0,1/20,0,-1/20-1/45,1/45,1],
            [0,0,0,2/20,-2/20,1]])
pi = Q.solve_left(vector([0,0,0,0,0,1]))

The above solves the matrix equation \(\pi Q=0\) with an extra column in \(Q\) to ensure that the elements of \(\pi\) sum to 1.

Finally, the particular question asked for the following weighted sum (corresponding to the mean utilisation):

(pi[4]+2*pi[2]+1*pi[3]+1*pi[1])/2

This gave a value of about \(0.499\).

I’m obviously missing out a couple of details (the actual description of the state space) but I’ll leave that as an exercise.

A Chinese postman problem:

I’m not entirely sure what we did here as Pieter kind of ran with this one but at some point I was asked to calculate a matching on our graph. Here’s the Sage code:

sage: K = 500  # The inbuilt matching algorithm aim to maximise: we wanted to minimise so I threw a large constant in...
sage: G = Graph( { 1: {2: -23+K, 5:-20+K, 8:-19+K},
....:              2: {1: -23+K, 3:-8+K, 5:-18+K},
....:              3: {2: -8+K, 4:-5+K, 5:-16+K},
....:              4: {3: -5+K, 5:-14+K, 7:-7+K, 10:-21+K},
....:              5: {1: -20+K, 2:-18+K, 3:-16+K, 4:-14+K, 6:-1+K},
....:              6: {5: -1+K, 7:-8+K, 8:-20+K, 9:-12+K, 10:-17+K},
....:              7: {4: -7+K, 6:-8+K},
....:              8: {1: -19+K, 6:-20+K, 9:-15+K, 11:-20+K},
....:              9: {6:-12+K, 8:-15+K, 10:-14+K, 11:-12+K},
....:              10: {4:-21+K, 6:-17+K, 9:-14+K, 11:-18+K},
....:              11: {8:-20+K, 9:-12+K, 10:-18+K},
....:               }, sparse = True)
sage: G.matching()
[(1, 8, 481), (2, 3, 492), (4, 7, 493), (5, 6, 499), (9, 11, 488)]

I’m not entirely sure that’s the right Graph I was supposed to use but it’s what I have left over in my Sage notebook…

A packing problem:

I had remembered seeing that packing problems were implemented in Sage so as we were running back from collecting the clue I yelled: this will take 3 minutes!

Sadly it wasn’t that simple, as the only implementation involves bins of the same size (which isn’t the case here). While I read around the docs the rest of my team was able to solve this heuristically, which left us with the last challenge and at this point nicely in the lead!
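
One way the unequal bins could have been handled directly is as a small integer program via Sage's MixedIntegerLinearProgram; here is a rough sketch (the item sizes and bin capacities below are made-up placeholders, not the actual puzzle data):

items = [4, 7, 9, 3, 5, 6]            # hypothetical item sizes
capacities = [10, 12, 15]             # hypothetical (unequal) bin capacities

p = MixedIntegerLinearProgram(maximization=False)
x = p.new_variable(binary=True)       # x[i, j] == 1 iff item i goes in bin j
y = p.new_variable(binary=True)       # y[j] == 1 iff bin j is used at all

for i in range(len(items)):           # every item must go in exactly one bin
    p.add_constraint(p.sum([x[i, j] for j in range(len(capacities))]) == 1)
for j, cap in enumerate(capacities):  # respect each bin's own capacity
    p.add_constraint(p.sum([items[i] * x[i, j] for i in range(len(items))]) <= cap * y[j])

p.set_objective(p.sum([y[j] for j in range(len(capacities))]))  # use as few bins as possible
p.solve()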

The problem of doom that made everyone cry:

After having pulled in the data here is about as far as Sage got us:

p = list_plot(healthy, color='red', size=30)
p += list_plot(pathological, color='green',size=30)
p

This was a tricky problem. We had no wifi in our hotel so downloading a relevant R package was out of the question.

We eventually formulated a nonlinear integer program but sadly our solver didn’t seem able to handle it in time. With two minutes to go (the deadline was dinner) one of the teams who had been doing things very steadily ran over claiming that they might have a solution. Everyone went quiet and walked over as their solution was checked and verified. It was a really nice moment as everyone clapped and cheered. Here’s a picture of a bunch of us crowded around the solution, understanding how they took a stab at fixing some of the variables so that the solver could get to a solution.

This was a phenomenal piece of fun. You can find another description with a bunch of funny photos of us all struggling here.

If anyone has some suggestions to how we could have solved any problems any faster I’d be delighted to hear them :)

June 27, 2014 12:00 AM

June 26, 2014

Simon Spicer

Fun with elliptic curve L-function explicit formulae

Although I gave a brief exposition of the "baseline" explicit formula for the elliptic curve $L$-functions in a previous post, I wanted to spend some more time showing how one can use this formula to prove some useful statements about the elliptic curves and their $L$-functions. Some of these are already implemented in some way in the code I am building for my GSoC project, and I hope to include more as time goes on.

First up, let's fix some notation. For the duration of this post we fix an elliptic curve $E$ over the rationals with conductor $N$. Let $L_E(s)$ be its associated $L$-function, and let $\Lambda_E(s)$ be the completed $L$-function thereof, i.e.
$\Lambda_E(s) = N^{s/2}(2\pi)^{-s}\Gamma(s)L_E(s)$, where $\Gamma$ is the usual Gamma function on $\mathbb{C}$. If you want to know more about how $E$, $L_E$ and $\Lambda_E$ are defined and how we compute with them, some of my previous posts (here and here) go into their exposition in more detail.

Let's recap briefly how we derived the baseline explicit formula for the $L$-function of $E$ (see this post for more background thereon). Taking logarithmic derivatives of the above formula for $\Lambda_E(s)$ and shifting to the left by one gives us the following equality:
$$\frac{\Lambda_E^{\prime}}{\Lambda_E}(1+s) = \log\left(\frac{\sqrt{N}}{2\pi}\right) + \frac{\Gamma^{\prime}}{\Gamma}(1+s) + \frac{L_E^{\prime}}{L_E}(1+s).$$
Nothing magic here yet. However, $\frac{\Lambda_E^{\prime}}{\Lambda_E}$, $\frac{\Gamma^{\prime}}{\Gamma}$ and $\frac{L_E^{\prime}}{L_E}$ all have particularly nice series expansions about the central point. We have:

  • $\frac{\Lambda_E^{\prime}}{\Lambda_E}(1+s) = \sum_{\gamma} \frac{s}{s^2+\gamma^2}$, where $\gamma$ ranges over the imaginary parts of the zeros of $L_E$ on the critical line; this converges for any $s$ not equal to a zero of $L_E$.
  • $\frac{\Gamma^{\prime}}{\Gamma}(1+s) = -\eta + \sum_{k=1}^{\infty} \frac{s}{k(k+s)}$, where $\eta$ is the Euler-Mascheroni constant $=0.5772156649\ldots$ (this constant is usually denoted by the symbol $\gamma$ - but we'll be using that symbol for something else soon enough!); this sum converges for all $s$ not equal to a negative integer.
  • $\frac{L_E^{\prime}}{L_E}(1+s) = \sum_{n=1}^{\infty} c_n n^{-s}$; this only converges absolutely in the right half plane $\Re(s)>\frac{1}{2}$.

Here
$$ c_n = \begin{cases}
-\left[p^m+1-\#E(\mathbb{F}_{p^m})\right]\frac{\log p}{p^m}, & n = p^m \mbox{ is a perfect prime power,} \\
0 & \mbox{otherwise.}\end{cases} $$

Assembling these equalities gives us the aforementioned explicit formula:
$$ \sum_{\gamma} \frac{s}{s^2+\gamma^2} = \left[-\eta+\log\left(\frac{\sqrt{N}}{2\pi}\right)\right] + \sum_{k=1}^{\infty} \frac{s}{k(k+s)} + \sum_n c_n n^{-s}$$
which holds for any $s$ where all three series converge. It is this formula which we will use repeatedly to plumb the nature of $E$.

For ease of exposition we'll denote the term in the square brackets $C_0$. It pitches up a lot in the math below, and it's a pain to keep writing out!

Some things to note:

  • $C_0 = -\eta+\log\left(\frac{\sqrt{N}}{2\pi}\right)$ is easily computable (assuming you know $N$). Importantly, this constant depends only on the conductor of $E$; it contains no other arithmetic information about the curve, nor does it depend in any way on the complex input $s$.
  • The sum $\sum_{k=1}^{\infty} \frac{s}{k(k+s)}$ doesn't depend on the curve at all. As such, when it comes to computing values associated to this sum we can just hard-code the computation before we even get to working with the curve itself.
  • The coefficients $c_n$ can be computed by counting the number of points on $E$ over finite fields up to some bound. This is quick to do for any particular prime power.

Good news: the code I've written can compute all the above values quickly and efficiently:

sage: E = EllipticCurve([-12,29])
sage: Z = LFunctionZeroSum(E)
sage: N = E.conductor()
sage: print(Z.C0(),RDF(-euler_gamma+log(sqrt(N)/(2*pi))))
(2.0131665172, 2.0131665172)
sage: print(Z.digamma(3.5),RDF(psi(3.5)))
(1.10315664065, 1.10315664065)
sage: Z.digamma(3.5,include_constant_term=False)
1.68037230555
sage: Z.cn(389)
-0.183966457901
sage: Z.cn(next_prime(1e9))
0.000368045198812
sage: timeit('Z.cn(next_prime(1e9))')
625 loops, best of 3: 238 µs per loop

So by computing values on the right we can compute the sum on the left - without having to know the exact locations of the zeros $\gamma$, which in general is hard to compute.

Now that we have this formula in the bag, let's put it to work.

NAÏVE RANK ESTIMATION


If we multiply the sum over the zeros by $s$ and let $\Delta = 1/s$, we get
$$\sum_{\gamma} \frac{\Delta^{-2}}{\Delta^{-2}+\gamma^2} = \sum_{\gamma} \frac{1}{1+(\Delta\gamma)^2}.$$
Note that for large values of $\Delta$, $\frac{1}{1+(\Delta\gamma)^2}$ is small but strictly positive for all nonzero $\gamma$, and 1 for the central zeros, which have $\gamma=0$. Thus the value of the sum when $\Delta$ is large gives a close upper bound on the analytic rank $r$ of $E$. That is, we can bound the rank of $E$ from above by choosing a suitably large value of $\Delta$ and computing the quantity on the right in the inequality below:
$$r < \sum_{\gamma} \frac{1}{1+(\Delta\gamma)^2} = \frac{1}{\Delta}\left[C_0 + \sum_{k=1}^{\infty} \frac{1}{k(1+\Delta k)} + \sum_n c_n n^{-1/\Delta}\right]. $$
Great! Right? Wrong. In practice this approach is not a very good one. The reason is the infinite sum over $n$ only converges absolutely for $\Delta<2$, and for Delta values as small as this, the obtained bound won't be very good. A value of $\Delta=2$, for example, gives us the zero sum $\sum_{\gamma} \frac{1}{1+(2\gamma)^2}$. If a curve's $L$-function has a zero with imaginary part about 0.5, for example - as many $L$-functions do - then such a zero will contribute 0.5 to the sum. And because zeros come in symmetric pairs, the sum value will then be at least a whole unit larger than the actual analytic rank. In general, for $\Delta<2$ the computed sum can be quite a bit bigger than the actual analytic rank of $E$.

Moreover, even though the Generalized Riemann Hypothesis predicts that the sum $\sum_n c_n n^{-1/\Delta}$ does indeed converge for any positive value of $\Delta$, in practice the convergence is so slow that we end up needing to compute inordinate amounts of the $c_n$ in order to hope to get a good approximation of the sum. So no dice; this approach is just too inefficient to be practical.

A graphic depiction of the convergence problems we run into when trying to evaluate the sum over the $c_n$. For the elliptic curve $E: y^2 + y = x^3 - 79*x + 342$ the above plot shows the cumulative sum $\sum_{n<T} c_n n^{-1/4}$ for $T$ up to 100000; this corresponds to $\Delta=4$. Note that even when we include this many terms in the sum, its value still varies considerably. It's unlikely the final computed value is correct to a single decimal digit yet.

A BETTER APPROACH (THE ONE IMPLEMENTED IN MY CODE)


We can get around this impasse by evaluating a modified sum over the zeros, one which requires us to only ever need to compute finitely many of the $c_n$. Here's how we do it. Start with the sum $\sum_{\gamma} \frac{s}{s^2+\gamma^2}$, and divide by $s^2$ to get $\frac{1}{s(s^2+\gamma^2)}$. Now hearken back to your college sophomore math classes and take the inverse Laplace transform of this sum. We get
$$ \mathcal{L}^{-1} \left[\sum_{\gamma} \frac{1}{s(s^2+\gamma^2)}\right](t) = \sum_{\gamma} \mathcal{L}^{-1} \left[\frac{1}{s(s^2+\gamma^2)}\right](t) = \frac{t^2}{2} \sum_{\gamma} \left(\frac{\sin(\frac{t}{2}\gamma)}{\frac{t}{2}\gamma}\right)^2 $$
Letting $\Delta = \frac{t}{2\pi}$ we get the sum
$$ \mathcal{L}^{-1} \left[\sum_{\gamma} \frac{s}{s^2+\gamma^2}\right](\Delta) = 2\pi^2\Delta^2 \sum_{\gamma} \text{sinc}^2(\Delta\gamma), $$
where $\text{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$, and $\text{sinc}(0) = 1$.

Note that as before, $\text{sinc}^2(\Delta\gamma)$ evaluates to 1 for the central zeros, and is small but positive for all nonzero $\gamma$ when $\Delta$ is large. So again, this sum will give an upper bound on the analytic rank of $E$, and this bound converges to the true analytic rank as $\Delta \to \infty$.

If we do the same - divide by $s^2$ and take inverse Laplace transforms - to the quantities on the right, we get the following:

  • $\mathcal{L}^{-1} \left[\frac{C_0}{s^2} \right] = 2\pi\Delta C_0;$
  • $ \mathcal{L}^{-1} \left[\sum_{k=1}^{\infty} \frac{1}{sk(k+s)}\right] = \sum_{k=1}^{\infty} \frac{1}{k^2}\left(1-e^{-2\pi\Delta k}\right);$
  • $\mathcal{L}^{-1} \left[ \sum_{n=1}^{\infty} c_n \frac{n^{-s}}{s^2} \right] = \sum_{\log n<2\pi\Delta} c_n\cdot(2\pi\Delta - \log n). $
Note that the last sum is finite: for any given value of $\Delta$, we only need to compute the $c_n$ for $n$ up to $e^{2\pi\Delta}$. This is the great advantage here: we can compute the sum exactly without introducing any truncation error. The other two quantities are also readily computable to any precision.

Combining the above values and dividing by $2\pi^2\Delta^2$, we arrive at the rank bounding formula which is implemented in my GSoC code, hideous as it is:
\begin{align*}
r < \sum_{\gamma} \text{sinc}^2(\Delta\gamma) =  \frac{1}{\pi\Delta} &\left[ C_0 + \frac{1}{2\pi\Delta}\sum_{k=1}^{\infty} \frac{1}{k^2}\left(1-e^{-2\pi\Delta k}\right) \right. \\
&\left. + \sum_{n<e^{2\pi\Delta}} c_n\left(1 - \frac{\log n}{2\pi\Delta}\right) \right] \end{align*}
Of course, the number of terms needed on the right hand side is still exponential in $\Delta$, so this limits the tightness of the sum we can compute on the left hand side. In practice a personal computer can compute the sum with $\Delta=2$ in a few seconds, and a more powerful cluster can handle $\Delta=3$ in a similar time. Beyond Delta values of about 4, however, the number of $c_n$ is just too great to make the sum effectively computable.

THE DENSITY OF ZEROS ON THE CRITICAL LINE


Explicit formula-type sums can also be used to answer questions about the distribution of zeros of $L_E$ along the critical line. Let $T$ be a positive real number, and $M_E(T)$ be the counting function that gives the number of zeros in the critical strip with imaginary part at most $T$ in magnitude. By convention, when $T$ coincides with a zero of $L_E$, we count that zero with weight $\frac{1}{2}$. In other words, we can write
$$ M_E(T) = \sum_{|\gamma|<T} 1 + \sum_{|\gamma|=T} 1/2.$$
There is a more mathematically elegant way to write this sum. Let $\theta(x)$ be the Heaviside function on $x$ given by
$$ \theta(x) = \begin{cases} 0, & x<0 \\
\frac{1}{2}, & x=0 \\
1, & x>0. \end{cases}$$
Then we have that
$$M_E(T) = \sum_{\gamma} \theta(T^2-\gamma^2). $$
We have written $M_E(T)$ as a sum over the zeros of $L_E$, so we expect that it comprises the left-hand side of some explicit formula-type sum. This is indeed the case. Using Fourier transforms we can show that
$$M_E(T) = \frac{2}{\pi}\left[C_0 T + \sum_{k=1}^{\infty} \left(\frac{T}{k} - \arctan \frac{T}{k}\right) + \sum_{n=1}^{\infty} \frac{c_n}{\log n}\cdot \sin(T\log n) \right] $$
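
Truncated suitably, the right hand side is straightforward to evaluate numerically. Here is a sketch using the C0() and cn() methods demonstrated above; the wrapper function and the truncation bounds are mine and chosen arbitrarily:

def M(Z, T, k_max=10000, n_max=10000):
    """
    Sketch: truncated evaluation of the zero-counting formula above, where Z
    is a zero sum object exposing C0() and cn() as in the earlier session.
    """
    from math import atan, log, pi, sin
    T = float(T)
    smooth = Z.C0() * T + sum(T / k - atan(T / k) for k in range(1, k_max))
    oscillating = sum(Z.cn(n) / log(n) * sin(T * log(n)) for n in range(2, n_max))
    return (2 / pi) * (smooth + oscillating)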

With this formula in hand we can start asking questions on how the zeros of $L_E$ are distributed. For example, how does average zero density on the critical line scale as a function of $T$, the imaginary part of the area in the critical strip we're considering? How does zero density scale with the curve's conductor $N$?

If one assumes the Generalized Riemann Hypothesis, we can show that $\sum_{n=1}^{\infty} \frac{c_n}{\log n}\cdot \sin(T\log n) = O(\log T)$. Moreover, this sum is in a sense equidistributed about zero, so we expect its contribution for a sufficiently large randomly chosen value of $T$ to be zero. The other two quantities are more straightforward. The $C_0 T$ term is clearly linear in $T$. Going back to the definition of $C_0$, we see that $C_0 T = \frac{1}{2}T\log N + \text{constant}\cdot T$. Finally, we can show that the sum over $k$ equals $T\log T + O(\log T)$. Combining this gives us the estimate
$$ M_E(T) = \frac{1}{\pi} T (\log N + 2\log T+a) + O(\log T) $$
for some explicitly computable constant $a$ independent of $E$, $N$ or $T$. That is, the number of zeros with imaginary part at most $T$ in magnitude is "very close" to $\frac{1}{\pi}T(\log N + 2\log T)$. Another way to put it is that the number of zeros in the interval $[T,T+1]$ is about $\frac{1}{2}\log N + \log T$.

The value of $M_E(T)$ for $T$ up to 30, for the elliptic curve $E: y^2 + y = x^3 - x$. The black line is just the 'smooth part' of $M_E(T)$, given by $\frac{2}{\pi}\left(C_0 T + \sum_{k=1}^{\infty} \left(\frac{T}{k} - \arctan \frac{T}{k}\right)\right)$. This gives us the 'expected number of zeros up to $T$', before we know any data about $E$ other than its conductor. The blue line is what we get when we add in $\sum_{n=1}^{\infty} \frac{c_n}{\log n}\cdot \sin(T\log n)$, which tells us the exact locations of the zeros: they will be found at the places where the blue curve jumps by 2.

Note that the sum over $n$ converges *very* slowly. To produce the above plot, I included terms up to $n=1\,000\,000$, and there is still considerable rounding visible in the blue curve. If I could somehow evaluate the $c_n$ sum in all its infinite glory, then the resulting plot would be perfectly sharp, comprised of flat lines that jump vertically by 2 at the loci of the zeros.


These are two of the uses of explicit formula-type sums in the context of elliptic curve $L$-functions. If you want to read more about them, feel free to pick up my PhD thesis - when it eventually gets published!

by Simon Spicer (noreply@blogger.com) at June 26, 2014 04:23 PM

June 19, 2014

Simon Spicer

The structure of my GSoC code and a short demo

I'm at the point where I can explain the layout of the code that I've written so far, and even demonstrate some of its functionality.

As mentioned in previous posts, the initial goal of my Google Summer of Code project is to implement in Sage functionality that will allow us to bound the analytic rank of an elliptic curve $E$ via certain explicit formula-type sums relating to the $L$-function of $E$. One could certainly achieve this by simply writing a method on the existing elliptic curve class in Sage. However, the machinery we'll be using to bound rank is useful outside of the context of elliptic curves: it allows us to compute  sums over zeros of any $L$-function that comes from a modular form. In fact, so long as

  1. The $L$-function has an Euler product expansion over the primes;
  2. The Euler product coefficients - akin to the $a_p$ values we've mentioned previously - are readily computable; and
  3. The $L$-function obeys a functional equation which allows us to define and compute with the function inside of the critical strip;

then we can use this explicit-formula methodology to compute sums over the zeros of the $L$-function.

All the code is hosted on GitHub, so you can view it freely here (note: this repository is a branch of the main Sage repo, so it contains an entire copy of the Sage source. I'm only adding a tiny fraction of code in a few files to this codebase; I'll link to these files directly below).

To this effect, it makes sense to write a new class in Sage that can ultimately be made to compute zero sums for any motivic $L$-function. This is what I'll be doing. I've created a file zero_sums.py in the sage/lfunctions/ directory, inside of which I'll write a family of classes that take as input a motivic object, and contain methods to compute various sums and values related to the $L$-function of that object.

I'll start by writing two classes: LFunctionZeroSum_abstract, which will contain all the general methods that can apply to a zero sum estimator attached to any motivic $L$-function. The second class, LFunctionZeroSum_EllipticCurve, inherits from the abstract class, and contains the code specific to elliptic curve $L$-functions.
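
In skeleton form the layout is roughly the following; this is a sketch of the structure only, and apart from the names that appear in the demo further down, the method bodies and attribute names here are placeholders.

class LFunctionZeroSum_abstract(object):
    """Zero sum machinery that makes sense for any motivic L-function."""
    def rankbound(self, Delta=1, function="sincsquared"):
        # generic explicit-formula sums over the zeros go here
        raise NotImplementedError

class LFunctionZeroSum_EllipticCurve(LFunctionZeroSum_abstract):
    """The elliptic-curve-specific pieces: coefficients, conductor, etc."""
    def __init__(self, E):
        self._E = E                     # placeholder attribute name
    def elliptic_curve(self):
        return self._E
    def logarithmic_derivative_coefficient(self, n):
        # computed from point counts on the curve modulo primes
        raise NotImplementedError

def LFunctionZeroSum(X):
    """Constructor: dispatch on the type of X (only elliptic curves for now)."""
    return LFunctionZeroSum_EllipticCurve(X)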

I have also added a method to the EllipticCurve_rational_field class in the sage/schemes/elliptic_curves/ell_rational_field.py file (the class modeling elliptic curves over the rationals) to access the analytic rank estimation functionality of the zero sum estimator.

Let's see the code in action. To download a copy yourself, say from within a project in cloud.sagemath.com, open up a terminal and type

~$ git clone git://github.com/haikona/GSoC_2014.git

This will download the code in the repo into a new folder GSoC_2014 in your current directory. CD into that directory, type make and hit enter to build. This will unfortunately take a couple of hours to complete, but the good news is you'll only need to do that once. Alternatively, if you have an existing freshly-built copy of the sage source and good github-fu, you should be able to download just the relevant code and apply it, greatly speeding up the build time.

Once your copy of sage containing the new LFunctionZeroSum code is built, fire up that copy of sage (e.g. type ./sage while in the GSoC_2014 directory; don't just type sage, as this runs the system-wide copy of sage - not the one we want). This command-line interface will have all the functionality of any other current copy of Sage, but with the extra methods and classes I've written.

Let's start by estimating the rank of some elliptic curves:

sage: E = EllipticCurve([0,1,1,-2,0]); E
Elliptic Curve defined by y^2 + y = x^3 + x^2 - 2*x
over Rational Field
sage: E.rank(), E.conductor()
(2, 389)
sage: E.analytic_rank_bound()
2.0375000846

In the code above, I've defined an elliptic curve given by the Weierstrass equation $y^2 +y = x^3+x^2-2x$; for those of you in the know, this is the curve with Cremona label '389a', the rank 2 curve with smallest conductor. I've then printed out the rank of the curve, which is indeed 2. Then I've called my analytic_rank_bound() method on $E$, and it returns a value which is strictly greater than the analytic rank of $E$. Since the analytic rank of an elliptic curve is always a non-negative integer, we can conclude that the analytic rank of $E$ is at most 2.

How long did this rank estimation computation take? Thankfully Sage has a timing function to answer this question:

sage: timeit('E.analytic_rank_bound()')
125 loops, best of 3: 3.33 ms per loop

Neat; that was quick. However, this elliptic curve has a small conductor - 389, hence it's Cremona label - so computing with its $L$-function is easy. The great thing with the zero sum code is that you can run it on curves with really large conductors. Here we reproduce the commands above on the curve $y^2 + y = x^3 - 23737x + 960366$, which has rank 8 and conductor $N \sim 4.6\times 10^{14}$:

sage: E = EllipticCurve([0,0,1,-23737,960366]); E
Elliptic Curve defined by y^2 + y = x^3 - 23737*x + 960366 
over Rational Field
sage: E.rank(), E.conductor()
(8, 457532830151317)
sage: E.analytic_rank_bound()
8.51446122452
sage: timeit('E.analytic_rank_bound()')
125 loops, best of 3: 3.37 ms per loop

So we see that this zero sum code doesn't care very much about the size of the conductor of $E$; this is one of its huge plusses. There are downsides, of course - but we'll save those for another post.

Now let's look at the LFunctionZeroSum code.

sage: E = EllipticCurve([23,-100])
sage: Z = LFunctionZeroSum(E); Z
Zero sum estimator for L-function attached to 
Elliptic Curve defined by y^2 = x^3 + 23*x - 100 
over Rational Field

Right now the class doesn't offer much beyond what you can already access from the EllipticCurve class. For one, we can compute the coefficients of the logarithmic derivative, since they are needed to compute the rank-estimating zero sum. We can also return and/or compute one or two other values associated to the zero sum class. As time goes on we'll flesh out more functionality for this class.

sage: for n in range(1,11):
....:     print(n,Z.logarithmic_derivative_coefficient(n))
....: 
(1, 0)
(2, 0.0)
(3, -1.09861228867)
(4, 0.0)
(5, 1.28755032995)
(6, 0)
(7, -0.277987164151)
(8, 0.0)
(9, -0.366204096223)
(10, 0)
sage: Z.elliptic_curve()
Elliptic Curve defined by y^2 = x^3 + 23*x - 100
over Rational Field
sage: Z.elliptic_curve().rank()
1
sage: Z.rankbound(Delta=1.7,function="gaussian")
1.58647787024
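One natural experiment (a sketch only, using the rankbound() method and parameters shown above; I've omitted the outputs, since they depend on the implementation) is to watch the bound tighten as $\Delta$ grows:

for Delta in [1.0, 1.5, 2.0]:
    print(Delta, Z.rankbound(Delta=Delta, function="gaussian"))

Larger $\Delta$ values give sharper bounds, at the cost of more computation.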

All of this code is of course written to Sage spec. That means that every method written has documentation, including expected input, output, examples, and notes/warnings/references where necessary. You can access the docstring for a method from the command line by typing that method followed by a question mark and hitting enter.

All my code will initially be written in Python. Our first goal is to replicate the functionality that has already been written by Jonathan Bober in C. As such we're not aiming for speed here; instead the focus is to make sure all the code is mathematically correct.

Once that's up and running there are two natural directions to take the project:

  1. Speed the code up. Rewrite key parts in Cython, so that computations run faster, and thus analytic rank can be estimated for curves of larger conductor.
  2. Generalize the code. Write classes that model zero sum estimators for $L$-functions attached to eigenforms (those modular forms with Euler product expansions) of any level and weight, and allow for more types of sums to be computed. Ultimately write code that can be used to compute sums for any motivic $L$-function.

I'm going to attack speeding up the code first, since the elliptic curve rank estimation functionality will probably see the most use. If there's time at the end of the project I can look to generalizing the code as much as I can.

by Simon Spicer (noreply@blogger.com) at June 19, 2014 02:48 PM

The explicit formula and bounding analytic rank from above

Wednesday's post gave some detail on how one can numerically evaluate $L_E(s)$, the $L$-function attached to $E$, at any given complex input $s$. This is certainly the first step in determining the analytic rank of $E$, i.e. the order of vanishing of $L_E(s)$ at the central point $s=1$.

However, we immediately run into two major issues. The first, as mentioned previously, is that we have no proven lower bound on the size of the coefficients of the Taylor expansion of $L_E(s)$ at $s=1$, so we can never truly ascertain if the $n$th derivative there vanishes, or is just undetectably small.

Today's post focuses on a method that at least partially overcomes the second issue: that evaluating $L_E(s)$ and its derivatives takes at least $O(\sqrt{N})$ time, where $N$ is the conductor of $E$. This means that working with elliptic curve $L$-functions directly becomes computationally infeasible if the curve's conductor is too big.

To see how we can bypass this problem, we'll have to talk about logarithmic derivatives. The logarithmic derivative of a function  $f(s)$ is just
$$ \frac{d}{ds} \log(f(s)) = \frac{f^{\prime}(s)}{f(s)}. $$
Logarithmic derivatives have some nice properties. For one, they turn products into sums: if $f(s) = g(s)h(s)$, then $\frac{f^{\prime}}{f} = \frac{g^{\prime}}{g} + \frac{h^{\prime}}{h}$. Because we can write $L_E(s)$ as an Euler product indexed by the primes, we therefore have that
$$ \frac{L_E^{\prime}}{L_E}(s) = \sum_p \frac{d}{ds} \log\left(\left(1-a_p p^{-s} + \epsilon_p p^{-2s}\right)^{-1}\right).  $$
When you multiply this out, you rather amazingly get a Dirichlet series whose coefficients can be described in a very mathematically elegant way:
$$ \frac{L_E^{\prime}}{L_E}(1+s) = \sum_{n=1}^{\infty} c_n n^{-s}, $$
where
$$ c_n = \begin{cases}
-\left(p^m+1-\#E(\mathbb{F}_{p^m})\right)\frac{\log p}{p^m}, & n = p^m \mbox{ a perfect prime power,} \\
0 & \mbox{otherwise.}\end{cases} $$

That is, the $n$th coefficient of $\frac{L_E^{\prime}}{L_E}(1+s)$ is $0$ when $n$ is not a prime power, and when it is, the coefficient relates directly to the number of points on the reduced curve over the finite field of $n$ elements. [Note we're shifting the logarithmic derivative over to the left by one unit, as this puts the central point at the origin i.e. at $s=0$.]
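To make this concrete, here is a minimal sketch of how such a coefficient could be computed in Sage from the trace of Frobenius (an illustration only, not the project's logarithmic_derivative_coefficient() implementation; it assumes good reduction at $p$ and uses the sign convention of the session output in the companion post above):

def log_deriv_coefficient(E, n):
    F = factor(n)
    if len(F) != 1:
        return 0                         # c_n = 0 unless n = p^m is a perfect prime power
    p, m = F[0]
    ap = E.ap(p)                         # a_p = p + 1 - #E(F_p)
    t_prev, t = 2, ap                    # t_k = alpha^k + beta^k, with t_0 = 2, t_1 = a_p
    for _ in range(m - 1):
        t_prev, t = t, ap*t - p*t_prev   # t_k = a_p*t_{k-1} - p*t_{k-2}
    # p^m + 1 - #E(F_{p^m}) = t_m, so c_n = -(p^m + 1 - #E(F_{p^m})) * log(p) / p^m
    return -float(t * log(p) / p**m)

For the curve $y^2 = x^3 + 23x - 100$ used in that post, log_deriv_coefficient(E, 3) comes out to roughly -1.0986, matching the printed value there.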

How does this yield a way to estimate analytic rank? Via something called the Explicit Formula for elliptic curve $L$-functions - which, confusingly, is really more of a suite of related methods than any one specific formula.

Remember the Generalized Riemann Hypothesis? This asserts, in the case of elliptic curve $L$-functions at least, that all nontrivial zeros of $L_E(s)$ lie on the line $\Re(s) = 1$, and two zeros can never lie on top of each other except at the central point $s=1$. The number of such zeros at the central point is precisely the analytic rank of $E$. Each nontrivial zero can therefore be written as $1\pm i\gamma$ for some non-negative real value of $\gamma$ - it turns out that whenever $L_E(1+i\gamma)=0$, then $L_E(1-i\gamma)=0$ too, so noncentral zeros always come in pairs. We believe GRH to be true with a high degree of confidence, since in the 150 years that $L$-functions have been studied, no-one has ever found a non-trivial zero off the critical line.

We can invoke GRH with logarithmic derivatives and something called the Hadamard product representation of an entire function to get another formula for the logarithmic derivative of $L_E(s)$:
$$ \frac{L_E^{\prime}}{L_E}(1+s) = -\log\left(\frac{\sqrt{N}}{2\pi}\right) - \frac{\Gamma^{\prime}}{\Gamma}(1+s) + \sum_{\gamma} \frac{s}{s^2+\gamma^2}. $$
Here $\frac{\Gamma^{\prime}}{\Gamma}(s)$ is the logarithmic derivative of the Gamma function (known as the digamma function; this is a standard function in analysis that we know a lot about and can easily compute), and $\gamma$ runs over the imaginary parts of the zeros of $L_E(s)$ on the critical line $\Re(s)=1$.

What does this get us? A way to equate a certain sum over the zeros of $L_E(s)$ to other more tractable functions and sums. When you combine the two formulae for $\frac{L_E^{\prime}}{L_E}(1+s)$ you get the following equality:

$$ \sum_{\gamma} \frac{s}{s^2+\gamma^2} = \log\left(\frac{\sqrt{N}}{2\pi}\right) + \frac{\Gamma^{\prime}}{\Gamma}(1+s) + \sum_n c_n n^{-s}.$$
The key realization here is that while computing the zeros of $L_E$ is in general nontrivial and time-consuming, all the other quantities in the equation above are readily computable. We can therefore evaluate the sum on the left by computing the quantities on the right to sufficient precision, none of which require knowledge of the exact locations of the zeros of $L_E(s)$.
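As a rough numerical illustration of that statement (a sketch only, using mpmath and a one-argument coefficient helper like the one sketched above, e.g. cn = lambda n: log_deriv_coefficient(E, n); the truncation point n_max is an ad hoc assumption, and as noted below convergence near $s=0$ is slow):

from mpmath import mp, log, pi, sqrt, digamma

def explicit_formula_rhs(s, N, cn, n_max=1000):
    # log(sqrt(N)/(2*pi)) + digamma(1+s) + a truncated Dirichlet sum of the c_n
    total = log(sqrt(N)/(2*pi)) + digamma(1 + s)
    for n in range(2, n_max + 1):
        total += cn(n) * mp.power(n, -s)   # cn(n) is zero unless n is a prime power
    return total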

That equality is the first and perhaps most basic example of an explicit formula for elliptic curve $L$-functions, but it is not the one we want. Specifically, trouble still arises from the fact that we're still dealing with an infinite sum on the right, and when we look close to the central point $s=0$, convergence is unworkably slow. We can, however, work with related explicit formulae that reduce the sum on the right to a finite one.

The formula we'll use in our code can be obtained from the above equation by dividing both sides by $s^2$ and taking inverse Laplace transforms; we can also formulate it in terms of Fourier transforms. Specifically, we get the following. Recall the sinc function $\mbox{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$ with $\mbox{sinc}(0)=1$. Then we have:
$$ \sum_{\gamma} \mbox{sinc}(\Delta\gamma)^2 = Q(\Delta,N) + \sum_{\log n<2\pi\Delta} c_n\cdot\left(2\pi\Delta-\log n\right), $$
where $Q(\Delta,N)$ is a certain quantity depending only on $\Delta$ and $N$ that is easy to compute.

Let's look at the left hand side. Since $\mbox{sinc}(\Delta\gamma)^2$ is non-negative for any $\gamma$, and $\mbox{sinc}(0)^2 = 1$, when we sum over all zeros $\gamma$ we get a value which must be strictly greater than the number of zeros at the central point (i.e. the analytic rank of $E$). Moreover, as we increase $\Delta$, the contribution to the sum from all the noncentral zeros goes to zero, as $\mbox{sinc}(x) \to 0$ as $x \to \infty$.

A graphic representation of the sum on the left side of the equation for the $L$-function attached to the elliptic curve $y^2 = x^3 + 103x - 51$ for three increasing values of the parameter $\Delta$. Vertical lines have been plotted at $x=\gamma$ whenever $L_E(1+i\gamma) = 0$, and the height of each line is given by the black curve $\mbox{sinc}(\Delta x)^2$. Thus summing up the length of the vertical lines gives you the value of the sum on the left side of the equation. We see that as $\Delta$ increases, the contribution from the blue lines - corresponding to noncentral zeros - goes to zero, while the contribution from the central zeros in red remain at 1 apiece. Since there are two central zeros (plotted on top of each other here), the sum will limit to 2 from above as $\Delta$ increases.

Now consider the right hand side of the equation. Note that the sum over $n$ is finite, so we can compute it to any given precision without having to worry about truncation error. Since $Q(\Delta,N)$ is easy to compute, this allows us to evaluate the entire right hand side with (relatively) little fuss. We are therefore able, via this equality, to compute a quantity that converges to the analytic rank $r$ from above as $\Delta \to \infty$.
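A sketch of that finite sum (illustration only; cn is a one-argument coefficient helper as above, and I've left out $Q(\Delta,N)$ since the post doesn't spell out its exact form):

from mpmath import mp, pi, log, exp, floor

def finite_sum(Delta, cn):
    # sum of c_n * (2*pi*Delta - log(n)) over all n with log(n) < 2*pi*Delta
    n_max = int(floor(exp(2*pi*Delta)))
    total = mp.mpf(0)
    for n in range(2, n_max + 1):
        total += cn(n) * (2*pi*Delta - log(n))   # only prime powers contribute
    return total

The exponential growth of n_max in $\Delta$ is exactly the downside discussed below.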

A graphical representation of the finite sum $\sum_{\log n<2\pi\Delta} c_n\cdot\left(2\pi\Delta-\log n\right)$ for the same curve, with $\Delta=1$. The black lines are the triangular function $y = \pm (2\pi\Delta-x)$, whose value at $\log n$ weights the $c_n$ coefficient. Each blue vertical line is placed at $x=\log n$ and has height $c_n\cdot\left(2\pi\Delta-\log n\right)$. Summing up the signed lengths of the blue lines gives the value of the sum over $n$. Note that there are only 120 non-zero terms in this sum when $\Delta=1$, so it is quick to compute.

This formula, and others akin to it, will form the core functionality of the code I will be writing and adapting to include in Sage, as they give us a way to quickly (at least when $\Delta$ is small) show that the analytic rank of a curve can't be too big.

There are some downsides, of course, most notable of which is that the number of terms in the finite sum on the right is exponential in $\Delta$. Tests show that on a laptop one can evaluate the sum in a few milliseconds for $\Delta=1$, a few seconds for $\Delta = 2$, but the computation takes on the order of days when $\Delta=4$. Nevertheless, for the majority of the elliptic curves we'll be looking at, $\Delta$ values of 2 or less will yield rank bounds that are in fact tight, so this method still has much utility.

Another issue that warrants investigation is to analyze just how fast - or slow - the convergence to $r$ will be as a function of increasing $\Delta$; this will allow us to determine what size $\Delta$ we need to pick a priori to get a good estimate out. But this is a topic for a future date; the next post or two will be dedicated toward detailing how I'm implementing this rank estimation functionality in Sage.

by Simon Spicer (noreply@blogger.com) at June 19, 2014 11:36 AM

June 12, 2014

William Stein

SageMathCloud Task Lists

I've added task list functionality to SageMathCloud (SMC), so you can keep track of a list of things to do related to a project, paper, etc. Task lists are saved as files on the filesystem, so they can be backed up in the usual way, automatically generated, etc. I doubt anybody is going to use SMC just for the tasks lists, but for people already using SMC in order to use Sage, write LaTeX documents, use IPython notebooks, etc., having a convenient integrated task list should come in handy.
To create a task list, in a project, click "+New", name the task list, then click the "Task List" button.  Here's what a task list looks like:

The Design

I used the task list quite a bit when implementing the task list, and significantly modified the interface many, many times. I also tried out numerous other todo list programs for inspiration. I eventually settled on the following key design choices, some of which are different than anything I've seen. In particular, the design targets highly technical users, which is not something I saw with any other todo list programs I tried.
  • Markdown editor: The task description is formatted using client-side rendered Github flavored markdown (using marked), including [ ] for checkboxes. I also include full MathJax support, and spent quite a while working around various subtleties of mixing mathjax and markdown. I'll be rewriting Sage worksheets to use this code. The editor itself is Codemirror 4 in markdown mode, so it respects whatever theme you choose, has nice features like multiple cursors, paren matching, vim/emacs modes, etc. Multiple people can edit the same task at once and everybody will see the changes as they are made (note: I haven't implemented showing other cursors.)
  • Relative dates and times: All dates and times are shown relative to right now. If something is due in 20 hours, it says "due about 20 hours from now". I also included a sortable column with the last time when a task was edited, also shown relative to the current time. This display uses the timeago jquery plugin. You can of course click on the due date to see the actual date.
  • Hashtags: As I was implementing (and removing) features such as priorities, a way to indicate which task you are currently working on, folders, etc, I decided that hashtags can provide every feature. Moreover, they are very text editor/programmer friendly. When editing a task, if you put #foo, then it is interpreted as a hashtag, which you can click on to show only tasks containing that tag. To disambiguate with markdown headings, to make a heading you have to include a whitespace, so # foo. I haven't added autocomplete for hashtags yet, but will. You can easily click on hashtags anywhere to select them, or use the bar at the top.
  • User-specified task order: The default sort order for tasks is custom. There is a drag handle so you can explicitly move tasks up and down in order to indicate how important they are to you (or whatever else). You can also click an up hand or down hand to knock the currently selected task to the bottom of the list of displayed tasks.
Of course, I still have an enormous list of functionality I would like to add, but that will have to wait. For example, I need to enable a chat associated to each task list, like the chats associated to Sage worksheets and other files. I also want to make it so one can select a range of tasks and copy them, move them, paste them into another list, etc. It would also be nice to be able to export task lists to HTML, PDF, etc., which should be fairly easy using pandoc.  I'm also considering making a note list, which is like a task list but without the due date or "done" box.  Because of all the infrastructure already there, it would be easy to add code evaluation functionality, thus creating something like worksheets, but from a different point of view (with maybe hashtags determining the Python process).

Databases and Differential Synchronization

One interesting thing I noticed when implementing task lists is that there are many similarities with the original sagenb.org design (and also IPython notebook), in which a worksheet is a list of cells that get evaluated, can be manually reordered, etc. Similarly, a task list is a list of "cells" that you edit, manually reorder, etc. With sagenb we had longstanding issues involving the order of each cell and assigning an integer numerical id (0, 1, 2, ...) to the cells, which resulted in things like cells seemingly getting randomly reordered, etc. Also, having multiple simultaneous viewers with automatic synchronization is very difficult with that model. For task lists, I've introduced some new models to address these issues.

A natural way to store a task list is in a database, and I initially spent some time coming up with a good database schema and implementing basic lists using Cassandra for the backend. However, I couldn't pursue this approach further, since I was concerned about how to implement realtime synchronization, and also about the lack of easily backing up complete task lists via snapshots, in git, etc. So instead I created an "object database" API built on top of a file that is synchronized across clients (and the server) using differential synchronization. The underlying file format for the database is straightforward -- there is one line in JSON format for each record in the database. When objects are changed, the file is suitably modified, synchronized to other clients, and events are triggered.

Since differential synchronization has no trouble with files that have "a few thousand lines", this approach works well for our purposes (since personal or shared todo lists are typically fairly short). Also, having one line per record is at least more git friendly than something like a sqlite database. I'm considering rewriting my implementation of IPython notebook sync on top of this abstraction.
Since I view the task list as a database, each task has a globally unique uuid. Also, instead of viewing the task order as being defined by an integer 0,1,2,3, which leads to all manner of bugs and programming misery, race conditions, etc., instead we view the order as being determined by floating point positions. So to insert a task between tasks with positions 1 and 2, we just give the task position 1.5.
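Both ideas are easy to illustrate with a tiny sketch (hypothetical record fields, not SMC's actual schema or code):

import json

# One JSON object per line: line-based diffs synchronize well and are git-friendly.
tasks = [
    {"uuid": "task-a", "desc": "Write blog post", "position": 1.0, "done": False},
    {"uuid": "task-b", "desc": "Add hashtag autocomplete", "position": 2.0, "done": False},
]
with open("example.tasks", "w") as f:
    for record in tasks:
        f.write(json.dumps(record) + "\n")   # one line per database record

# To insert a task between two neighbours, give it the midpoint position:
tasks.append({"uuid": "task-c", "desc": "New task", "position": (1.0 + 2.0) / 2, "done": False})
tasks.sort(key=lambda t: t["position"])      # display order is just a sort on the float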

by William Stein (noreply@blogger.com) at June 12, 2014 01:31 PM

June 11, 2014

Simon Spicer

How to compute with elliptic curve L-functions and estimate analytic rank

In a previous post I defined what an elliptic curve $L$-function $L_E(s)$ is, and indicated why the behaviour of the function at the central point $s=1$ is of such great interest: the order of vanishing of $L_E(s)$ at $s=1$ is conjecturally precisely equal to the algebraic rank of $E$.

Because this equality is still conjectural, we will call the former quantity -- the number of derivatives of $L_E(s)$ that vanish at $s=1$ -- the analytic rank of $E$. The topic of this post is to address the question: given an elliptic curve $E$, how do we go about computing its analytic rank?

Before we can hope to answer this question, we need to know how to evaluate $L_E(s)$ itself for any given $s$. In the previous post I gave both the Euler product and Dirichlet series definitions for $L_E(s)$; to jog your memory, here's the Euler product of $L_E(s)$:
$$ L_E(s) = \prod_p \left(1-a_p p^{-s} + \epsilon_p p^{-2s}\right)^{-1}, $$
where the product runs over all prime numbers, $a_p = p+1-\#E(\mathbb{F}_p)$, and $\epsilon_p = 0$ if $p$ divides the conductor of $E$ and $1$ otherwise. The Dirichlet series is $L_E(s) = \sum_{n}a_n n^{-s}$, which is precisely what you get when you multiply out the Euler product. Note that we are suppressing the dependence on $E$ in both the $a_n$ and $\epsilon_p$ constants.

However, both the Euler product and Dirichlet series representations of $L_E(s)$ will only converge absolutely when the real part of $s$ exceeds $\frac{3}{2}$. Although the Sato-Tate Conjecture (now a theorem, but the name has stuck) implies that the expansions will in fact converge conditionally for $\Re(s)>\frac{1}{2}$, the convergence is so slow that attempting to evaluate $L_E(s)$ near the central point by multiplying or summing enough terms is horribly inefficient. As such, we need a better way to evaluate $L_E(s)$ - and thankfully, such better ways do indeed exist.

Remember how I mentioned in the previous post that $L$-functions obey a very nice symmetry condition? Well, here's that condition exactly: first, we need to define something called the completed $L$-function $\Lambda_E(s)$. This is just $L_E(s)$ multiplied by some extra factors. Specifically,
$$\Lambda_E(s) = N^{\frac{s}{2}}(2\pi)^{-s}\Gamma(s) L_E(s), $$
where $N$ is the conductor of $E$ and $\Gamma(s)$ is the usual Gamma function on $\mathbb{C}$ (the one that gives you $(n-1)!$ when you evaluate it at the positive integer $s=n$).

We can show that $\Lambda_E(s)$ is entire on $\mathbb{C}$; that is, it doesn't have any poles. Moreover, $\Lambda_E(s)$ obeys the glorious symmetry property
$$ \Lambda_E(s) = w_E \Lambda_E(2-s), $$
where $w_E$ is either $1$ or $-1$ and depends on $E$. This is called the functional equation for the $L$-function attached to $E$.

Another way to put it is that the shifted completed $L$-function $\Lambda_E(1+s)$ is either an even or an odd function of $s$. Because the factors $N^{\frac{s}{2}}$, $(2\pi)^{-s}$ and $\Gamma(s)$ are all readily computable, this allows us to determine the value of $L_E(s)$ when the real part of $s$ is less than $\frac{1}{2}$.
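Concretely, unwinding the definition of $\Lambda_E(s)$ in the functional equation gives
$$ L_E(s) = w_E \left(\frac{\sqrt{N}}{2\pi}\right)^{2-2s}\frac{\Gamma(2-s)}{\Gamma(s)}\, L_E(2-s), $$
so values of $L_E$ to the left of the critical strip are determined by values to the right, where the Dirichlet series converges.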

What's left, then, is to figure out how to efficiently evaluate $L_E(s)$ in the strip $\frac{1}{2} \le \Re(s) \le \frac{3}{2}$. This is called the critical strip for the $L$-function, and it is here that the behaviour of the function is most interesting.

[Aside: we can, for example, show that $\Lambda_E(1+s)$ is never zero outside of the critical strip. The Generalized Riemann Hypothesis in fact asserts that elliptic curve $L$-functions are only ever zero along the exact center of this strip, the critical line $\Re(s)=1$. We'll get back to the zeros of elliptic curve $L$-functions in a later post.]

To evaluate $\Lambda_E(s)$ in the critical strip, we make use of the modularity of $E$. The modularity theorem states that elliptic curves are modular: for every elliptic curve $E$ over the rationals there exists a weight 2 cuspidal eigenform $f_E$ of level $N$ (where $N$ is precisely the conductor of $E$), such that the $a_n$ of $E$ as defined previously equal the Fourier coefficients of $f_E$. If you haven't studied modular forms before, the crucial piece of knowledge is that there is a natural procedure for constructing $L$-functions from modular forms in such a way that the definition makes sense for all complex inputs $s$, and that the $L$-function attached to the cusp form $f_E$ will exactly equal the elliptic curve $L$-function $L_E(s)$. This is in fact how we show that elliptic curve $L$-functions can be analytically continued to all of $\mathbb{C}$.

The take-away is that via modularity there is an infinite sum representation for $L_E(s)$ which converges absolutely for all $s$. Here it is: define the auxiliary function
$$\lambda_E(s) = \left(\frac{\sqrt{N}}{2\pi}\right)^{s} \sum_{n=1}^\infty a_n n^{-s}\Gamma \left(s,\frac{2\pi}{\sqrt{N}}\cdot n\right), $$
where all the quantities are as defined previously, and $\Gamma(s,x)$ is the upper incomplete Gamma function (note that since $\Gamma(s,x)$ goes to zero exponentially as $x$ goes to infinity, this sum will converge absolutely for any $s$, with rate of convergence scaling with $\sqrt{N}$). Then we have
$$ \Lambda_E(s) = \lambda_E(s) + w_E \lambda_E(2-s). $$
Because we know how to compute incomplete Gamma functions, this gives us a way to evaluate $\Lambda_E(s)$, and thus $L_E(s)$, in the critical strip. The upside with this formula and variations thereof is that it works for any value of $s$ you stick in - including values near $s=1$. Similar formulae exist for the derivatives of $L_E(s)$, so we can in theory compute $L_E^{(n)}(1)$, the $n$th derivative of $L_E(s)$ at the central point, to any degree of accuracy for any $n$.
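Here is a minimal numerical sketch of the $\lambda_E(s)$ sum (an illustration only: it uses mpmath's upper incomplete Gamma function, the number of terms is an ad hoc choice that should grow like a few multiples of $\sqrt{N}$, and the $a_n$ must be supplied, e.g. via Sage's E.anlist()):

from mpmath import mp, pi, sqrt, power, gammainc

def lambda_E(s, an, N, terms):
    # an[n] holds the Dirichlet coefficient a_n (an[0] is unused)
    A = 2*pi/sqrt(N)
    total = mp.mpf(0)
    for n in range(1, terms + 1):
        total += an[n] * power(n, -s) * gammainc(s, A*n)   # Gamma(s, 2*pi*n/sqrt(N))
    return power(sqrt(N)/(2*pi), s) * total

# Lambda_E(s) = lambda_E(s, an, N, terms) + w_E * lambda_E(2 - s, an, N, terms)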

Thus if we want to compute a curve's analytic rank, what's stopping us from just evaluating successive derivatives of $L_E(s)$ at $s=1$ until we hit one which is not zero?

Two reasons. The first is that there's no way around the fact that you need about $\sqrt{N}$ terms to compute $L_E(s)$ or its derivatives to decent precision. If the conductor of the curve is too big, as is often the case, it takes an inordinate amount of time to simply evaluate the $L$-function near the central point. This makes direct evaluation impractical for all but the smallest-conductor curves -- and for those curves we can usually compute rank via other methods anyway.

The second reason is a more subtle one: how do you tell numerically if the $n$th derivative of $L_E(s)$ at $s=1$ is zero? If you think about it, it's easy to answer this question in one direction only: if you evaluate $L_E^{(n)}(1)$ to some precision and get digits that aren't all zero, then (assuming your code is correct) the $n$th derivative of $L_E(s)$ does not vanish. However, no matter how many digits of precision we compute, if they all come out zero the possibility always remains that the next digit along might not be zero.

In general, there is no numerical way to determine if the value of a black-box complex-valued function at a point is zero, or just really really close to zero. This is why, if you look in the literature, you'll find "methods to estimate analytic rank", but never to compute the quantity exactly. It's impossible to do without extra knowledge. Specifically, we'd need to have some sort of computable lower bound on the size of the derivatives of $L_E(s)$ at the central point. Unfortunately, no such theorems currently exist, so we're stuck with estimating analytic rank for now.

Thankfully, the situation with the $\sqrt{N}$-dependence is more hopeful. The next post will detail a method that provides good estimates for the analytic rank and that scales much more slowly with the curve's conductor.

by Simon Spicer (noreply@blogger.com) at June 11, 2014 04:57 PM

Mathematical background: elliptic curves and L-functions

Day 1 of my Google Summer of Code project! I will try post updates at least once a week; I thought a good starting point would be to give an introduction to the mathematical objects that this project revolves around. The next post will then give a more detailed description of the project itself, and the structure of the code that I'm going to adapt, write and include in Sage.

To that end, for this post I will assume some knowledge of elementary number theory, algebra and complex analysis, but nothing more complicated than that.

Let $E$ be an elliptic curve over the rational numbers. We can think of $E$ as the set of rational solutions $(x,y)$ to a two-variable cubic equation in the form:
$$ E: y^2 = x^3 + Ax + B $$
for some integers $A$ and $B$, along with an extra "point at infinity". An important criterion is that $E$ be a smooth curve; this translates to the requirement that the discriminant of the curve, given by $-16(4A^3+27B^2)$, is not zero.

One of the natural questions to ask when considering an elliptic curve is "how many rational solutions are there?" It turns out elliptic curves fall in that sweet spot where the answer could be zero, finitely many or infinitely many - and figuring out which is the case is a deeply non-trivial problem.

The rational solutions form an abelian group with a well-defined group operation that can be easily computed. By a theorem of Mordell, the group of rational points on an elliptic curve $E(\mathbb{Q})$ is finitely generated; we can therefore write
$$ E(\mathbb{Q}) \approx T \times \mathbb{Z}^r,$$
where $T$ is a finite group (called the torsion subgroup of $E$), and $r$ is called the algebraic rank of $E$.

The elliptic curve group law explained visually: three points in a straight line add to zero; because the point at infinity is the identity element, this means that the sum R of two points P and Q is the reflection about the real axis of the other point on the curve on the straight line connecting P and Q.

Determining the torsion subgroup of $E$ is a relatively straightforward endeavor. By a theorem of Mazur, rational elliptic curves have torsion subgroups that are (non-canonically) isomorphic to one of precisely fifteen possibilities: $\mathbb{Z}/n\mathbb{Z}$ for $n = 1$ through $10$ or $12$; or $\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/2n\mathbb{Z}$ for $n = 1$ through $4$. Computing the rank $r$ - the number of independent rational points on $E$ - is the hard part, and it is towards this end that this project hopes to contribute.
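In Sage this dichotomy is easy to see (a quick sketch using standard built-in methods; the curve is one of the three plotted further down this post, and I've omitted the outputs):

E = EllipticCurve([-16, 16])     # y^2 = x^3 - 16x + 16
print(E.torsion_subgroup())      # the finite part T: quick to compute
print(E.rank())                  # the rank r: this is the hard part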

Perhaps surprisingly, we can translate the algebraic problem of finding rational solutions to $E$ to an analytic one - at least conjecturally. To understand this we'll need to know what an elliptic curve $L$-function is. These are holomorphic functions defined on the whole complex plane that somehow encode a great deal of information about the elliptic curve they're attached to.

The definition goes as follows: for each prime $p$, count the number of solutions to the elliptic curve equation modulo $p$; we'll call this number $N_p(E)$. Then define the number $a_p(E)$ by
$$ a_p(E) = p+1 - N_p(E). $$
Hasse's Theorem states that $a_p(E)$ is always less than $2\sqrt{p}$ in magnitude for any $p$, and the Sato-Tate conjecture (recently proven by Taylor et al) states that for a fixed elliptic curve, the $a_p$ (suitably transformed) are asymptotically distributed in a semi-circular distribution about zero.
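For small primes the definition can be checked by brute force (a minimal Python sketch, illustration only; real implementations use far faster point-counting algorithms, and here the point at infinity is counted as a solution):

def a_p(A, B, p):
    # number of y in F_p with y^2 = r, for each residue r
    sqrt_count = {}
    for y in range(p):
        r = (y * y) % p
        sqrt_count[r] = sqrt_count.get(r, 0) + 1
    N_p = 1                                    # the point at infinity
    for x in range(p):
        rhs = (x*x*x + A*x + B) % p
        N_p += sqrt_count.get(rhs, 0)
    return p + 1 - N_p

# e.g. a_p(23, -100, 5) == -4 for the curve y^2 = x^3 + 23x - 100 used in an earlier post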

Next, for a given $p$ define the local factor $L_p(s)$ to be the function of the complex variable $s$ as follows:
$$ L_p(s) = \left(1-a_p(E)p^{-s} + \epsilon(p)p^{-2s}\right)^{-1}, $$
where $\epsilon(p)$ is 0 if $p$ divides the conductor of $E$, and 1 otherwise.

The conductor of $E$ is a positive integer that encodes the bad reduction of $E$: for a finite list of primes $p$, when we reduce the curve modulo $p$, we don't get a smooth curve but rather a singular one instead. The conductor is just the product of these primes to the first or second power (or in the cases $p=2$ or $3$, up to the eighth and fifth powers respectively). If you're unfamiliar with elliptic curves, the thing to note is that the conductor always divides the discriminant of $E$, namely the quantity $-16(4A^3+27B^2)$ mentioned previously.

Finally we can define the $L$-function attached to $E$:
$$ L_E(s) = \prod_p L_p(s) = \prod_p \left(1-a_p(E)p^{-s} + \epsilon(p)p^{-2s}\right)^{-1}. $$
The above representation of $L_E(s)$ is called the Euler product form of the $L$-function. If we multiply out the terms and use power series inversion we can also write $L_E(s)$ as a Dirichlet series:
$$ L_E(s) = \sum_{n=1}^{\infty} a_n(E) n^{-s}, $$
where for non-prime $n$ the coefficients $a_n$ are defined to be exactly the integers you get when you multiply out the Euler expansion.

If you do some analysis, using Hasse's bound on the size of the $a_p(E)$ and their distribution according to Sato-Tate, you can show that the above two representations only converge absolutely when the real part of $s$ is greater than $\frac{3}{2}$, and conditionally for $\Re(s)>\frac{1}{2}$. However, the modularity theorem states that these elliptic curve $L$-functions can actually be analytically continued to the entire complex plane. That is, for every elliptic curve $L$-function $L_E(s)$ as defined above, there is an entire function on $\mathbb{C}$ which agrees with the Euler product/Dirichlet series definition for $\Re(s)>1$, but is also defined - and readily computable - for all other complex values of $s$. This entire function is what we actually call the $L$-function attached to $E$.

The way we analytically continue $L_E(s)$ yields that the function is highly symmetric about the line $\Re(s)=1$; moreover, because the function is defined by real coefficients $L_E(s)$ also obeys a reflection symmetry along the real axis. The point $s=1$ is in a very real sense therefore the central value for the $L$-function. It thus makes sense to investigate the behaviour of the function around this point.

Because $L_E(s)$ is entire, it has a Taylor expansion at any given point. We can ask what the Taylor expansion of $L_E(s)$ about the point $s=1$ is, for example. One of the central unproven conjectures in modern-day number theory is the Birch and Swinnerton-Dyer Conjecture: that the order of vanishing of the Taylor expansions of $L_E(s)$ about the point $s=1$ is precisely $r$, the algebraic rank of the elliptic curve $E$. That is, if we let $z=s-1$ so that
$$ L_E(s) = c_0 + c_1 z + c_2 z^2 + \ldots $$
is the expansion of $L_E(s)$ about the central point, the BSD conjecture holds that $c_0$ through $c_{r-1}$ are all zero, and $c_r$ is not zero.

The values of three elliptic curve L-functions along the critical line $1+it$ for $-10 \le t \le 10$. Blue corresponds to the curve $y^2 = x^3 - 13392x - 1080432$, a rank 0 curve, red is that of $y^2 = x^3 - 16x + 16$, a rank 1 curve, and green is $y^2 = x^3 - 3024x + 46224$, a rank 2 curve. Note that close to the origin the graphs look like a non-zero constant function, a straight line through the origin and a parabola respectively.

Thus if we can compute the order of vanishing of the elliptic curve $L$-function at the central point, we can at least conjecturally compute the rank of the curve. This converts an algebraic problem into a numerical one, which is perhaps more tractable.

The techniques we'll use to attempt to do this will be the subject of the next blog post. Unfortunately there are still plenty of challenges to this approach - the least of which boils down to: how do you numerically determine if the value of a function is zero, or just really, really close to zero? The short answer is that without theorems to back you up, you can't -- but we can still make considerable progress toward the problem of computing an elliptic curve's rank.

by Simon Spicer (noreply@blogger.com) at June 11, 2014 02:48 PM

June 10, 2014

Simon Spicer

How to develop for Sage using Sage Math Cloud
 and Git

I will be using Sage Math Cloud to write all the code for my Google Summer of Code project. The advantage of choosing the cloud-based route should be clear: you don’t need to install or store anything locally, and it frees you up to write code on any device with a web browser. The downside, as Sage Math Cloud is still new, is that tutorials and how-tos on the setup process are a bit lacking. Since I couldn’t find any single unified source on how to do it, I thought I might create a step-by-step guide on getting set up in Sage Math Cloud to write code for inclusion in the Sage codebase.

Much of what I've done follows the Git the Hard Way tutorial for Sage development. However, there are significant departures. I recommend reading through that tutorial first; if you do then what follows below will be a piece of cake.

First, you’ll need an account at cloud.sagemath.com. Good news: the whole service is completely free to use, and there’s no annoying back-and-forthing via emailed confirmation links. It’ll take you two minutes tops.

The cloud.sagemath.com homepage.

Your work on your Sage Math Cloud account is project-based; each account can have a large number of projects associated to it. Each project acts like its own Linux-based file system, inside of which you can create and edit any number of files of any type. Create a project in which you'll house your code.

The project root view, before any files have been created.

Next, you'll want a local installation of Sage. We're going to initiate a local copy of the Sage master repo, as hosted on GitHub. Click New --> Terminal to open up a new terminal window - we'll be doing everything via terminal commands. Then type the following three commands:

~$ git clone git://github.com/sagemath/sage.git

~$ cd sage

~/sage$ make

The first will download a copy of the Sage codebase into the new folder 'sage' (it should appear in the project root directory after the download is complete); the second changes into that directory; and the third initiates the compilation of the Sage source.
Note that the building of Sage usually takes about 3 hours, so get this going and then kick back and take inspiration from this.

While your code is compiling you'll want to make sure you have a GitHub account. If you don't, follow the steps there to create one. While not required, hosting your projects on GitHub is highly recommended: it will allow multiple people to work on a project simultaneously, as well as giving you access to sophisticated revision control functionality that will prevent you from accidentally deleting code or data.

GitHub repo creation page.

Create an empty repository - we'll fill it with things later. So long as the repo exists and you know its name you're good to go.

The next thing we'll want to do is register the Sage Trac repository, where all the active tickets are hosted for Sage development. Once you're done creating an empty repo on your GitHub account, go back to your Sage Math Cloud project and wait for Sage to finish building. When this is done, in the terminal window type the following:



~/sage$ git remote add trac git@trac.sagemath.org:sage.git -t master
~/sage$ git remote -v
origin  git://github.com/sagemath/sage.git (fetch)
origin  git://github.com/sagemath/sage.git (push)
trac    git@trac.sagemath.org:sage.git (fetch)
trac    git@trac.sagemath.org:sage.git (push)



The git remote add command registers the Sage trac repo as a remote repository (trac is the name it is registered as locally; you can of course call it anything you want). The git remote -v command simply lists what repos you have now registered. We won't be using Sage trac for now. If you want more practice with trac, see the Git the Hard Way tutorial mentioned previously.

Note the inclusion of the word master in the command above; this means we will only track the master branch of the remote repo. A branch is an instanced copy of the codebase which you can work on and edit without altering other branches. The master branch should be kept pristine, so we will want to create and switch to a new branch. This is accomplished with the git checkout command:


~/sage$ git checkout -b demo
Switched to a new branch 'demo'
~/sage$ git branch
* demo
  master


The -b parameter creates the branch, while checkout switches to that branch. Typing git branch then shows you what branches currently exist locally, as well as the one you're currently on.

We're now in the position to register the demo branch with our fresh repo on GitHub:


~/sage$ git remote add demo https://github.com/haikona/demo.git
~/sage$ git remote -v
demo    https://github.com/haikona/demo.git (fetch)
demo    https://github.com/haikona/demo.git (push)
origin  git://github.com/sagemath/sage.git (fetch)
origin  git://github.com/sagemath/sage.git (push)
trac    git@trac.sagemath.org:sage.git (fetch)
trac    git@trac.sagemath.org:sage.git (push)


And finally, we can push to our repo to sync the local copy therewith. For this you will need your GitHub password - so not just any old Harry can push to your repo.


~/sage$ git push demo demo
Username for 'https://github.com': haikona
Password for 'https://haikona@github.com':
Counting objects: 4, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 317 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To https://github.com/haikona/demo.git
   2b1d88b..21cddb8  demo -> demo

You're now synced and ready to start writing code! From hereon you should be able to follow one of the many tutorials on working with git; the hard part - setting up - is all done.

by Simon Spicer (noreply@blogger.com) at June 10, 2014 03:21 PM

June 05, 2014

William Stein

The Official Go Tutorial as a (41-page) SageMathCloud worksheet


Do you like using interactive SageMathCloud worksheets and want to learn the basics of the Go Programming language? I've added a %go magic to SMC worksheets, and translated the official Go tutorial into a single long worksheet.
  1. Open a SageMathCloud project and restart the project server if necessary (project settings --> Restart).
  2. Click +New and paste in this URL (then press enter): https://github.com/sagemath/cloud-examples.git
  3. You'll get a large collection of example worksheets in a directory cloud-examples. Navigate to the "Go" subdirectory and open go.sagews.
You can also directly browse a PDF of the worksheet here: https://github.com/sagemath/cloud-examples/blob/master/go/go.pdf?raw=true


by William Stein (noreply@blogger.com) at June 05, 2014 06:42 PM

June 02, 2014

William Stein

Update to SageMathCloud - Codemirror 4.2, etc.

I've made the following updates to https://cloud.sagemath.com:

User interface changes (refresh your browser to get)

Sage Worksheets

  • Don't show spinner unless computation is taking more than a second.
  • Streamlined evaluation a little bit (never get stuck with the grey spinner when you're online)
  • 2d graphics now display using svg by default, since browser bugs have been fixed

Upgrade from Codemirror 3.20 to CodeMirror version 4.2, which is much better, faster, and has new features:

  • Multiple cursors (use control or command+click to create multiple cursors)
  • Sublime keyboard bindings
  • New editor features (you can turn these off in account settings):
    • Auto brackets: automatically close brackets, quotes, etc.
    • Show trailing whitespace: show spaces at ends of lines
Here's a 2-minute screencast that illustrates some of the above UI features: http://youtu.be/ykb12MGHOuk

Task Lists: There is now a preliminary task list implementation. To use it, make a file that ends in .tasks.

  • Task editing now uses full screen width
  • Fixed task deletion bug
  • Markdown list autocomplete

Backend changes (restart project server to get):

  • Automatically delete project log entries when the log grows beyond 7500 lines. In some cases of very active projects, the log would get enormous (say 30MB+) and just loading it would slow down everything for a while.
  • Clear a certain logfile that was getting huge whenever the project server is restarted.

by William Stein (noreply@blogger.com) at June 02, 2014 11:07 AM

May 31, 2014

Nikhil Peter

Sage Android Week 2-Making JSON parsing less painful,the Gson way

The second week of GSoC involved a lot of staring at JSON replies, discarded code and unnecessary classes, but I am reasonably pleased with the final product.

The main agenda during the week was to redo the JSON parsing code (which currently uses the default Android implementation) with a serializer/deserializer, in this case Gson. Hence, the main requirements were:

  • Write the model Java classes corresponding to the replies from the Sage server.
  • Add appropriate getters,setters and utility functions to said classes.
  • Basic testing.

Why use Gson ?

Consider a JSON reply from the server, in this case the status reply:

{
"content": {
"execution_state": "busy"
},
"msg_id": "42c63936-be03-47a3-9892-b48af8246a33",
"parent_header": {
"msg_id": "92d67417-ded1-4460-a852-9903d392c50d",
"session": "ae7cc4cb-8139-455a-b041-b4ffa44a8882",
"username": "",
"msg_type": "execute_request"
},
"header": {
"msg_id": "42c63936-be03-47a3-9892-b48af8246a33",
"username": "kernel",
"date": "2014-05-31T08:05:17.951510",
"session": "b86fbd1b-2a1c-456b-b745-504ad2f79f2f",
"msg_type": "status"
},
"metadata": {},
"msg_type": "status"
}

Now, normal Java code for obtaining, for example, the msg_id in the “header” field would look like this:

try{

JSONObject response = new JSONObject(str); //str contains the response from server in string format.

String msg_id= response.getJSONObject("header").getString("msg_id");

}catch(JSONException e){
e.printStackTrace();
}

This works passably well in most cases, but there are a few caveats.

  1. We have to catch what is, in most cases, a useless exception, since all JSON operations throw JSONException.
  2. If the nesting is deep, we will have to chain a lot of method calls
  3. We are using raw strings (“foo”) everywhere, with a high risk of human errors like spelling mistakes.
  4. This is too much work to obtain one value, and even more when considering that this is one of the simplest replies from the server.

Now, using GSON, we could do the following:

StatusReply.java

class StatusReply{

private String msg_id;
private String msg_type;
private Header header;
private Header parent_header;
private MetaData metadata;
private StatusContent content;

// +Getters and Setters

public static class Header{
private String msg_id;
private String session;
private String username;
private String msg_type;

//+Getters and Setters
}

public static class StatusContent{
private String execution_state;

//+Getters and Setters
}

public static class MetaData{
//Empty for now
}

}

And then we simply call:

 StatusReply reply = new Gson().fromJson(str,StatusReply.class);
String msg_id=reply.getHeader().getMessageID();

And that’s basically it.

That one line converts the entire JSON string into a POJO (Plain Old Java Object). As we can see, it is far more concise, useful and pleasing to the eye. No more raw Strings floating around, and no more unnecessary exception handling.

Now, the Sage server requests & replies are fairly consistent in some fields but wildly different in others. This presented a design challenge wherein I had to find common elements amongst the replies to group into a base class, and extend the individual replies from this base class. Due to the inherent differences in the various replies, I feel I was only partially successful in doing so, but with some tweaking I managed to bring down the number of model classes from an initial 30ish (!) to a much more manageable 13.

The only major trouble I had was with the reply to an @interact input:

{
    "content": {
    "data": {
    "application\/sage-interact": {
    "new_interact_id": "17bc0888-c7ed-4aad-877d-b230aca49943",
    "controls": {
    "n": {
    "update": true,
    "raw": true,
    "control_type": "slider",
    "display_value": true,
    "values": [
    "1",
    "2",
    "3",
    "4",
    "5",
    "6",
    "7",
    "8",
    "9",
    "10"
    ],
    "default": 0,
    "range": [
    0,
    9
    ],
    "subtype": "discrete",
    "label": "n",
    "step": 1
    }
    },
    "readonly": false,
    "locations": null,
    "layout": [
    [
    [
    "n",
    1
    ]
    ],
    [
    [
    "_output",
    1
    ]
    ]
    ]
    },
    "text\/plain": "Sage Interact"
    },
    "source": "sagecell"
    },
    "msg_id": "ad34a0c6-8109-4314-85e7-698ba6704bbf",
    "parent_header": {
    "msg_id": "92d67417-ded1-4460-a852-9903d392c50d",
    "session": "ae7cc4cb-8139-455a-b041-b4ffa44a8882",
    "username": "",
    "msg_type": "execute_request"
    },
    "header": {
    "msg_id": "ad34a0c6-8109-4314-85e7-698ba6704bbf",
    "username": "kernel",
    "date": "2014-05-31T08:05:17.955924",
    "session": "b86fbd1b-2a1c-456b-b745-504ad2f79f2f",
    "msg_type": "display_data"
    },
    "metadata": {},
    "msg_type": "display_data"
    }

We can see that the msg_type, msg_id, header, parent_header and content fields all remain unchanged, but the rest are vastly different. However, deserializing these with Gson is still relatively simple, except for one part, viz. the “n” object. This key is dynamic, since it depends on the variables the user has used in their input, so it’s not really possible to make a class for it. However, Gson provides a solution to this problem in the form of a custom deserializer. A few (~150) lines of code later, and this was done.

What seems easy in writing was not so in actual implementation. There are a number of ways I could have gone about this (and I tried most of them), and this was the only way which seemed natural and did not rely on hacks to obtain the reply. This took most of 4 days to do, and I finalized the classes & methods in the last two days.

I’ve never needed to write a custom deserializer before, so that was an exciting and informative challenge.

Advantages

Why did I just add a bunch of classes which currently serve little purpose other than to act as templates for the JSON replies?

  • We’ve now separated the design from the main code logic. If the Sage server responses ever change in the future, all we have to do is change the appropriate fields in the models, with little to no change in the main code.
  • It is much faster (to write and to process) JSON using a serializer/deserializer. Gson isn’t the fastest (it is, in fact, one of the slowest); that distinction belongs to Jackson, which is a whopping 3 orders of magnitude faster, depending on the type of data to be parsed. Here I chose Gson because:
    • I’m more familiar with it
    • It is arguably more popular and widespread than Jackson, and is used in a wider variety of Open Source libraries and projects.
    • It’s maintained by Google(and is also open source). Nuff said.
  • It removes unnecessary processing and functions from the main logic, which otherwise have no purpose other than to parse the responses.

Future Agenda

Next week’s goals include:

  • Replace the default HttpClient (Android has either the Apache-based client or the Java one, both of which I personally find too cumbersome) with something better and easier to use. This “something” is what I STILL haven’t got around to finalizing, but there are a few options:
    1. Volley: Debuted in Google I/O last year, highly memory efficient (since it’s designed by Google) but lacking in everything else: documentation, proper support etc etc.
    2. Retrofit: One of the most popular libraries for HTTP requests, the only problems being I haven’t personally used this before and it’s annotation based, which I’m not 100% sure about.
    3. Other misc (mostly) asynchronous libraries: Things which come to mind are async-http,Ion,RoboSpice etc etc.
  • Replace the current websocket library. This is something I HAVE decided on: it’s gonna be AndroidAsync, which is pretty popular and well maintained (the author is one of the lead developers at CyanogenMod).
  • Use this and Gson together and remove all the older code.

I hope I can get this done in a week, but since I am reasonably ahead at the moment w.r.t my proposal, I do have some extra time available.

I’ll have a chat with Volker about the library and hopefully together we can come to a decision on what to use.

Thats all from me, thanks for reading!

by hav3n at May 31, 2014 01:51 PM

May 21, 2014

Nikhil Peter

GSoC 2014 – The First Week

It’s the first week of GSoC, and so I decided that maybe I should stop procrastinating and update this blog.

Here we go then!

Agendas during this week include:

  • Get familiar with the remaining code (mostly done)
  • Refactor the code into packages so it’s more readable and developer friendly.
  • Gradle Migration:
    • The problem with migrating to gradle is that the entire project structure would have to be changed, which would make it incompatible with Eclipse. For the time being, I’ve decided to keep it Gradle and Eclipse compatible, which will involve some duplication of effort, but nothing too major.
  • Error Reporting:ACRA
    • Error reporting on the client side is usually limited to two choices: Google’s own Analytics, which is more metric-centric than crash-report oriented, or ACRA, which is much simpler to use and implement.
    • Having used ACRA before, it was my natural choice…however there are some limitations, especially for an Open Source project such as this:
      1. The error reports must be sent somewhere. If the project was closed source, the natural choice would be to send it to the dev’s email address or to some script accessible to the developer.
      2. Ideally we would need something like OctoCrash ,which could directly post the crash as a GitHub issue. Unfortunately, this is only for iOS, so it’s not an option for us.
  • Currently work is mostly on the CellDialog which adds a new cell. Features to be added here include:
    • More intuitive group selection
    • Add an option to delete cells from the CellListFragment directly(also implement multiple deletions), without having to go into the SageActivity each time to do so.
    • Currently, the only option is to edit the cells(apart from the options in overflow menu), so it would be better to have the delete and edit options in a CAB(Contextual Action Bar), to better conform to Android Guidelines.

by hav3n at May 21, 2014 08:05 AM

May 15, 2014

David Horgan (quantumtetrahedron)


This week I have continued working on the spectrum of the Ricci operator for a monochromatic 4-valent node dual to an equilateral tetrahedron. This blog post reports on work in progress.

In my last post I indicated how far I had got in porting the code for the monochromatic 4-valent node from Mathematica to sagemath. This is essentially complete now and I have just got to output a graph of eigenvalues of curvature versus spin.

Sagemath code for the spectrum of the Ricci operator for a monochromatic 4-valent node dual to an equilateral tetrahedron.

The curvature is written as a combination of the length of a hinge and the deficit angle around it.

 


As yet the code is unoptimised. Below is a sample of the output so far:

So now that the eigenvalues have been found, I can start to make use of them in calculations of the Hamiltonian Constraint Operator.


Update:

Eigenvalue vs spin for a 4-valent node


by David Horgan at May 15, 2014 09:20 PM

May 11, 2014

Vince Knight

Wizards, Giants, Linear Programming and Sage

I've been playing a game on my phone for a couple of months now called Clash of clans. It's one of those freemium games that tells me to do something on my phone every now and then:




In essence during the game you build your base with buildings to obtain resources, create troops and defend against attacks.

Here's a screen shot of my base at the moment:


As well as building your base, you can also put together a raiding party and attack other bases.
In this post, I'll describe the use of a mathematical technique called linear programming so as to get the best combination of Giants, Wizards etc in a raiding party.

To train a warrior costs Elixir (a resource that you mine and/or plunder) but also costs space in your army camps.

Here are the various warriors available to me at the moment (as you build better buildings you get more warriors etc...).

1. Barbarian (Damage per second: 18, Hitpoints: 78, Housing Space: 1, Elixir Cost: 80):


2. Archer (Damage per second: 16, Hitpoints: 33, Housing Space: 1, Elixir Cost: 160)


3. Goblin (Damage per second: 19, Hitpoints: 36, Housing Space: 1, Elixir Cost: 60)


4. Giant (Damage per second: 24, Hitpoints: 520, Housing Space: 5, Elixir Cost: 2000) 


 5. Wall Breaker (Damage per second: 32, Hitpoints: 35, Housing Space: 2, Elixir Cost: 2500)


6. Balloon (Damage per second: 72, Hitpoints: 280, Housing Space: 5, Elixir Cost: 3500)


7. Wizards (Damage per second: 125, Hitpoints: 130, Housing Space: 4, Elixir Cost: 3000)

8. Healer (Damage per second: NA, Hitpoints: 600, Housing Space: 14, Elixir Cost: 6000)


9. Dragon (Damage per second: 140, Hitpoints: 1900, Housing Space: 20, Elixir Cost: 25000)




To summarise the information about the players I'm going to put the costs in a vector \(c\) and the housing space in a vector \(h\) (indexed using the same order as the images above):

\[c=(80,160,60,2000,2500,3500,3000,6000,25000)\]
\[h=(1,1,1,5,2,5,4,14,20)\]

Currently I have a capacity of 50 housing spaces per camp and I have 4 camps so I have an upper bound on the total amount of housing space of \(K=200\).

If I let the vector \(x\) denote a raiding party so that \(x_i\) is the number of warriors of type \(i\) (using the ordering above) then we have the corresponding housing space constraint:

\[\sum_{i=1}^9h_ix_i\leq K\]

I can now try and do two things:
  1. Get a full set of camps by minimising the overall cost.
  2. Get the most powerful raiding party that can fit in my camps.
I will solve both of these problems using what is called Integer Linear Programming. The integer part of that is due to the fact that I must have whole numbers of warriors.

  1. Minimising my costs

The cost of a given raiding party \(x\) is given by:

\[C(x) = \sum_{i=1}^9c_ix_i \]

Thus to minimise this cost and fill the camps we need to solve the following minimisation problem:

\[\text{minimise }C(x) = \sum_{i=1}^9c_ix_i\]

such that:

\[H(x)=\sum_{i=1}^9h_ix_i= K\]
and
\[x_i\in\mathbb{Z}_{\geq0}\]

How do we solve this? I've blogged about a similar type of optimisation problem that I use to schedule my class assessment and for that I used the awesome sagemath. You can read that blog post here. Here I will do the same thing.

The following Sage code is what is needed to create the linear program and also obtain the solution:

# Set up parameters:
K = 200 # Set the upper bound
c = [80,160,60,2000,2500,3500,3000,6000,25000] # Set the cost vector
h = [1,1,1,5,2,5,4,14,20] # Set the 'housing space' constraints vector
n = len(c) # Get the number of warriors

C = lambda x: sum([x[i]*c[i] for i in range(n)]) # Define the cost function
H = lambda x: sum([x[i]*h[i] for i in range(n)]) # Define the housing capacity function

# Create and solve the optimisation problem
p = MixedIntegerLinearProgram(maximization=False) # Create an optimisation problem with minimization as the goal
x = p.new_variable(integer=True) # Create a variable with the requirement that it must be integer
p.add_constraint(H(x)==K) # Set the housing constraint
p.set_objective(C(x)) # Set the objective function
p.show() # Show the optimisation problem
p.solve() # Solve the optimisation problem

# Display the solution

print 'Has solution:' 
for i, v in p.get_values(x).iteritems(): 
    print 'x_%s = %s' % (i, int(round(v))) 
print 'Housing spaces used: %s' % H(p.get_values(x))

The output is (disappointing but expected): \(x=(0,0,200,0,0,0,0,0,0)\) which implies that I should have 200 Goblins.

If you're familiar with the game this is not a very sensible way to go as Goblins are really only good at getting loot from defenceless bases.

We can add a couple more constraints so that we don't have too many goblins (let's limit it to 10) and also so that we have at least one healer:

p = MixedIntegerLinearProgram(maximization=False)  # Create an optimisation problem with minimization as the goal
x = p.new_variable(integer=True) # Create a variable with the requirement that it must be integer
p.add_constraint(H(x)==K)
p.add_constraint(x[2]<=10) # Don't want more than 10 goblins
p.add_constraint(x[7]>=1) # Want 1 healer at least
p.set_objective(C(x))
p.show()
p.solve()
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))

This gives us \(x=(176,0,10,0,0,0,0,1,0)\).

This is again not very satisfying as we're basically filling our raiding party with the cheapest options. That was the purpose of this approach though. 

The next thing we'll look at is how to actually get a good raiding party.

  2. Getting the 'best' raiding party

As you can see on the images, every warrior is different: some fly, some are better against buildings, some have more hitpoints than others etc...

One way of getting the 'best' raiding party could be to maximise the damage per second of the raiding party. To do this, let's define the following vector (which is just the damage per second that can be read off of the images above):

\[d=(18, 16, 19, 24, 32, 72, 125, 0, 140)\]

Now our maximisation problem can be written as:

\[\text{maximise } \sum_{i=1}^9d_ix_i\]

such that:

\[H(x)=\sum_{i=1}^9h_ix_i\leq K\]
and
\[x_i\in\mathbb{Z}_{\geq0}\]

Here's the Sage code to do this:

damagepersecond = [18, 16, 19, 24, 32, 72, 125, 0, 140]
C = lambda x: sum([x[i]*damagepersecond[i] for i in range(n)])

p = MixedIntegerLinearProgram() # Create an optimisation problem (maximisation is the default)
x = p.new_variable(integer=True) # Create a variable with the requirement that it must be integer
p.add_constraint(H(x)<=K)
p.add_constraint(x[2]<=10) # Don't want more than 10 goblins
p.add_constraint(x[7]>=1) # Want 1 healer at least
p.set_objective(C(x))
p.show()
p.solve()
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))

The result of the above is \(x=(0,0,2,0,0,0,46,1,0)\), i.e. take 2 Goblins, 46 Wizards and 1 Healer.
The problem with this raiding party is that wizards are actually quite fragile (archer towers and cannons can pretty much just pick them off).

So perhaps instead of maximising the damage per second we could maximise the total hit points of the party.

To do this we need to define the vector of hitpoints:

\[p=(78, 33, 36, 520, 35, 280, 130, 600, 1900)\]

Now our maximisation problem can be written as:

\[\text{maximise } \sum_{i=1}^9p_ix_i\]

such that:

\[H(x)=\sum_{i=1}^9h_ix_i\leq K\]
and
\[x_i\in\mathbb{Z}_{\geq0}\]

This gives us \(x=(0,0,0,0,93,0,0,1,0)\): a raiding party of Wall Breakers.

The problem with this is that wall breakers are good for just that: breaking walls. They carry a bomb up to a wall and then they explode. So let's add some more constraints to our optimisation problem so as to get something a bit more useful.
  • Let's make sure that we always have at least 20 housing capacity worth of distance attackers (Archers or Wizards):
\[x_2h_2+x_7h_7 \geq 20 \]
  • Let's ensure that we have at least 1 air attacker (so that once the air defenses are out of the way we can finish off the base):
\[x_6+x_9\geq1\]
  • Let's ensure that we always have some wall breakers to clear the walls for our ground units (1 per 5 giants and 1 per 20 barbarians say):
\[5x_5\geq x_4\]

\[20x_5\geq x_1\]
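    To maximise hitpoints the objective function C needs to be redefined accordingly; a minimal sketch (hitpoints is simply my Python name for the vector \(p\) above):

    hitpoints = [78, 33, 36, 520, 35, 280, 130, 600, 1900]
    C = lambda x: sum([x[i]*hitpoints[i] for i in range(n)]) # The objective is now the total hitpoints of the party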
    The code to solve this is given here:

    p = MixedIntegerLinearProgram() # Create an optimisation problem (maximisation is the default)
    x = p.new_variable(integer=True) # Create a variable with the requirement that it must be integer
    p.add_constraint(H(x) <= K)
    p.add_constraint(x[2] <= 10) # Don't want more than 10 goblins
    p.add_constraint(x[7] >= 1) # Want 1 healer at least
    p.add_constraint(x[1] * h[1] + x[6] * h[6]>=20) # Want at least 20 capacity worth of distance attackers
    p.add_constraint(x[5] + x[8]>=1) # Want at least 1 of air attackers
    p.add_constraint(5 * x[4] >= x[3]) # Want at least 1 wall breaker per 5 giants
    p.add_constraint(20 * x[4] >= x[0]) # Want at least 1 wall breaker per 20 barbarians
    p.set_objective(C(x))
    p.show()
    p.solve()
    print 'Has solution:'
    for i, v in p.get_values(x).iteritems():
        print 'x_%s = %s' % (i, int(round(v)))
    print 'Housing spaces used: %s' % H(p.get_values(x))

    This gives \(x=(1,20,0,23,5,0,0,1,2)\): so take 1 Barbarian, 20 Archers, 23 Giants, 5 Wall Breakers, 1 Healer and 2 Dragons.

    The above would give us the most hit points but now we're missing out on the damage power of wizards.

    Perhaps we could try and balance both hitpoints and damage.

    In essence we have what's called a multi objective optimisation problem. One approach to solve this is to simply weight both objectives.

    We're going to use \(\alpha\) to denote the importance of the hitpoints and \(1-\alpha\) the importance of the damage.

    Our optimisation function then becomes:

    \[\alpha\sum_{i=1}^9p_ix_i+(1-\alpha)\sum_{i=1}^9d_ix_i\]
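    As a rough sketch of how this can be coded (reusing the hitpoints and damagepersecond lists and the constraints from above; the function name optimal_party is my own):

    def optimal_party(alpha):
        C = lambda x: sum([x[i]*(alpha*hitpoints[i] + (1-alpha)*damagepersecond[i]) for i in range(n)]) # Weighted objective
        p = MixedIntegerLinearProgram() # Maximisation is the default
        x = p.new_variable(integer=True)
        p.add_constraint(H(x) <= K)
        p.add_constraint(x[2] <= 10) # Don't want more than 10 goblins
        p.add_constraint(x[7] >= 1) # Want 1 healer at least
        p.add_constraint(x[1] * h[1] + x[6] * h[6] >= 20) # At least 20 capacity worth of distance attackers
        p.add_constraint(x[5] + x[8] >= 1) # At least 1 air attacker
        p.add_constraint(5 * x[4] >= x[3]) # At least 1 wall breaker per 5 giants
        p.add_constraint(20 * x[4] >= x[0]) # At least 1 wall breaker per 20 barbarians
        p.set_objective(C(x))
        p.solve()
        return [int(round(v)) for i, v in sorted(p.get_values(x).items())]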

    Using the same constraints as before and taking \(\alpha=.5\) (so that both damage and hit points have equal weighting) we get \(x=(11,0,0,25,5,0,5,1,1)\). Thus we should be taking into battle: 11 Barbarians, 25 Giants, 5 Wall Breakers, 5 Wizards, 1 Healer and 1 Dragon.

    This feels much more balanced than anything we've had so far and not far off what I've been using as my own raiding party.

    Given that everything is nicely coded we can quickly obtain the make up of the optimal raiding party for a range of values of \(\alpha\):
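    For example, with the optimal_party function sketched above:

    parties = [(alpha, optimal_party(alpha)) for alpha in [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]]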



    We see that as \(\alpha\) increases (so that we care more about hit points) the number of Wizards we should use slowly decreases (at one point the constraint for ranged attacking units kicks in and we replace the Wizards with Archers).

    You could easily tweak the constraints to perhaps ensure your party always has some Goblins or otherwise. I think this is a nice example where a mathematical model can be used to solve a problem but at the same time must not be taken far from the problem itself. There might be some players who only choose to attack bases with a particular type of defence so they can tweak/add constraints to the problem accordingly.

    Feel free to play around with the code in the Sage cell below:


    The mathematical techniques used in this post rely on Linear Algebra, Higher Dimensional Geometry and obviously the ability to write code :)

    (On another note I'm still not in a clan on the game so if anyone has any space in a clan let me know!)

    by Vincent Knight (noreply@blogger.com) at May 11, 2014 10:06 AM

    May 06, 2014

    David Horgan (quantumtetrahedron)


    This week, thanks to some colleagues, I have been working on the spectrum of the Ricci operator for a monochromatic 4-valent node dual to an equilateral tetrahedron. This blog post reports on work in progress.

     

    I have been reviewing the papers seen in the posts:

    Basically I am porting Mathematica code over to sagemath so that I can then use it in the calculation of the matrix elements of the LQG Hamiltonian Constraint operator discussed in the posts:

    So far I have written code for a number of operators, but I still have about the same number to do. After this I'll need to join them together.

    The matrix defining the operator Q {e1, e2, e3} used in the definition of the volume operator

    mono1

    H is the matrix defining the operator δij.Yi.Yj used to define the length operator, expressed in the intertwiner basis

    mono2

    And B the intertwiner basis

    mono3

    When complete I'll be able to produce graphs such as those below, which show the spectrum of R as a function of the spin. This can then be used in the numerical investigation of the LQG Hamiltonian Constraint Operator.

     

    mono5

     

     


    by David Horgan at May 06, 2014 07:18 PM

    William Stein

    Update to Differential Synchronization in SageMathCloud

    I've just pushed out a major update to how synchronization works in https://cloud.sagemath.com.

    This change is pretty significant behind the scenes, but the only difference you should notice is that everything should be better. In particular:

    • evaluation of code in Sage worksheets should feel a little snappier and more robust,
    • various random and hard to reproduce issues with synchronized editing should be fixed, e.g. chat messages out of order, etc.
    • everything should generally be a bit faster and more scalable overall.

    Here's a short technical description of what changed. The basic architecture of SageMathCloud is that there are many web browsers connected to many hubs, which are in turn connected to your project (and to many other projects too):

    [web browser] <-- websocket --> [hub] <---- tcp ----\
                                                         [project]
    [web browser] <-- websocket --> [hub] <-------------/

    Until today, the differential synchronization implementation involved having a copy of the document you're editing on:

    1. each hub pictured above,
    2. in each browser, and
    3. in the project itself.

    In particular, there were three slightly different implementations of differential synchronization running all over the place. The underlying core code is the same for all three, but the way it is used in each case is different, due to different constraints. The implementations:

    • browser: running in a web browser, which mainly has to worry about dealing with the CodeMirror editor and a flaky Internet connection.
    • hub: running in a node.js server that's also handling a lot of other stuff, including worrying about auth, permissions, proxying, logging, account creation, etc.
    • project: running in the project, which doesn't have to worry about auth or proxying or much else, but does have to worry about the filesystem.

    Because we're using Node.js, all three implementations are written in the same language (CoffeeScript), and run the same underlying core code (which I BSD licensed at https://github.com/sagemath/cloud/blob/master/diffsync.coffee). The project implementation was easiest to write, since it's very simple and straightforward, and has minimal constraints. The browser implementation is mainly difficult, since the Internet comes and goes (as laptops suspend/resume), and this involves patching and diff'ing a CodeMirror editor instance; CodeMirror is difficult, because it views the document as lines instead of a single long string, and we want things to work even for documents with hundreds of thousands of lines, so converting back and forth to a string is not an option! Implementing the hub part of synchronization is the hardest, for various reasons -- and debugging it is particularly hard. Moreover, computing diffs can be computationally expensive if the document is large, so doing anything involving differential sync on the hub can result in nontrivial CPU usage and locking, hence slower handling of other user messages (node.js is single threaded). The hub part of the above was so hard to get right that it had some nasty locking code, which shouldn't be needed, and just looked like a mess.

    A lot of issues people ran into with sync involved two browsers connected to different hubs, who then connected to the same document in a project. The two hubs' crappy synchronization would appear to work right in this part of the picture "[web browser] <------------> [hub]", but have problems with this part "[hub]<-------------->[project]", which would lead to pain later on. In many cases, the only fix was to restart the hub (to kill its sync state) or for the user to switch hubs (by clearing their browser cookies).

    Change: I completely eliminated the hub from the synchronization picture. Now the only thing the hub does related to sync is forward messages back and forth between the web browser and the local hub. Implementing this was harder than one might think, because the project considered each client to be a single tcp connection, but now many clients can connect to a project via the same tcp connection, etc.

    With this fix, if there are any bugs left with synchronization, they should be much easier to debug. The backend scalability and robustness of sync have been my top priorities for quite a while now, so I'm happy to get this stuff cleaned up, and move onto the next part of the SMC project, which is better collaboration and course support.

    by William Stein (noreply@blogger.com) at May 06, 2014 10:38 AM

    May 01, 2014

    William Stein

    What can SageMathCloud (SMC) do?

    The core functionality of SageMathCloud:

    • Color command-line Terminal with many color schemes, which several people can interact with at once, with state that survives browser refresh.
    • Editing of documents: with syntax highlighting, auto-indent, etc., for files with the following extensions:
      c, c++, cql, cpp, cc, conf, csharp, c#, coffee, css, diff, dtd, e, ecl, f, f90, f95, h, hs, lhs, html, java, jl, js, lua, m, md, mysql, patch, gp, go, pari, php, py, pyx, pl, r, rst, rb, ru, sage, sagews, scala, sh, spyx, sql, txt, tex, toml, bib, bbl, xml, yaml.
    (It's easy for me to add more, as CodeMirror supports them.) There are many color schemes and Emacs and Vim bindings.
    • Sage Worksheets: a single document interactive way to evaluate Sage code. This is highly extensible, in that you can define % modes by simply making a function that takes a string as input, and use %default_mode to make that mode the default. Also, graphics actually work in %r mode automatically, exactly as in normal R (no mucking with devices or png's).
    • IPython notebooks: via an IPython session that is embedded in an iframe. This is synchronized, so that multiple users can interact with a notebook simultaneously, which was a nontrivial addition on top of IPython.
    • LaTeX documents: This fully supports sagetex, bibtex, etc. and the LaTeX compile command is customizable. This also has forward and inverse search, i.e., double click on preview to get to point in tex file and alt-enter in tex file to get to point in LaTeX document. In addition, this editor will work fine with 150+ page documents by design. (Editing multiple document files is not properly supported yet.)
    • Snapshots: the complete state of all files in your project are snapshotted (using bup, which is built on git) every 2 minutes, when you're actively editing a file. All of these snapshots are also regularly backed up to encrypted disks offsite, just in case. I plan for these highly efficient deduplicated compressed snapshots to be saved indefinitely. Browse the snapshots by clicking "Snapshots" to the right when viewing your files or type cd ~/.snapshots/master in the terminal.
    • Replication: every project is currently stored in three physically separate data centers; if a machine or data center goes down, your project pops up on another machine within about one minute. A computer at every data center would have to fail for your project to be inaccessible. I've had zero reports of projects being unavailable since I rolled out this new system 3 weeks ago (note: there was a project that didn't work, but that was because I had set the quota to 0% cpu by accident).

    Sage

    The Sage install contains the following extra packages (beyond what is standard in Sage itself). When you use Sage or IPython, this will all be available.
    basemap, biopython, biopython, bitarray, brian, cbc, chomp, clawpack, cluster_seed, coxeter3, cryptominisat, cunningham_tables, database_cremona_ellcurve, database_gap, database_jones_numfield, database_kohel, database_odlyzko_zeta, database_pari, database_symbolic_data, dot2tex, fabric, gap_packages, gnuplotpy, greenlet, guppy, h5py, httplib2, kash3, lie, lrs, lxml, mahotas, mercurial, mpld3, munkres, mysql-python, nauty, netcdf4, neuron, normaliz, nose, nose, numexpr, nzmath, oct2py, p_group_cohomology, pandas, paramiko, patsy, patsy, phc, plotly, psutil, psycopg2, pybtex, pycryptoplus, pyface, pymongo, pyproj, pyx, pyzmq, qhull, quantlib, redis, requests, rpy2, scikit_learn, scikits-image, scimath, shapely, simpy, snappy, statsmodels, stein-watkins-ecdb, tables, theano, topcom, tornado, traits, xlrd, xlwt, zeromq

    R

    Also, I install the following extra packages into the R that is in Sage:
    KernSmooth, Matrix, Rcpp, cairo, car, circular, cluster, codetools, e1071, fields, ggplot2, glmnet, lattice, mgcv, mvtnorm, plyr, reshape2, rpart, stringr, survival, zoo

    It's Linux

    SMC can do pretty much anything that doesn't require X11 and that can be done with an Ubuntu 14.04 Linux system. I've pre-installed the following packages, and if people want others, just let me know (and they will be available to all projects henceforth):
    vim git wget iperf dpkg-dev make m4 g++ gfortran liblzo2-dev libssl-dev libreadline-dev  libsqlite3-dev libncurses5-dev git zlib1g-dev openjdk-7-jdk libbz2-dev libfuse-dev pkg-config libattr1-dev libacl1-dev par2 ntp pandoc ssh python-lxml  calibre  ipython python-pyxattr python-pylibacl software-properties-common  libevent-dev xfsprogs lsof  tk-dev  dstat emacs vim texlive texlive-* gv imagemagick octave mercurial flex bison unzip libzmq-dev uuid-dev scilab axiom yacas octave-symbolic quota quotatool dot2tex python-numpy python-scipy python-pandas python-tables libglpk-dev python-h5py zsh python3 python3-zmq python3-setuptools cython htop ccache python-virtualenv clang libgeos-dev libgeos++-dev sloccount racket libxml2-dev libxslt-dev irssi libevent-dev tmux sysstat sbcl gawk noweb libgmp3-dev ghc  ghc-doc ghc-haddock ghc-mod ghc-prof haskell-mode haskell-doc subversion cvs bzr rcs subversion-tools git-svn markdown lua5.2 lua5.2-*  encfs auctex vim-latexsuite yatex spell cmake libpango1.0-dev xorg-dev gdb valgrind doxygen haskell-platform haskell-platform-doc haskell-platform-prof  mono-devel mono-tools-devel ocaml ocaml-doc tuareg-mode ocaml-mode libgdbm-dev mlton sshfs sparkleshare fig2ps epstool libav-tools python-software-properties software-properties-common h5utils libnetcdf-dev netcdf-doc netcdf-bin tig libtool iotop asciidoc autoconf bsdtar attr  libicu-dev iceweasel xvfb tree bindfs liblz4-tool tinc  python-scikits-learn python-scikits.statsmodels python-skimage python-skimage-doc  python-skimage-lib python-sklearn  python-sklearn-doc  python-sklearn-lib python-fuse cgroup-lite cgmanager-utils cgroup-bin libpam-cgroup cgmanager cgmanager-utils cgroup-lite  cgroup-bin r-recommended libquantlib0 libquantlib0-dev quantlib-examples quantlib-python quantlib-refman-html quantlib-ruby r-cran-rquantlib  libf2c2-dev libpng++-dev libcairomm-1.0-dev r-cran-cairodevice x11-apps mesa-utils libpangox-1.0-dev
    I've also put extra effort (beyond just apt-get) to install the following:
    polymake, dropbox, aldor/"AXIOM", Macaulay2, Julia, 4ti2

    Functionality that is currently under development

    We're working hard on improving SageMathCloud right now.

    • Streamlining document sync: will make code evaluation much faster, eliminate some serious bugs when the network is bad, etc.
    • Geographic load balancing and adding data centers, so that, e.g., if you're in Europe or Asia you can use SMC with everything happening there. This will involve DNS load balancing via Amazon Route 53, and additionally moving projects to run on the DC that is nearest you on startup, rather than random. Right now all computers are in North America.
    • Mounting a folder of one project in another project, in a way that automatically fixes itself in case a machine goes down, etc. Imagine mounting the projects of all 50 students in your class, so you can easily assign and collect homework, etc.
    • Homework assignment and grading functionality with crowdsourcing of problem creation, and support for peer and manual grading.
    • BSD-licensed open source single-project version of SMC.
    • Commercial software support and instructions for how to install your own into SMC (e.g., Mathematica, Matlab, Magma, etc.)
    • ssh access into projects easily

    by William Stein (noreply@blogger.com) at May 01, 2014 02:27 PM

    April 25, 2014

    William Stein

    The SageMathCloud Roadmap

    Everything below is subject to change.

    Implementation Goals

    • (by April 27) Major upgrades -- update everything to Ubuntu 14.04 and Sage-6.2. Also upgrade all packages in SMC, including Haproxy, nginx, stunnel, etc.

    • (by May 4) Streamline doc sync: top priority right now is to clean up sync, and eliminate bugs that show up when network is bad, many users, etc.

    • (by May 30) Snapshots:
      • more efficient way to browse snapshot history (timeline view)
      • browse snapshots of a single file (or directory) only
    • (by May 30) User-owned backups
      • way to download complete history of a project, i.e., the underlying bup/git repository with snapshot history.
      • way to update offline backup, getting only the changes since last download.
      • easy way to download all current files as a zip or tarball (without the snapshots).
    • (by June 30) Public content
      • Ability to make a read-only view of the project visible publicly on the internet. Only works after the account is at least n days old.
      • By default, users will have a "report spammer" button on each page. Proven 'good users' will have button removed. Any valid reported users will be permanently banned.
      • Only users with validated .edu accounts will be allowed to publish for starters. Maybe allow gmail.com at some point.
    • (by June 30) Fix all major issues (none major) that are listed on the github page: https://github.com/sagemath/cloud/issues

    • (by July 31) Group/social features:
      • Support for mounting directories between projects
      • Group management: combine a group of users and projects into a bigger entity:
        • a University course -- see feature list below
        • a research group: a collection of projects with a theme that connects them, where everybody has access to everybody else's projects
      • A feed that shows activity on all projects that users care about, with some ranking. Better notifications about chat messages and activity

    Commercial Products

    We plan four distinct products of the SMC project: increased quotas, enhanced university course support, license and support to run a private SMC cloud, supported open source BSD-licensed single-user version of SMC (and offline mode).

    • PRODUCT: Increase the quota for a specific project (launch by Aug 2014)
      • cpu cores
      • RAM
      • timeout
      • disk space
      • number of share mounts
    • Remarks:
      • There will be an option in the UI to change each of the above parameters that some project collabs see (maybe only owners initially).
      • Within moments of making a change it goes live and billing starts.
      • When the change is turned off, billing stops. When a project is not running it is not billed. (Obviously, we need to add a stop button for projects.)
      • There is a maximum amount that the user can pre-spend (e.g., $500 initially).
      • At the end of the month, the user is given a link to a Univ of Washington website and asked to pay a certain amount, and register there under the same email as they use with SMC.
      • When they pay, SMC receives an email and credits their account for the amount they pay.
      • There will also be a limit soon on the number of projects that can be associated with an account (e.g., 10); pay a small monthly fee to raise this.
    • PRODUCT: University course support (launch by Aug 2014 in time for Fall 2014 semester)

      • Free for the instructor and TA's
      • Each student pays $20 in exchange for:
        • one standard project (they can upgrade quotas as above), which TA and instructor are automatically collabs on
        • student is added as collaborator to a big shared project
        • in student's private project they get homework assignments (assigned, collected)
      • Instructor's project has all student projects as mounted shares
      • Instructor has a student data spreadsheet with student grades, project ids (links), etc.
      • Powerful modern tool for designing homework problems that can be automatically graded, with problems shared in a common pool, with ratings, and data about their usage.
      • A peer grading system for more advanced courses.
      • Tools to make manual grading more fun.
    • PRODUCT: License to run a private SMC cloud in a research lab, company, supercomputer, etc. (launch a BETA version by July 2014, with caveats about bugs).

      • base fee (based on organization size)
      • technical support fee
      • site visit support: install, run workshop/class
    • PRODUCT: Free BSD-licensed single-user account version of SMC (launch by December 2014)

      • a different way to do LaTeX editing, manage a group of IPython notebooks, use Sage worksheets, etc.
      • be included with Sage.
      • be included in many Linux distros
      • the doc synchronization code, local_hub, CoffeeScript client, terminal.
      • mostly Node.js application (with a little Python for Sage/IPython).
      • ability to sync with a cloud-hosted SMC project.
      • sell pre-configured (or just support) for a user to install standalone-SMC on some cloud host such as EC2 or Digital Ocean

    by William Stein (noreply@blogger.com) at April 25, 2014 12:08 PM

    April 23, 2014

    Nikhil Peter

    GSoC Selection 2014

    I was recently lucky enough to be selected for GSoC 2014 for SageMath, specifically on improvements to the Sage Android App, both internal and external.

    The current agenda during the Community Bonding Period (up to 18th May) is:

    • Read and understand the current codebase in greater detail.
    • Fix some already existing issues on the git repo which aren’t part of the already proposed fixes to get a better feel for the code.
    • Have a meetup with my mentor, Volker Braun and possibly thrash out some of the finer points of the timeline and the proposed fixes.
    • Decide on some of the libraries to be used, again, during the above meetup.
    • Get this blog up and running

    I also have my end semester exams intermittently during 5th May-17th May, so some balancing will also be necessary.

    Other than that I'm very thrilled, happy and pleasantly surprised that I actually got this opportunity.

    Here’s to a great summer!

    by hav3n at April 23, 2014 03:04 PM

    April 22, 2014

    Harald Schilly

    Sage GSoC 2014 Projects

    This year, Sage is happy to announce that it will be running five Google Summer of Code projects. Welcome everyone to the Sage community and all the best to your summer filled with exciting projects ;-)

    EDIT: lmonade related projects are listed here.

    Nikhil Peter Raj (Volker Braun):

    Improvements to the Sage Android App

    This project aims to improve upon the existing app by improving both internal structure and external user experience, the former by replacing and redoing certain modules of the app (JSON Parsing, HTTP Requests and Exception Handling) and the latter by redesigning the UI so it conforms to the latest standards and specifications of Android Design, as well as adding extra features to the code input such as Syntax Highlighting and options to record and store custom inserts.

    Amit Jamadagni (Miguel Angel Marco-Buzunariz)

    Knot theory implementation

    The project mainly deals with the implementation of various link and knot presentations and calculating various related invariants. The main aim of the project is to achieve: Conversion between different representations of Knots and Links, to mention a few: Gauss code to Braid Word, Braid word to DT code, Gauss code to DT code. Implementation of various invariants: Alexander Polynomial, Conway Polynomial, Seifert Matrix, Jones Polynomial.

    Jayant Apte (Stefan van Zwam)

    Efficient class-specific membership checks, extensions and visualization 

    The first goal of this proposal is to implement efficient testing of a matroid for membership of various classes and to perform single element extensions of matroids that exploit the knowledge that a given matroid belongs to a certain class, to have lower complexity. We also propose implementation of visualization techniques and an application of matroid theory in the form of enumeration of network codes for a multisource network coding problem, which is one of the timely problems in information theory.

    Simon Spicer (William Stein, Jonathan Bober)

    Efficient exact analytic rank computation for elliptic curves in Sage

    My project would be to implement functionality in Sage to compute the analytic rank of an elliptic curve exactly modulo standard conjectures, with better scaling than existing analytic rank estimation methods. This would necessitate both writing (significant amounts of) new Python and Cython code, as well as porting existing code into the Sage codebase.

    Daniel Bell (Ivan Andrus)

    iSage - improving the Sage iOS apps

    There are several aspects of the Sage iOS apps that need to be refreshed, improved or implemented. I propose to develop the ability to interact with several Sage services from the app, as well as refresh the user interface for the app.


    by Harald Schilly (noreply@blogger.com) at April 22, 2014 06:10 PM

    Martin Albrecht

    LMonade GSoC 2014 Accepted Projects

    The list of accepted projects of this year’s Google Summer of Code is out. For the list of accepted projects for Sage see here, for the LMonade project see below, for all other accepted projects see Google’s site. I am going to mentor William’s M1RI project together with Clément Pernet. It’s going to be a […]

    by martinralbrecht at April 22, 2014 10:15 AM

    April 15, 2014

    David Horgan (quantumtetrahedron)


    This week I have been continuing my collaborative work on the matrix elements of the Hamiltonian Constraint in LQG, see the posts:

    So I have been reviewing knot theory, especially skein relations, SU(2) recoupling theory and the Temperley-Lieb algebra. And reviewing my old sagemath calculations which I've written up as a paper and will post as an eprint on arXiv.

    The calculations I have done this week are based on the paper 'Matrix Elements of Thiemann's Hamiltonian Constraint in Loop Quantum Gravity' by Borissov, De Pietri and Rovelli.

    In this paper the authors present an explicit computation of matrix elements of the hamiltonian constraint operator in non-perturbative quantum gravity. They look at the euclidean term of Thiemann's version of the constraint and compute its action on trivalent states, for all its natural orderings. The calculation is performed using graphical techniques from the recoupling theory of colored knots and links. They give the matrix elements of the Hamiltonian constraint operator in the spin network basis in compact algebraic form.

    Thiemann’s Hamiltonian constraint operator.
    The Hamiltonian constraint of general relativity can be written as

    matrixelementsequ1a

    matrixelementsequ1

    The first term on the right hand side defines the hamiltonian constraint for gravity with euclidean signature – accordingly, it is called the euclidean hamiltonian constraint.

    matrixelementsfig0

    For trivalent gauge-invariant vertices, (r, p, q) denotes the colours associated to the edges I, J, K of a vertex v and we have:

    matrixelementsequ6.3

    where i = 1, 2 for the two orderings studied. The explicit values of the matrix elements a1 and a2 are:

    matrixelementsequ6.4

    where 

    matrixelementsequ5.8

     

    matrixelementsequ5.9 and

    matrixelementsequa17

    Using Sage Mathematics Software I coded the trivalent vertex matrix elements:

    matrixelementsfig1

    And then checked the resulting calculations against the values tabulated in the paper.

    matrixelementsfig2

    Here we can see the A1 and A2 values corresponding to the values of:

    matrixelementsequ6.4

     

    matrixelementsfig3

    My Sage Mathematics Software code produces precisely the numerical values of A1 and A2 as indicated in the figure above.

    For example, for (p,q,r) = (1,0,1) I have A1 = 0.000 and A2 = 0.07433, which is, to 4 digit accuracy, the tabulated value for 101.

    From the point of view of my own research this is great and I can be confident in the robustness of my coding techniques. I will continue working through papers on the matrix elements of the LQG Hamiltonian constraints, coding up the analyses and working towards coding the full hamiltonian constraint and curvature reviewed in the posts:


    by David Horgan at April 15, 2014 07:58 PM

    William Stein

    SageMathCloud's new storage architecture

    Keywords: ZFS, bup, rsync, Sage

    SageMathCloud (SMC) is a browser-based hosted cloud computing environment for easily collaborating on Python programs, IPython notebooks, Sage worksheets and LaTeX documents. I spent the last four months wishing very much that fewer people would use SMC. Today that has changed, and this post explains some of the reasons why.

    Consistency Versus Availability

    Consistency and availability are competing requirements. It is trivial to keep the files in a SageMathCloud project consistent if we store them in exactly one place; however, when the machine that project is on goes down for any reason, the project stops working, and the users of the project are very unhappy. By making many copies of the files in a project, it's fairly easy to ensure that the project is always available, even if network switches in multiple data centers completely fail, etc. Unfortunately, if there are too many users and the synchronization itself puts too heavy of a load on the overall system, then machines will fail more frequently, and though projects are available, files do not stay consistent and data is lost to the user (though still "out there" somewhere for me to find).

    Horizontal scalability of file storage and availability of files are also competing requirements. If there are a few compute machines in one place, then they can all mount user files from one central file server. Unfortunately, this approach leads to horrible performance if instead the network is slow and has high latency; it also doesn't scale up to potentially millions of users. A benchmark I care about is downloading a Sage binary (630MB) and extracting it (creating over 70,000 files); I want this to take at most 3 minutes total, which is hard using a networked filesystem served over the general Internet between data centers. Instead, in SMC, we store the files for user projects on the compute machines themselves, which provides optimal speed. Moreover, we use a compressed filesystem, so in many cases read and write speeds are nearly twice as fast as they might be otherwise.

    New Architecture of SageMathCloud

    An SMC project with id project_id consists of two directories of files, replicated across several machines using rsync:
    1. The HOME directory: /projects/project_id
    2. A bup repository: /bup/bups/project_id
    Users can also create files they don't care too much about in /scratch, which is a compressed and deduplicated ZFS filesystem. It is not backed up in any way, and is local to that compute machine.

    The /projects directory is one single big ZFS filesystem, which is both lz4 compressed and deduplicated. ZFS compression is just plain awesome. ZFS deduplication is much more subtle, as deduplication is tricky to do right. Since data can be deleted at any time, one can't just use a bloom filter to very efficiently tell whether data is already known to the filesystem, and instead ZFS uses a much less memory efficient data structure. Nonetheless, deduplication works well in our situation, since the compute machines all have sufficient RAM (around 30-60GB), and the total data stored in /projects is well under 1TB. In fact, right now most compute machines have about 100GB stored in /projects.
    The /bup/bups directory is also one single big ZFS filesystem; however, it is neither compressed nor deduplicated. It contains bup repositories, where bup is an awesome git-based backup tool written in Python that is designed for storing snapshots of potentially large collections of arbitrary files in a compressed and highly deduplicated way. Since the git pack format is already compressed and deduplicated, and bup itself is highly efficient at deduplication, we would gain almost nothing by using compression or deduplication directly on this ZFS filesystem. When bup deduplicates data, it does so using a sliding window through the file, unlike ZFS which simply breaks the file up into blocks, so bup does a much better job at deduplication. Right now, most compute machines have about 50GB stored in /bup/bups.

    When somebody actively uses a project, the "important" working files are snapshotted about once every two minutes. These snapshots are done using bup and stored in /bup/bups/project_id, as mentioned above. After a snapshot is successfully created, the files in the working directory and in the bup repository are copied via rsync to each replica node. The users of the project do not have direct access to /bup/bups/project_id, since it is of vital importance that these snapshots cannot be corrupted or deleted, e.g., if you are sharing a project with a fat fingered colleague, you want peace of mind that even if they mess up all your files, you can easily get them back. However, all snapshots are mounted at /projects/project_id/.snapshots and browseable by the user; this uses bup's FUSE filesystem support, enhanced with some patches I wrote to support file permissions, sizes, change times, etc. Incidentally, the bup snapshots have no impact on the user's disk quota.

    We also backup all of the bup archives (and the database nodes) to a single large bup archive, which we regularly backup offsite on encrypted USB drives. Right now, with nearly 50,000 projects, the total size of this large bup archive is under 250GB (!), and we can use it to efficiently recover any particular version of any file in any project. The size is relatively small due to the excellent deduplication and compression that bup provides.

    In addition to the bup snapshots, we also create periodic snapshots of the two ZFS filesystems mentioned above... just in case. Old snapshots are regularly deleted. These are accessible to users if they search around enough with the command line, but are not consistent between different hosts of the project, hence using them is not encouraged. This ensures that even if the whole replication/bup system were to somehow mess up a project, I can still recover everything exactly as it was before the problem happened; so far there haven't been any reports of problems.

    Capacity

    Right now there are about 6000 unique weekly users of SageMathCloud and often about 300-400 simultaneous users, and there are nearly 50,000 distinct projects. Our machines are at about 20% disk space capacity, and most of them can easily be expanded by a factor of 10 (from 1TB to 12TB). Similarly, disk space for our Google compute engine nodes is $0.04 per GB per month. So space-wise we could scale up by a factor of 100 without too much trouble. The CPU load is at about 10% as I write this, during a busy afternoon with 363 clients connected very actively modifying 89 projects. The architecture that we have built could scale up to a million users, if only they would come our way...

    by William Stein (noreply@blogger.com) at April 15, 2014 04:02 PM

    April 11, 2014

    Sébastien Labbé

    My status report at Sage Days 57 (RecursivelyEnumeratedSet)

    At Sage Days 57, I worked on the trac ticket #6637: standardize the interface to TransitiveIdeal and friends. My patch proposes to replace TransitiveIdeal and SearchForest by a new class called RecursivelyEnumeratedSet that would handle every case.

    A set S is called recursively enumerable if there is an algorithm that enumerates the members of S. We consider here recursively enumerated sets that are described by some seeds and a successor function succ. The successor function may have some structure (symmetric, graded, forest) or not. Many kinds of iterators are provided: depth first search, breadth first search or elements of given depth.

    TransitiveIdeal and TransitiveIdealGraded

    Consider the permutations of \(\{1,2,3\}\) and the poset generated by the method permutohedron_succ:

    sage: P = Permutations(3)
    sage: d = {p:p.permutohedron_succ() for p in P}
    sage: S = Poset(d)
    sage: S.plot()
    
    /~labbe/Files/2014/poset_123.png

    The TransitiveIdeal class allows one to generate all permutations from the identity permutation using the method permutohedron_succ as successor function:

    sage: succ = attrcall("permutohedron_succ")
    sage: seed = [Permutation([1,2,3])]
    sage: T = TransitiveIdeal(succ, seed)
    sage: list(T)
    [[1, 2, 3], [2, 1, 3], [1, 3, 2], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
    

    Remark that the previous ordering is neither breadth first nor depth first. It is a naive search because it stores the elements to process in a set instead of a queue or a stack.

    Note that the method permutohedron_succ produces a graded poset. Therefore, one may use the TransitiveIdealGraded class instead:

    sage: T = TransitiveIdealGraded(succ, seed)
    sage: list(T)
    [[1, 2, 3], [2, 1, 3], [1, 3, 2], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
    

    For TransitiveIdealGraded, the enumeration is breadth first search. Although, if you look at the code (version Sage 6.1.1 or earlier), we see that this iterator does not make use of the graded hypothesis at all because the known set remembers every generated element:

    current_level = self._generators
    known = set(current_level)
    depth = 0
    while len(current_level) > 0 and depth <= self._max_depth:
        next_level = set()
        for x in current_level:
            yield x
            for y in self._succ(x):
                if y == None or y in known:
                    continue
                next_level.add(y)
                known.add(y)
        current_level = next_level
        depth += 1
    return
    

    Timings for TransitiveIdeal

    sage: succ = attrcall("permutohedron_succ")
    sage: seed = [Permutation([1..5])]
    sage: T = TransitiveIdeal(succ, seed)
    sage: %time L = list(T)
    CPU times: user 26.6 ms, sys: 1.57 ms, total: 28.2 ms
    Wall time: 28.5 ms
    
    sage: seed = [Permutation([1..8])]
    sage: T = TransitiveIdeal(succ, seed)
    sage: %time L = list(T)
    CPU times: user 14.4 s, sys: 141 ms, total: 14.5 s
    Wall time: 14.8 s
    

    Timings for TransitiveIdealGraded

    sage: seed = [Permutation([1..5])]
    sage: T = TransitiveIdealGraded(succ, seed)
    sage: %time L = list(T)
    CPU times: user 25.3 ms, sys: 1.04 ms, total: 26.4 ms
    Wall time: 27.4 ms
    
    sage: seed = [Permutation([1..8])]
    sage: T = TransitiveIdealGraded(succ, seed)
    sage: %time L = list(T)
    CPU times: user 14.5 s, sys: 85.8 ms, total: 14.5 s
    Wall time: 14.7 s
    

    In conclusion, use TransitiveIdeal for the naive search algorithm and use TransitiveIdealGraded for the breadth first search algorithm. Neither class uses the graded hypothesis.

    Recursively enumerated set with a graded structure

    The new class RecursivelyEnumeratedSet provides all iterators for each case. The examples below are for the graded case.

    Depth first search iterator:

    sage: succ = attrcall("permutohedron_succ")
    sage: seed = [Permutation([1..5])]
    sage: R = RecursivelyEnumeratedSet(seed, succ, structure='graded')
    sage: it_depth = R.depth_first_search_iterator()
    sage: [next(it_depth) for _ in range(5)]
    [[1, 2, 3, 4, 5],
     [1, 2, 3, 5, 4],
     [1, 2, 5, 3, 4],
     [1, 2, 5, 4, 3],
     [1, 5, 2, 4, 3]]
    

    Breadth first search iterator:

    sage: it_breadth = R.breadth_first_search_iterator()
    sage: [next(it_breadth) for _ in range(5)]
    [[1, 2, 3, 4, 5],
     [1, 3, 2, 4, 5],
     [1, 2, 4, 3, 5],
     [2, 1, 3, 4, 5],
     [1, 2, 3, 5, 4]]
    

    Elements of given depth iterator:

    sage: list(R.elements_of_depth_iterator(9))
    [[5, 4, 2, 3, 1], [4, 5, 3, 2, 1], [5, 3, 4, 2, 1], [5, 4, 3, 1, 2]]
    sage: list(R.elements_of_depth_iterator(10))
    [[5, 4, 3, 2, 1]]
    

    Levels (a level is a set of elements of the same depth):

    sage: R.level(0)
    [[1, 2, 3, 4, 5]]
    sage: R.level(1)
    {[1, 2, 3, 5, 4], [1, 2, 4, 3, 5], [1, 3, 2, 4, 5], [2, 1, 3, 4, 5]}
    sage: R.level(2)
    {[1, 2, 4, 5, 3],
     [1, 2, 5, 3, 4],
     [1, 3, 2, 5, 4],
     [1, 3, 4, 2, 5],
     [1, 4, 2, 3, 5],
     [2, 1, 3, 5, 4],
     [2, 1, 4, 3, 5],
     [2, 3, 1, 4, 5],
     [3, 1, 2, 4, 5]}
    sage: R.level(3)
    {[1, 2, 5, 4, 3],
     [1, 3, 4, 5, 2],
     [1, 3, 5, 2, 4],
     [1, 4, 2, 5, 3],
     [1, 4, 3, 2, 5],
     [1, 5, 2, 3, 4],
     [2, 1, 4, 5, 3],
     [2, 1, 5, 3, 4],
     [2, 3, 1, 5, 4],
     [2, 3, 4, 1, 5],
     [2, 4, 1, 3, 5],
     [3, 1, 2, 5, 4],
     [3, 1, 4, 2, 5],
     [3, 2, 1, 4, 5],
     [4, 1, 2, 3, 5]}
    sage: R.level(9)
    {[4, 5, 3, 2, 1], [5, 3, 4, 2, 1], [5, 4, 2, 3, 1], [5, 4, 3, 1, 2]}
    sage: R.level(10)
    {[5, 4, 3, 2, 1]}
    

    Recursively enumerated set with a symmetric structure

    We construct a recursively enumerated set with symmetric structure and depth first search for default enumeration algorithm:

    sage: succ = lambda a: [(a[0]-1,a[1]), (a[0],a[1]-1), (a[0]+1,a[1]), (a[0],a[1]+1)]
    sage: seeds = [(0,0)]
    sage: C = RecursivelyEnumeratedSet(seeds, succ, structure='symmetric', algorithm='depth')
    sage: C
    A recursively enumerated set with a symmetric structure (depth first search)
    

    In this case, depth first search is the default algorithm for iteration:

    sage: it_depth = iter(C)
    sage: [next(it_depth) for _ in range(10)]
    [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9)]
    

    Breadth first search. This algorithm makes use of the symmetric structure and remembers only the last two levels:

    sage: it_breadth = C.breadth_first_search_iterator()
    sage: [next(it_breadth) for _ in range(10)]
    [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0), (-1, 1), (-2, 0), (0, 2), (2, 0), (-1, -1)]
    

    Levels (elements of given depth):

    sage: sorted(C.level(0))
    [(0, 0)]
    sage: sorted(C.level(1))
    [(-1, 0), (0, -1), (0, 1), (1, 0)]
    sage: sorted(C.level(2))
    [(-2, 0), (-1, -1), (-1, 1), (0, -2), (0, 2), (1, -1), (1, 1), (2, 0)]
    

    Timings for RecursivelyEnumeratedSet

    We get the same timings as for TransitiveIdeal, but RecursivelyEnumeratedSet uses less memory, so it might be able to enumerate bigger sets:

    sage: succ = attrcall("permutohedron_succ")
    sage: seed = [Permutation([1..5])]
    sage: R = RecursivelyEnumeratedSet(seed, succ, structure='graded')
    sage: %time L = list(R)
    CPU times: user 24.7 ms, sys: 1.33 ms, total: 26.1 ms
    Wall time: 26.4 ms
    
    sage: seed = [Permutation([1..8])]
    sage: R = RecursivelyEnumeratedSet(seed, succ, structure='graded')
    sage: %time L = list(R)
    CPU times: user 14.5 s, sys: 70.2 ms, total: 14.5 s
    Wall time: 14.6 s
    

    by Sébastien Labbé at April 11, 2014 05:15 PM

    April 07, 2014

    The Matroid Union

    Matroid Computation with Sage: comparing matroids

    I’m back with more on matroid computation in Sage. Previous installments are here, here, and here. As in the previous post, clicking the “Evaluate” buttons below will execute the code and return the answer. The various computations are again linked, so execute them in the right order.

    Today we will look at various ways to ask (and answer) the question “Are these two matroids equal?” There are some subtle aspects to this, since we had to balance efficiency of the code and mathematical interpretation. The result is that we have various ways to compare matroids.

    Isomorphism

    Matroids $M$ and $N$ are isomorphic if there is a bijection $\phi: E(M) \to E(N)$ mapping bases to bases and nonbases to nonbases. In Sage, we check this as follows:
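    (The original post uses interactive Sage cells; here is a small stand-in example with matroids of my own choosing.)

    sage: M = Matroid(groundset=[0, 1, 2, 3], bases=[[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]])
    sage: N = Matroid(groundset='abcd', bases=['ab', 'ac', 'ad', 'bc', 'bd', 'cd'])
    sage: M.is_isomorphic(N)    # both are the uniform matroid U_{2,4}
    True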

    In other words, every matroid in Sage has a method .is_isomorphic() taking another matroid as input and returning True or False. Currently there is no way to extract the actual isomorphism (it is on our wish list), but if you guessed one, you can check its correctness:
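    Continuing the stand-in example above, one way to check a guessed map by hand is to verify that it sends bases to bases:

    sage: phi = {'a': 0, 'b': 1, 'c': 2, 'd': 3}   # a guessed isomorphism from N to M
    sage: all(M.is_basis(frozenset(phi[e] for e in B)) for B in N.bases())
    True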

    We defined $\phi$ using a dictionary: for instance, phi['c'] equals 2. If you defined your map differently (e.g. as a function or a permutation), Sage will probably understand that too.

    Equality

    Matroids $M$ and $N$ are equal if $E(M) = E(N)$ and the identity map is an isomorphism. We can check for equality as follows:
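    Sticking with the stand-in matroids above, a naive way to carry out this check by hand is to compare ground sets and bases directly:

    sage: M.groundset() == N.groundset()
    False
    sage: N2 = Matroid(groundset='abcd', bases=['ab', 'ac', 'ad', 'bc', 'bd', 'cd'])
    sage: N.groundset() == N2.groundset() and set(N.bases()) == set(N2.bases())
    True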

    ==

    The standard way to compare two objects in Sage is through the comparison operator ==. For instance,
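    For a stand-in illustration with the matroids defined above:

    sage: N == N2    # same kind of representation, same ground set, same bases
    True
    sage: N == M     # isomorphic, but the ground sets differ
    False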

    When writing the matroid code, we chose to interpret the question M == N to mean “Is the internal representation of the matroid $M$ equal to the internal representation of $N$?” This is a very restricted view: the only time M == N will return True is if

    • $M$ and $N$ have the same internal data structure (i.e. both are of type BasisMatroid or both are of type LinearMatroid or …)
    • $M$ and $N$ are equal as matroids
    • The internal representations are equivalent (this is at the moment only relevant for linear matroids).

    Let’s consider a few examples:

    So only matroids $M_1$, $M_2$, and $M_4$ pass the equality test. This is because they are all linear matroids over the field $\mathrm{GF}(2)$, have the same ground set, and the matrices are projectively equivalent: one can be turned into the other using only row operations and column scaling. Matroid $M_3$ is isomorphic to $M_1$ but not equal; matroid $M_5$ is represented over a different field; matroid $M_6$ is represented by a list of bases, and matroid $M_7$ is represented by a list of “circuit closures”.

    This notion of equality has consequences for programming. In Python, a set is a data structure containing at most one copy of each element.
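    The cell that builds the set isn't reproduced here, but with the matroids $M_1, \ldots, M_7$ from above it presumably amounted to something like:

    sage: S = set([M1, M2, M3, M4, M5, M6, M7])   # the matroids defined in the earlier (not shown) cell
    sage: len(S)
    5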

    So $S$ actually contains $M_3, M_5, M_6, M_7$, and one copy of $M_1, M_2, M_4$.

    This means, unfortunately, that you cannot rely on Python’s default filtering tools for sets if you want only matroidally equal objects, or only isomorphic objects. But those equality tests are way more expensive computationally, whereas in many applications the matroids in a set will be of the same type anyway.

    Inequivalent representations

    As mentioned above, each instance of the LinearMatroid class is considered a represented matroid. There are specialized methods for testing projective equivalence and projective isomorphism. Recall that matroids are projectively equivalent (in Sage’s terminology, field-equivalent) if the representation matrices are equal up to row operations and column scaling. So below, matroids $M_1$ and $M_3$ are field-equivalent; matroids $M_1$ and $M_4$ are field-isomorphic; and matroid $M_2$ has an inequivalent representation:
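    As a self-contained stand-in, here are small binary matrices playing the roles of $M_1$, $M_3$ and $M_4$ (the matrices, and the exact method names is_field_equivalent and is_field_isomorphic, are my assumptions):

    sage: A = Matrix(GF(2), [[1, 0, 0, 1, 1], [0, 1, 0, 1, 0], [0, 0, 1, 0, 1]])
    sage: M1 = Matroid(matrix=A)
    sage: M3 = Matroid(matrix=A.with_added_multiple_of_row(0, 1, 1))   # a row operation only
    sage: M1.is_field_equivalent(M3)
    True
    sage: M4 = Matroid(matrix=A.matrix_from_columns([1, 0, 2, 3, 4]))  # ground set relabelled by a column permutation
    sage: M1.is_field_isomorphic(M4)
    True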

    I probably caused a lot of confusion, so feel free to ask questions in the comments below. Also, the documentation for the functions discussed above gives more explanations and examples.

    by Stefan van Zwam at April 07, 2014 09:32 PM

    April 04, 2014

    David Horgan (quantumtetrahedron)


    This post is just a bit of informal exploration of the LQG curvature operator which I’ve been doing in preparation for a more formal attack during the next week.

    curveequ32

    which produces results like those below:

    curvefig5

     

    Informally exploring some ideas in sagemath using work already done on length, angle and volume operators.

    See posts

    sage17a

    sage17b

    sage17c

    sage17d

    This informal work produces reasonable results in terms of magnitudes and form and I’ll refine this over time.


    by David Horgan at April 04, 2014 09:25 PM

    Vince Knight

    A list of stuff for my student to look at before getting down to some Sage development

    +James Campbell will be working with me this Summer on a project aimed at developing game theoretical stuff into +Sage Mathematical Software System. James just emailed me asking for a list of stuff he could/should read up on before he starts. I thought more knowledgeable people than me might be able to contribute so I've lazily copied my email to him here:

    ------------ email ------------

    Code:

    - git
    - sage development (tickets, documentation etc... This is something I don't know much about myself so read about it on the Sage site and watch videos on youtube there are a bunch of them)
    - cython (http://cython.org/ - watch this intro to Sage lecture by +William Stein https://www.youtube.com/watch?v=fI4NlMfGHC0 that's the first lecture in a class he's currently giving you also could watch the rest)
    - C (to help with cython - you don't necessarily need to be an expert I think)
    - Test driven development (watch all this and you will know what I mean: https://www.youtube.com/playlist?list=PL5859017B018F03F4)
    - ssh and *nix (so that you're comfortable to jump on to one of my machines if necessary - depending on time we might also get you to tweak the Cardiff server)
    - matplotlib (the python library that Sage uses to plot stuff, good to know it from a python pov so as to be able to get Sage to make it do what we want - we might or might not use this)
    - How Sage plots graphs (graph theory graphs like I used for this: http://goo.gl/KHGYk7 - we might or might not need this)

    Game Theory:


    We'll talk about this but 1 of the above (easy to code: 30 minutes of work) will be a gentle appetiser to the 'piece de resistance': normal form games,

    - Normal form games (first 6 chapters of http://www.vincent-knight.com/teaching/gametheory/)
    - The lrs algorithm (there is an implementation of this written in c that we either want to re-write to get working in Sage so you'll need to understand it or get Sage to talk to it / use it, I know Sage kind of has this as an optional library but I'm not entirely sure how to 'get at it' http://cgm.cs.mcgill.ca/~avis/C/lrs.html)
    - Polytopes, you want to be comfortable-ish with the vocabulary around polytopes to be able to understand the lrs algorithm a bit. 

    Books:

    - In general I'd say don't spend much money on Python books. Like most OSS stuff there's an awesome amount of stuff online. Take a look at: http://pythonbooks.revolunet.com/ (a list of free Python books). There are various exceptions to this rule though.

    - With regards to Sage I don't think you need a book for this project (as it's about building stuff for Sage, so mainly reading about the development process and watching youtube videos is the way to go). I have a kindle copy of http://goo.gl/q9s9da; it's not bad but really everything is online. If you haven't already, take a look at http://sagemath.org/help.html and join the relevant discussion groups on there.

    - With regards to Game Theory there's a reading list on my website (all that follow are linked to on there). Webb's book is a gentle introduction, Algorithmic Game Theory is great and more about what you will be looking at. Finally there's a newish book by Maschler which looks really nice but I've not had time to work through it yet. In general my course site should suffice (reading/working through those books could take you years) with regards to what you need to know for game theory and I am certainly not asking you to spend money on a book. If there's a book (GT or code) that you really think would be useful let's talk.

    ------------

    James is a pretty good coder with a good knowledge of a git based workflow already as his first year project (during which he actually learnt to code) has led his group to develop: http://thefightclub.herokuapp.com/ which is also on github (if you've made your way this far, please click on that and waste a minute or 2 of your life).

    If there's anything missing from this list please add it to the comments :)

    I'm looking forward to this project.

    by Vincent Knight (noreply@blogger.com) at April 04, 2014 11:57 AM

    March 27, 2014

    Vince Knight

    Scheduling group presentations using Graph Theory and Sage.

    Yesterday (2014-03-26) I spoke at the Embedded Enterprise Exchange about my new first year module. I used a Hangout on air during the talk and you can see it here:

    That's not what I'm going to talk about here though. The course I talked about ends with all the 'companies' (groups of 4 students) giving a $\approx 25$ minute talk.

    I need to schedule 40ish talks and this needs to fit around the student availability as well as my own. In this post I'll describe how I did this using a combination of Doodle, +Sage Mathematical Software System and Graph Theory.

    The beginning of this post will be some of the boring details but towards the end I start talking about the mathematics (so feel free to skip to there...).

    First of all I needed to know my students availability.

    For this I simply used Doodle: https://doodle.com. I kind of use Doodle for every meeting I have to schedule (they also offer a cool tool that lets you show your availability so students/colleagues have an indication of when I might be able to meet with them).

    Here's a screenshot of the responses:


    You can't really see anything there as I had to zoom out a lot to grab the whole picture. Doodle allows you to download the information for any given poll in .xls format so I could relatively easily obtain the biadjacency matrix $M$ for my problem, where $M_{ij}$ is 1 if group $i$ is available for schedule slot $j$ and 0 otherwise.


    The mathematics and code needed.

    Once I've got a .csv file (by tweaking the .xls file) of the biadjacency matrix I import that into +Sage Mathematical Software System and convert it to an instance of the `Matrix` class using the following:

    import csv 
    f=open(DATA+'availabilitymatrix','r') 
    data = [[int(j) for j in row] for row in csv.reader(f)] 
    f.close() 
    M = Matrix(data)

    I then need to remove any particular scheduled slots that are not picked by any company:

    M = matrix([row for row in M.transpose() if max(row) != 0]).transpose()

    Once I've done this I can define the bipartite graph (bipartite simply means that the vertices can be separated into 2 non-adjacent collections):

    g = BipartiteGraph(M)

    We can then get a picture of this; I do this using a 'partition' (a graph colouring) that will colour the groups (red) and the schedule slots (blue):

    g = BipartiteGraph(M)
    p = g.coloring()
    g.show(layout='circular',dist=1,vertex_size=250, graph_border=True, figsize=[15,15],partition=p)

    The various options I pass to the `show` command are simply to get the circular arrangement (and other minor things):



    The above looks quite messy and what I essentially want is to get as many pairwise matchings between the blue vertices (slots) and red vertices (companies) as possible, so that each schedule slot is attributed at most 1 company and every company has at least 1 schedule slot.

    On any given graph $G=(V,E)$ this problem is known as looking for a maximum matching and can be written down mathematically:

    Max:  $\sum_{e \in E(G)} m_e$
    Such that:  $\forall v \in V(G)$, $\sum_{\substack{e \in E(G) \\ v \sim e}} m_e \leq 1$

    where $m_e \in \{0, 1\}$ indicates whether edge $e$ is chosen.

    We are in essence finding a subset of edges of our original graph in such a way as to maximise the number of edges such that no vertex has more than 1 edge.

    This is all explained extremely well at the +Sage Mathematical Software System documentation pages here.

    Furthermore, the documentation also gives the code needed to solve the problem:

    p = MixedIntegerLinearProgram()
    matching = p.new_variable(binary=True)
    p.set_objective(sum(matching[e] for e in g.edges(labels=False)))
    for v in g:
        p.add_constraint(sum(matching[e]
                             for e in g.edges_incident(v, labels=False)) <= 1)
    p.solve()

    When I run the above, `p` is now a solved Mixed Integer Linear Program (corresponding to the matching problem described). To obtain the solution:

    matching = p.get_values(matching)
    schedule = [e for e,b in matching.iteritems() if b == 1]

    Calling `schedule` gives a set of edges (denoted by the corresponding vertex numbers):

    [(5, 57), (0, 45), (23, 50), (4, 42), (38, 60), (26, 56), (34, 62), (16,
    68), (1, 43), (7, 40), (9, 44), (36, 58), (12, 49), (35, 71), (28, 66),
    (25, 47), (24, 53), (6, 46), (3, 64), (39, 67), (17, 69), (22, 55), (13,
    48), (33, 41), (10, 63), (21, 61), (30, 52), (29, 65), (37, 70), (15,
    54), (19, 51), (11, 59)]

    It is then really easy to draw another graph:

    B=Graph(schedule)
    p = B.coloring()
    B.show(layout='circular',dist=1,vertex_size=250, graph_border=True, figsize=[15,15],partition=p)

    Which gives:



    You can see that the obtained graph has all the required properties and most importantly is a lot less congested.

    Some details.

    I'm leaving out some details; for example, I kept track of the names of the companies and also the slots (there's a sketch of how that mapping could be done after the list below) so that the final output of all this looked like this:

    4InARow: Fri1100
    Abacus: Tue0930
    Alpha1: Thu1130
    Alpha2: Mon0930
    AusfallLtd: Fri1230
    AxiomEnterprise: Thu1000
    Batduck: Thu1430
    CharliesAngles: Thu1500
    CwmniRhifau: Mon1330
    EasyasPi: Fri1130
    Evolve: Thu1300
    HSPLLtd.: Mon1030
    JECT: Tue1200
    JJSL: Thu1030
    JNTL: Mon1400
    JennyCash: Tue1630
    KADE: Fri1330
    MIAS: Fri1300
    MIPE: Thu1100
    MLC: Tue1600
    Nineties: Mon0900
    Promis: Fri1400
    R.A.C.H: Tue1530
    RYLR: Tue1230
    SBTP: Fri1030
    Serendipity: Mon1230
    UniMath: Tue1300
    VectorEnterprises: Tue1330
    Venus: Thu1400
    codeX: Mon1300
    dydx: Wed1630
    eduMath: Thu0930

    (BatDuck is my favourite company name by far...)
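
    As mentioned above, the mapping from vertex numbers back to names is left out; here is a hypothetical sketch of how it could be done (the lists `companies` and `slots` are my own stand-ins, assumed to be ordered like the rows and columns of `M`, and the edges are assumed to come out as (company, slot) pairs as in the output above):

    offset = M.nrows()  # BipartiteGraph numbers the rows 0, ..., nrows - 1 and the columns from nrows onwards
    for company_vertex, slot_vertex in sorted(schedule):
        print("%s: %s" % (companies[company_vertex], slots[slot_vertex - offset]))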

    Why did I do this this way?

    There are 3 reasons:

    1. I shared the schedule with my students through a published Sage sheet on our server. That way they can see a direct applied piece of mathematics and can also understand some of the code if they wanted to.
    2. "Point and click doesn't scale" - I'm sure I could have solved this one instance of my problem with pen and paper and some common sense faster than it took me to write the code to solve the problem. The thing is next year when I need to schedule these talks again it will at most take me 3 minutes as the code is all here and ready to go. (Most readers of this blog won't need that explained but if any of my students find their way here: that's an important message for you).
    3. It was fun.

    by Vincent Knight (noreply@blogger.com) at March 27, 2014 03:35 PM

    March 14, 2014

    Lee Worden

    Sage and WorkingWiki demo: High-level analysis of a population dynamics model


    State-space diagram produced by the Sage framework

    I've been developing a framework in Sage to work with the kind of dynamic models that my collaborators and I use a lot of the time in population biology and social-science projects. Here's a demo of what it can do:

    by worden at March 14, 2014 08:55 PM

    March 06, 2014

    Sébastien Labbé

    Demo of the IPython notebook at Sage Paris group meeting

    Today I am presenting the IPython notebook at the meeting of the Sage Paris group. This post gathers what I prepared.

    Installation

    First you can install the ipython notebook in Sage as explained in this previous blog post. If everything works, then you run:

    sage -ipython notebook
    

    and this will open a browser.

    Turn on Sage preparsing

    Create a new notebook and type:

    In [1]: 3 + 3
    6
    In [2]: 2 / 3
    0
    In [3]: matrix
    Traceback (most recent call last):
    ...
    NameError: name 'matrix' is not defined
    

    By default, Sage preparsing is turned off and Sage commands are not known. To turn on the Sage preparsing (thanks to a post by Jason on sage-devel):

    %load_ext sage.misc.sage_extension
    

    Since sage-6.2, according to sage-devel, the command is:

    %load_ext sage
    

    You now get Sage commands working in ipython:

    In [4]: 3 + 4
    Out[4]: 7
    In [5]: 2 / 3
    Out[5]: 2/3
    In [6]: type(_)
    Out[6]: <type 'sage.rings.rational.Rational'>
    In [7]: matrix(3, range(9))
    Out[7]:
    [0 1 2]
    [3 4 5]
    [6 7 8]
    

    Scroll and hide output

    If the output is too big, click on Out to scroll or hide the output:

    In [8]: range(1000)
    

    Sage 3d Graphics

    3D graphics work but open in a new Jmol window:

    In [9]: sphere()
    

    Sage 2d Graphics

    Similarly, 2D graphics work but open in a new window:

    In [10]: plot(sin(x), (x,0,10))
    

    Inline Matplotlib graphics

    To create inline matplotlib graphics, the notebook must be started with this command:

    sage -ipython notebook --pylab=inline
    

    Then, a matplotlib plot can be drawn inline (example taken from this notebook):

    import matplotlib.pyplot as plt
    import numpy as np
    x = np.linspace(0, 3*np.pi, 500)
    plt.plot(x, np.sin(x**2))
    plt.title('A simple chirp');
    

    Or with:

    %load http://matplotlib.org/mpl_examples/showcase/integral_demo.py
    

    According to the previously cited notebook, it seems that the inline mode can also be selected from within the notebook using a magic command, but with my version of ipython (0.13.2), I get an error:

    In [11]: %matplotlib inline
    ERROR: Line magic function `%matplotlib` not found.
    

    Use latex in a markdown cell

    Change an input cell into a markdown cell and then you may use latex:

    Test $\alpha+\beta+\gamma$
    

    Output in latex

    The output can be shown with latex and mathjax using the ipython display function:

    from IPython.display import display, Math
    def my_show(obj): return display(Math(latex(obj)))
    y = 1 / (x^2+1)
    my_show(y)
    

    ipynb format

    Create a new notebook with only one cell. Name it range_10 and save:

    In [1]: range(10)
    Out[1]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    

    The file range_10.ipynb is saved in the directory. You can also download it from File > Download as > IPython (.ipynb). Here is the content of the file range_10.ipynb:

    {
     "metadata": {
      "name": "range_10"
     },
     "nbformat": 3,
     "nbformat_minor": 0,
     "worksheets": [
      {
       "cells": [
        {
         "cell_type": "code",
         "collapsed": false,
         "input": [
          "range(10)"
         ],
         "language": "python",
         "metadata": {},
         "outputs": [
          {
           "output_type": "pyout",
           "prompt_number": 1,
           "text": [
            "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]"
           ]
          }
         ],
         "prompt_number": 1
        },
        {
         "cell_type": "code",
         "collapsed": false,
         "input": [],
         "language": "python",
         "metadata": {},
         "outputs": []
        }
       ],
       "metadata": {}
      }
     ]
    }
    

    ipynb is just json

    An ipynb file is written in json format. Below, we use json to open the file `range_10.ipynb` as a Python dictionary.

    sage: s = open('range_10.ipynb','r').read()
    sage: import json
    sage: D = json.loads(s)
    sage: type(D)
    dict
    sage: D.keys()
    [u'nbformat', u'nbformat_minor', u'worksheets', u'metadata']
    sage: D
    {u'metadata': {u'name': u'range_10'},
     u'nbformat': 3,
     u'nbformat_minor': 0,
     u'worksheets': [{u'cells': [{u'cell_type': u'code',
         u'collapsed': False,
         u'input': [u'range(10)'],
         u'language': u'python',
         u'metadata': {},
         u'outputs': [{u'output_type': u'pyout',
           u'prompt_number': 1,
           u'text': [u'[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]']}],
         u'prompt_number': 1},
        {u'cell_type': u'code',
         u'collapsed': False,
         u'input': [],
         u'language': u'python',
         u'metadata': {},
         u'outputs': []}],
       u'metadata': {}}]}
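
    Since it really is just a dictionary we can, for instance, pull the input of every code cell back out (a small illustration of my own, not part of the original demo):

    sage: [''.join(c['input']) for c in D['worksheets'][0]['cells'] if c['cell_type'] == 'code']
    [u'range(10)', '']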
    

    Load vaucanson.ipynb

    Download the file vaucanson.ipynb from the last meeting of Paris Sage Users. You can view the complete demo, including pictures of automata, even if you are not able to install vaucanson on your machine.

    IPython notebook from a Python file

    In a Python file, separate your code with the following line to create cells:

    # <codecell>
    

    For example, create the following Python file. Then, import it in the notebook. It will get translated to ipynb format automatically.

    # -*- coding: utf-8 -*-
    # <nbformat>3.0</nbformat>
    
    # <codecell>
    
    %load_ext sage.misc.sage_extension
    
    # <codecell>
    
    matrix(4, range(16))
    
    # <codecell>
    
    factor(2^40-1)
    

    More conversion

    Since release 1.0 of IPython, many conversions from ipynb to other formats are possible (html, latex, slides, markdown, rst, python). Unfortunately, the version of IPython in Sage is still 0.13.2 as of today, but version 1.2.1 will be in sage-6.2.
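
    Once that newer IPython is available, the conversion should be possible from the command line with something like the following (untested here, since the bundled version is still too old):

    sage -ipython nbconvert --to html range_10.ipynb
    sage -ipython nbconvert --to slides range_10.ipynb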

    by Sébastien Labbé at March 06, 2014 10:47 AM

    February 25, 2014

    The Matroid Union

    Google Summer of Code 2014

    Google’s Summer of Code is an annual event in which Google will pay a bunch of students to spend the summer writing code for open-source projects.

    As last year, Sage is one of the mentoring organizations, and as last year, we are hoping to have one of about 4 students work on Sage’s matroid capabilities. We are looking for an undergraduate or graduate student who

    • Knows a bit of matroid theory (some possible topics require more knowledge than others)
    • Has some programming experience, preferably in Python
    • Wants to work on building exciting mathematics software for 3 months during the summer
    • Would like the word “Google” on his or her CV
    • Is interested in earning 5500 US dollars and a t-shirt

    On the ideas page, a list of possible features to work on can be found. Students must produce a detailed proposal including a timeline with milestones. Early interaction with the mentors is important: we want to know you’re serious about this, and able to deliver. The place for this is the sage-gsoc Google group.

    If you know an interested student, send her or him to this post. If you are an interested student, study the links above to figure out the requirements, timeline, etc. One important date: applications must be in on

    MARCH 21

    and we’d like to start working on the proposal with you well before then.

    by Stefan van Zwam at February 25, 2014 07:44 PM

    February 21, 2014

    Vince Knight

    Best responses to mixed strategies in class

    On Monday my Game Theory class and I took a look at the connection between extensive form games and normal form games (leading to subgame perfection), which correspond to these two chapters of my course: C7 and C8. Before starting that, we took another look at best responses to mixed strategies (this Chapter of my course).

    We have been using this game quite a bit in class:

    \[\begin{pmatrix}
    (2,-2) & (-2,2)\\
    (-1,1) & (1,-1)
    \end{pmatrix}\]

    We played it before and I blogged about it here. This is a slight modification of the matching pennies game where the 1st strategy corresponds to playing Heads (\(H\)) and the second to playing Tails (\(T\)).

    If player 1 (the row player) is playing a mixed strategy \(\sigma_1=(x, 1-x)\) then the utility to player 2 when player 2 plays $H$ (the first column) can be written as:

    \[
    u_2(\sigma_1,H)=-2x+1-x=1-3x
    \]

    and when player 2 plays $T$:

    \[
    u_2(\sigma_1,T)=2x-1+x=3x-1
    \]

    We can plot these two utilities here (using +Sage Mathematical Software System):
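
    A minimal Sage sketch of such a plot (my own code, not necessarily what was used) is:

    x = var('x')
    p  = plot(1 - 3*x, (x, 0, 1), legend_label=r'$u_2(\sigma_1,H)$')
    p += plot(3*x - 1, (x, 0, 1), color='red', legend_label=r'$u_2(\sigma_1,T)$')
    p.show(ymin=-2, ymax=2)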

    It is immediate to note that when \(x < 1/3\) player 2 should play $H$. In fact we can write down player 2's best response \(s_2^*\) to any \(\sigma_1\):

    \[
    s_2^*=\begin{cases}
    H,&x < 1/3\\
    T,&x > 1/3\\
    \text{indifferent},&x = 1/3
    \end{cases}
    \]

    Using all this I played the following game in class:


    • I handed out sheets asking students to play against 3 separate mixed strategies \(\sigma_1\in\{(.2,.8),(.9,.1),(1/3,2/3)\}\). I will refer to these 3 rounds as R1, R2 and R3;
    • Students (acting as player 2) filled in their strategies;
    • I then used the following interact to sample mixed strategies according to \(\sigma_1\):
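
    A minimal sketch of the kind of interact I mean (the function and parameter names here are mine, not the original code) is:

    import random

    @interact
    def sample_row_strategy(x=slider(0, 1, 0.05, default=0.2), repetitions=6):
        # Sample H with probability x and T otherwise, `repetitions` times.
        sample = ['H' if random.random() < x else 'T' for k in range(repetitions)]
        print(''.join(sample))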

    I changed the value of \(x\) as required.

    Here are the three row strategies that were sampled:


    • R1: TTTTTH 
    • R2: HHHTHH 
    • R3: TTHTTT 


    This is obviously not giving the exact proportions dictated by the mixed strategy \(\sigma_1\) but that's also kind of the point. By round, here are the results.

    R1

    Here's a plot of the mixed strategy that was played by the entire class during round 1:


    This corresponds to \(\sigma_2=(.70,.30)\), so most students seemed willing to 'trust the theory' that one should play $H$ against this mixed strategy.

    4 students scored the highest score (\(7\)) and here's the strategy they all played: \(HHHHHT\); in essence they got lucky and maxed out what they could have had. If they had played the theoretical best response (to only play $H$) they would have scored 3.

    The expected value of playing the theoretical best response (always pick \(H\)) against this mixed strategy is \(6(1-3\times.2)=2.4\) (recall that \(\sigma_1=(.2,.8)\) for this round).

    The mean score for this round was 1.4 and here's a distribution of the scores:



    47 students 'won', i.e. scored a positive score (this is a zero sum game), playing \(\sigma_2=(.83,.17)\). 18 'lost' (scored a negative score) playing \((.37,.63)\).

    It's nice to see that there was a large number of students who did in fact score 3.

    R2

    Here's a plot of the mixed strategy that was played by the entire class during round 2:


    This corresponds to \(\sigma_2=(.12,.88)\), which is again pretty close to the theoretical best response.

    2 students scored the highest score: 11. They once again lucked out and played the perfect response: \(TTTHTT\). If they had played the theoretical best response they would have scored 9.

    The expected value of playing the theoretical best response (always pick \(T\)) against this mixed strategy is \(6(3\times.9-1)=10.2\) (recall that \(\sigma_1=(.9,.1)\) for this round).

    The mean score for this round was  6.9 and here's a distribution of the scores:


    60 students 'won', i.e. scored a positive score (this is a zero sum game), playing \(\sigma_2=(.07,.93)\). 5 'lost' (scored a negative score) playing \((.77,.23)\).

    R3

    The third round had the students playing against a mixed strategy for which they should have been indifferent. Here's how they played:


    This corresponded to \(\sigma_2=(0.62,.38)\).

    There were 10 winners for this game and they scored 10 (quite a few strategy profiles gave this score so I won't list them, but they mainly took advantage of the fact that mostly $T$ was sampled). (The theoretical utility is in fact 0, as you can see from one of the plots above.)

    The mean score for this round was .4 (which is quite close to the theoretical value of 0). Here's the distribution of the scores:




    28 scored positively playing \(\sigma_2=(.64,.36)\) and 37 scored negatively playing \(\sigma_2=(.77,.23)\).

    What's nice to see here is that this 3rd round is a bit more random, with an almost (stretching the definition of the word almost) equal distribution between the number of students who won and lost.

    Here's a distribution of the overall scores:

    The overall winner of the game (who scored the most over the 3 rounds) was Becky who played:

    • R1: \(TTHHHH\)
    • R2: \(TTTTTT\)
    • R3: \(HHTHTH\)

    For a cumulative score of: 21

    This was good fun to analyse and was hopefully useful to my students to see what is meant by best responses to mixed strategies. It was particularly cool to see an 'indifferent' (again stretching the definition of the word indifferent) response to the third round.

    (Like with everything for this course you can find the data, analysis scripts and everything else at this github repo)

    by Vincent Knight (noreply@blogger.com) at February 21, 2014 03:58 AM