June 25, 2015

Michele Borassi

Edge Connectivity through Boost Graph Library

After two weeks, we have managed to interface Boost and Sagemath!

However, the interface was not as simple as it seemed. The main problem we found is the genericity of Boost: almost all Boost algorithms work with several graph implementations, which differ in the data structures used to store edges and vertices. For instance, the code that implements breadth-first search works whether the adjacency list of a vertex v is a vector, a list, a set, etc. This is accomplished by using templates [1]. Unfortunately, the only way to interface Sagemath with C++ code is Cython, which is not yet template-friendly. In particular, Cython provides genericity through fused types [2], whose support is still experimental and which do not offer full integration with templates [3-5].

After a thorough discussion with David, Nathann, and Martin (thank you very much!), we have found a solution: for the input, we have defined a fused type "BoostGenGraph" that includes all Boost graph implementations, and all functions that interface Boost and Sagemath take this fused type. This way, for each algorithm, we may choose the most suitable graph implementation. For the output, whose type might depend on the input type, we use C++ to transform it into a "standard" type (a vector, or a struct).
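
To give an idea of what this looks like, here is a rough sketch of the fused-type pattern in Cython (the wrapper classes and the header file below are hypothetical stand-ins, not the actual declarations used in the ticket):

# Rough sketch only (compiled as C++); "boost_wrapper.h" and the two wrapper
# classes are hypothetical, not the real Sage/Boost interface.
cdef extern from "boost_wrapper.h":
    cdef cppclass BoostVecGraph:
        void add_edge(int u, int v)
        int edge_connectivity()
    cdef cppclass BoostListGraph:
        void add_edge(int u, int v)
        int edge_connectivity()

ctypedef fused BoostGenGraph:
    BoostVecGraph
    BoostListGraph

cdef int boost_edge_connectivity(BoostGenGraph *g, list edges):
    # Fill the Boost graph from a Sage edge list, run the algorithm,
    # and return a plain C int (a "standard" output type).
    for u, v in edges:
        g.add_edge(u, v)
    return g.edge_connectivity()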

We like this solution because it is very clean, and it allows us to exploit Boost genericity without any copy-paste. Still, there are some drawbacks:
1) Cython fused types do not allow nested calls of generic functions;
2) Boost graphs cannot be converted to Python objects: they must be defined and deleted in the same Cython function;
3) No variable can have a generic type, apart from the arguments of generic functions.

These drawbacks will be overcome as soon as Cython makes templates and generic types interact: this way, we will be able to create a much stronger interface, by writing a graph backend based on Boost, so that the user can create, convert, and modify Boost graphs directly from Python. However, for the moment, we will implement all algorithms using the current interface, which already provides genericity, and which has no drawback if the only goal is to "steal" algorithms from Boost.

As a test, we have computed the edge connectivity of a graph through Boost: the code is available in ticket 18564 [6]. Since the algorithm provided by Sagemath is not optimal (it is based on linear programming), the difference in the running time is impressive, as shown by the following tests:

sage: G = graphs.RandomGNM(100,1000)
sage: %timeit G.edge_connectivity()
100 loops, best of 3: 1.42 ms per loop
sage: %timeit G.edge_connectivity(implementation="sage")
1 loops, best of 3: 11.3 s per loop


sage: G = graphs.RandomBarabasiAlbert(300,3)
sage: %timeit G.edge_connectivity(implementation="sage")
1 loops, best of 3: 9.96 s per loop
sage: %timeit G.edge_connectivity()
100 loops, best of 3: 3.33 ms per loop


Basically, on a random Erdos-Renyi graph with 100 vertices and 1000 edges, the new algorithm is 8,000 times faster, and on a random Barabasi-Albert graph with 300 nodes and average degree 3, the new algorithm is 3,000 times faster! This way, we can compute the edge connectivity of much bigger graphs, like a random Erdos-Renyi graph with 5,000 vertices and 50,000 edges:

sage: G = graphs.RandomGNM(5000, 50000)
sage: %timeit G.edge_connectivity()
1 loops, best of 3: 16.2 s per loop


The results obtained with this first algorithm are very promising: in the coming days, we plan to interface several other algorithms, in order to improve both the number of available routines and the speed of the Sagemath graph library!

[1] https://en.wikipedia.org/wiki/Template_%28C%2B%2B%29
[2] http://docs.cython.org/src/userguide/fusedtypes.html
[3] https://groups.google.com/forum/#!topic/cython-users/qQpMo3hGQqI
[4] https://groups.google.com/forum/#!searchin/cython-users/fused/cython-users/-7cHr6Iz00Y/Z8rS03P7-_4J
[5] https://groups.google.com/forum/#!searchin/cython-users/fused$20template/cython-users/-7cHr6Iz00Y/Z8rS03P7-_4J
[6] http://trac.sagemath.org/ticket/18564

by Michele Borassi (noreply@blogger.com) at June 25, 2015 12:56 AM

Vince Knight

On testing degeneracy of bi-matrix games

We (James Campbell and Vince Knight are writing this together) have been working on implementing code in Sage to test if a game is degenerate or not. In this post we’ll prove a simple result that is used in the algorithm that we are/have implemented.

Bi-Matrix games

For a general overview of these sorts of things take a look at this post from a while ago on the subject of bi-matrix games in Sage. A bi-matrix is a matrix of tuples giving the payoffs of a 2 player Normal Form Game: rows represent strategies for the first player, columns represent strategies for the second player, and each entry of the bi-matrix is the corresponding pair of payoffs. Here is an example:
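
In terms of the matrices \(A\) and \(B\) used in the Sage code below, the bi-matrix is:

\[
\begin{pmatrix}
(3, 3) & (3, 2)\\
(2, 2) & (5, 6)\\
(0, 3) & (6, 1)
\end{pmatrix}
\]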

We see that if the first player plays their third row strategy and the second player their second column strategy then the first player gets a utility of 6 and the second player a utility of 1.

This can also be written as two separate matrices. A matrix \(A\) for Player 1 and \(B\) for Player 2.

Here is how this can be constructed in Sage using the NormalFormGame class:

sage: A = matrix([[3,3],[2,5],[0,6]])
sage: B = matrix([[3,2],[2,6],[3,1]])
sage: g = NormalFormGame([A, B])
sage: g
Normal Form Game with the following utilities: {(0, 1): [3, 2], (0, 0): [3, 3],
(2, 1): [6, 1], (2, 0): [0, 3], (1, 0): [2, 2], (1, 1): [5, 6]}

Currently, within Sage, we can obtain the Nash equilibria of games:

sage: g.obtain_nash()
[[(0, 1/3, 2/3), (1/3, 2/3)], [(4/5, 1/5, 0), (2/3, 1/3)], [(1, 0, 0), (1, 0)]]

We see that this game has 3 Nash equilibria. For each, we see that the supports (the sets of non-zero entries) of both players’ strategies have the same size. This is, in fact, a theoretical certainty when games are non degenerate.

If we modify the game slightly:

sage: A = matrix([[3,3],[2,5],[0,6]])
sage: B = matrix([[3,3],[2,6],[3,1]])
sage: g = NormalFormGame([A, B])
sage: g.obtain_nash()
[[(0, 1/3, 2/3), (1/3, 2/3)], [(1, 0, 0), (2/3, 1/3)], [(1, 0, 0), (1, 0)]]

We see that the second equilibrium has supports of different sizes. In fact, if the first player did play \((1,0,0)\) (in other words just play the first row) the second player could play any mixture of strategies as a best response, not just \((2/3,1/3)\). This is because the game in consideration is now degenerate.

(Note that both of the games above are taken from Nisan et al. 2007 [pdf].)

What is a degenerate game

A bimatrix game is called nondegenerate if the number of pure best responses to a mixed strategy never exceeds the size of its support. In a degenerate game, this definition is violated, for example if there is a pure strategy that has two pure best responses (as in the example above), but it is also possible to have a mixed strategy with support size \(k\) that has \(k+1\) pure best responses.

Here is an example of this:

If we consider the mixed strategy for player 2: \(y=(1/2,1/2)\), then the utility to player 1 is given by:

We see that there are 3 best responses to \(y\) and as \(y\) has support size 2 this implies that the game above is degenerate.
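
Here is a quick way to check this condition in code; the matrix for player 1 below is made up for illustration (it is not the example above), chosen so that \(y=(1/2,1/2)\) has three pure best responses:

A = [[3, 3], [2, 4], [4, 2]]  # hypothetical payoff matrix for player 1
y = [0.5, 0.5]

# expected utility of each row (pure strategy) against y
utilities = [sum(a_ij * y_j for a_ij, y_j in zip(row, y)) for row in A]
best = max(utilities)
best_responses = [i for i, u in enumerate(utilities) if abs(u - best) < 1e-9]

support_size = sum(1 for y_j in y if y_j > 0)
# degenerate if y admits more pure best responses than the size of its support
print(len(best_responses) > support_size)  # True: 3 > 2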

What does the literature say about degenerate games

The original definition of degenerate games was given in Lemke, Howson 1964 [pdf] and their definition was dependent on the labeling polytope that they used for their famous algorithm for the computation of equilibria (which is currently being implemented in Sage!). Further to this Stengel 1999 [ps] offers a nice overview of a variety of equivalent definitions.

Sadly, all of these definitions require finding a particular mixed strategy profile \((x, y)\) for which a particular condition holds. To be able to implement a test for degeneracy based on any of these definitions would require a continuous search over possible mixed strategy pairs.

In the previous example (where we take \(y=(1/2,1/2)\)) we could have identified this \(y\) by looking at the utilities for each pure strategy of player 1 against \(y=(y_1, 1-y_1)\):

(\(r_i\) denotes row strategy \(i\) for player 1.) A plot of this is shown:

We can (in this instance) quickly search through values of \(y_1\) and identify the point that has the most best responses, which gives the best chance of satisfying the degeneracy condition (\(y_1=1/2\)). This is not really practical from a generic point of view, which leads to this blog post: we have identified the particular \(x, y\) that it is sufficient to test.

A sufficient mixed strategy to test for degeneracy

The definition of degeneracy can be written as:

Def. A Normal Form Game is degenerate iff:

There exists \(x\in \Delta X\) such that \( |S(x)| < |\sigma_2| \) where \(\sigma_2\) is the support such that \( (xB)_j = \max(xB) \), for all \(j \) in \( \sigma_2\).

OR

There exists \(y\in \Delta Y\) such that \( |S(y)| < |\sigma_1| \) where \(\sigma_1\) is the support such that \( (Ay)_i = \max(Ay) \), for all \(i \) in \( \sigma_1\).

(\(X\) and \(Y\) are the pure strategy sets for players 1 and 2, and \(\Delta X, \Delta Y\) the corresponding mixed strategy spaces.)

The result we are implementing in Sage aims to remove the need to search particular mixed strategies \(x, y\) (a continuous search) and replace that by a search over supports (a discrete search).

Theorem. A Normal Form Game is degenerate iff:

There exists \( \sigma_1 \subseteq X \) and \( \sigma_2 \subseteq Y \) such that \( |\sigma_1| < |\sigma_2| \) and \( S(x^*) = \sigma_1 \) where \( x^* \) is a solution of \( (xB)_j = \max(xB) \), for all \(j \) in \( \sigma_2 \) (note that a valid \(x^*\) is understood to be a mixed strategy vector).

OR

There exists \( \sigma_1 \subseteq X \) and \( \sigma_2 \subseteq Y \) such that \( |\sigma_1| > |\sigma_2| \) and \( S(y^*) = \sigma_2 \) where \( y^* \) is a solution of \( (Ay)_i = \max(Ay) \), for all \(i \) in \( \sigma_1 \).

Using the definition given above the proof is relatively straightforward but we will include it below (mainly to try and convince ourselves that we haven’t made a mistake).

We will only consider the first part of each condition (the ones for the first player). The result follows in the same way for the second player.

Proof \(\Rightarrow\)

Assume a game defined by \(A, B\) is degenerate. By the above definition, without loss of generality, this implies that there exists an \(x\in \Delta X\) such that \( |S(x)| < |\sigma_2| \) where \(\sigma_2\) is the support such that \( (xB)_j = \max(xB) \), for all \(j \) in \( \sigma_2\).

If we denote \(S(x)\) by \(\sigma_1\) then the definition implies that \(|\sigma_1| < |\sigma_2| \) and furthermore that \( (xB)_j = \max(xB) \), for all \(j \) in \( \sigma_2 \), as required.

Proof \(\Leftarrow\)

If we now assume that we have \(\sigma_1, \sigma_2, x^*\) as per the first part of the theorem then we have \(|\sigma_1|<|\sigma_2|\) and taking \(x=x^*\) implies that \(|S(x)|<|\sigma_2|\). Furthermore as \(x^*\) is a solution of \( (xB)_j = \max(xB) \) the result follows (by the definition given above).

Implementation

This result implies that we simply need to consider all potential pairs of supports. Depending on the relative size of the supports we can use one of the two conditions of the result. If we order the supports by size, the situation for the two player game looks somewhat like this:

Note that for an \(m\times n\) game there are \((2^m-1)\) potential supports for player 1 (the size of the power set of the strategy set without the empty set) and \((2^n-1)\) potential supports for player 2. Thus the rectangle drawn above has dimension \((2^m-1)\times(2^n-1)\). Needless to say, our implementation will not be efficient (testing degeneracy is, after all, an NP-complete problem in linear programming; see Chandrasekaran 1982 [pdf]), but at least we have identified exactly which mixed strategy we need to test for each support pair.
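
To make this concrete, here is a rough sketch in plain Python/numpy of a brute-force check of the first condition of the theorem. The function name and the least-squares solve are my own choices for illustration (this is not the code going into Sage); in particular lstsq only returns one particular solution of the equalising system, so a proper implementation would use a linear-programming formulation instead:

from itertools import combinations
import numpy as np

def condition_one_holds(B, tol=1e-8):
    """Look for supports sigma_1 (rows) and sigma_2 (columns) with
    |sigma_1| < |sigma_2| and an x supported exactly on sigma_1 making
    every column of sigma_2 a best response (the first condition above)."""
    m, n = B.shape
    for k2 in range(2, n + 1):
        for sigma_2 in combinations(range(n), k2):
            for k1 in range(1, k2):
                for sigma_1 in combinations(range(m), k1):
                    # Unknowns: x_i for i in sigma_1 and the common value v.
                    # Equations: sum(x) = 1 and (xB)_j = v for j in sigma_2.
                    M = np.zeros((k2 + 1, k1 + 1))
                    rhs = np.zeros(k2 + 1)
                    M[0, :k1] = 1
                    rhs[0] = 1
                    for row, j in enumerate(sigma_2, start=1):
                        M[row, :k1] = B[list(sigma_1), j]
                        M[row, k1] = -1
                    sol = np.linalg.lstsq(M, rhs, rcond=None)[0]
                    if np.linalg.norm(M.dot(sol) - rhs) > tol:
                        continue  # the equalising system is inconsistent
                    x = np.zeros(m)
                    x[list(sigma_1)] = sol[:k1]
                    v = sol[k1]
                    if (x[list(sigma_1)] > tol).all() and (x.dot(B) <= v + tol).all():
                        return True
    return False

# Example: the degenerate game from earlier in the post.
B = np.array([[3, 3], [2, 6], [3, 1]])
print(condition_one_holds(B))  # True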

References

  • Chandrasekaran, R., Santosh N. Kabadi, and Katta G. Murthy. “Some NP-complete problems in linear programming.” Operations Research Letters 1.3 (1982): 101-104. [pdf]
  • Lemke, Carlton E., and Joseph T. Howson, Jr. “Equilibrium points of bimatrix games.” Journal of the Society for Industrial & Applied Mathematics 12.2 (1964): 413-423. [pdf]
  • Nisan, Noam, Tim Roughgarden, Éva Tardos, and Vijay V. Vazirani, eds. Algorithmic Game Theory. Cambridge: Cambridge University Press, 2007. [pdf]
  • von Stengel, B. “Computing equilibria for two person games.” Technical report. [ps]

June 25, 2015 12:00 AM

June 14, 2015

Vince Knight

Python, natural language processing and predicting funny

Every year there is a big festival in Edinburgh called the fringe festival. I blogged about this a while ago; in that post I did a very basic bit of natural language processing aiming to try and identify what made things funny. In this blog post I’m going to push that a bit further by building a classification model that aims to predict if a joke is funny or not. (tldr: I don’t really succeed, but that’s mainly because I have very little data - having more data would not necessarily guarantee success either, but the code and approach are what’s worth taking from this post… 😪).

If you want to skip the brief description and go straight to look at the code you can find the ipython notebook on github here and on cloud.sagemath here.

The data comes from a series of BBC articles which reports (more or less every year since 2011?) the top ten jokes at the fringe festival. This does in fact only give 60 odd jokes to work with…

Here is the latest winner (by Tim Vine):

I decided to sell my Hoover… well it was just collecting dust.

After cleaning it up slightly I’ve thrown that all in a json file here. So in order to import the data in to a pandas data frame I just run:

import pandas
df = pandas.read_json('jokes.json') # Loading the json file

Pandas is great: I’m used to creating my own bespoke classes for handling data, but in general just using pandas does exactly the right job. At this point I basically follow along with this post on sentiment analysis of twitter which makes use of the ridiculously powerful nltk library.

We can use the nltk library to ‘tokenise’ and get rid of common words:

import nltk  # the stopwords corpus needs downloading first: nltk.download()

commonwords = [e.upper() for e in set(nltk.corpus.stopwords.words('english'))]
commonwords.extend(['M', 'VE'])  # Adding a couple of things that need to be removed
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')  # To be able to strip out unwanted things in strings
string_to_list = lambda x: [el.upper() for el in tokenizer.tokenize(x) if el.upper() not in commonwords]
df['Joke'] = df['Raw_joke'].apply(string_to_list)

Note that this requires downloading one of the awesome corpuses (that’s apparently the right way to say that) from nltk.

Here is how this looks:

joke = 'I decided to sell my Hoover... well it was just collecting dust.'
string_to_list(joke)

which gives:

['DECIDED', 'SELL', 'HOOVER', 'WELL', 'COLLECTING', 'DUST']

We can now get started on building a classifier

Here is the general idea of what will be happening:

First of all we need to build up the ‘features’ of each joke, in other words pull the words out in to a nice easy format.

To do that we need to find all the words from our training data set, another way of describing this is that we need to build up our dictionary:

df['Year'] = df['Year'].apply(int)

def get_all_words(dataframe):
    """
    A function that gets all the words from the Joke column in a given dataframe
    """
    all_words = []
    for jk in dataframe['Joke']:
        all_words.extend(jk)
    return all_words

all_words = get_all_words(df[df['Year'] <= 2013])  # This uses all jokes up to and including 2013 as our training data set.

We then build something that will tell us for each joke which of the overall words is in it:

def extract_features(joke, all_words):
    words = set(joke)
    features = {}
    for word in words:
        features['contains(%s)' % word] = (word in all_words)
    return features

Once we have done that, we just need to decide what we will call a funny joke. For this purpose we’ll use a funny_threshold and any joke that ranks within the top funny_threshold in any given year will be considered funny:

funny_threshold = 5
df['Rank'] = df['Rank'].apply(int)
df['Funny'] = df['Rank'] <= funny_threshold

Now we just need to create a tuple for each joke that puts the features mentioned earlier and a classification (if the joke was funny or not) together:

df['Features'] = df['Joke'].apply(lambda joke: extract_features(joke, all_words))  # build the feature dictionaries
df['Labeled_Feature'] = zip(df['Features'], df['Funny'])

We can now (in one line of code!!!!) create a classifier:

classifier = nltk.NaiveBayesClassifier.train(df[df['Year'] <= 2013]['Labeled_Feature'])

This classifier will take into account all the words in a given joke and spit out if it’s funny or not. It can also give us some indication as to what makes a joke funny or not:

classifier.show_most_informative_features(10)

Here is the output of that:

Most Informative Features
     contains(GOT) = True   False : True   =  2.4 : 1.0
    contains(KNOW) = True    True : False  =  1.7 : 1.0
  contains(PEOPLE) = True   False : True   =  1.7 : 1.0
     contains(SEX) = True   False : True   =  1.7 : 1.0
   contains(NEVER) = True   False : True   =  1.7 : 1.0
      contains(RE) = True    True : False  =  1.6 : 1.0
  contains(FRIEND) = True    True : False  =  1.6 : 1.0
     contains(SAY) = True    True : False  =  1.6 : 1.0
  contains(BOUGHT) = True    True : False  =  1.6 : 1.0
     contains(ONE) = True    True : False  =  1.5 : 1.0

This immediately gives us some information:

  • If your joke is about SEX it is more likely to not be funny.
  • If your joke is about FRIENDs it is more likely to be funny.

That’s all very nice but we can now (theoretically - again, I really don’t have enough data for this) start using the mathematical model to tell you if something is funny:

joke = 'Why was 10 afraid of 7? Because 7 8 9'
classifier.classify(extract_features(string_to_list(joke), get_all_words(df[df['Year'] <= 2013])))

That joke is apparently funny (the output of the above is True). The following joke however is apparently not (the output of the below is False):

joke = 'Your mother is ...'
print classifier.classify(extract_features(string_to_list(joke), get_all_words(df[df['Year'] <= 2013])))

As you can see in the ipython notebook it is then very easy to measure how good the predictions are (I used the data from years up to and including 2013 to predict 2014).
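
For completeness, a rough sketch of what that measurement could look like (the exact code lives in the linked notebook; this assumes the df['Features'] column built above):

test = df[df['Year'] > 2013]
predictions = [classifier.classify(features) for features in test['Features']]
correct = sum(prediction == actual for prediction, actual in zip(predictions, test['Funny']))
accuracy = correct / float(len(test))
print accuracy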

Results

Here is a plot of the accuracy of the classifier for changing values of funny_threshold:

You’ll notice a couple of things:

  • When the threshold is 0 or 1: the classifier works perfectly. This makes sense: all the jokes are either funny or not so it’s very easy for the classifier to do well.
  • There seem to be a couple of regions where the classifier does particularly poorly: just after a value of 4. Indeed there are points where the classifier does worse than flipping a coin.
  • At a value of 4, the classifier does particularly well!

Now, one final thing I’ll take a look at is what happens if I start randomly selecting a portion of the entire data set to be the training set:

Below are 10 plots that correspond to 50 repetitions of the above where I randomly sample a ratio of the data set to be the training set:

Finally (although it’s really not helpful), here are all of those on a single plot:

First of all: all those plots are basically one line of seaborn code which is ridiculously cool. Seaborn is basically magic:

sns.tsplot(data, steps)

Second of all, it looks like the lower bound of the classifiers is around .5. Most of them start off at .5; in other words they are as good as flipping a coin before we let them learn from anything, which makes sense. Finally, the threshold-4 classifier seems to be the only one that gradually improves as more data is given to it. That’s perhaps an indication that something interesting is happening there, but that investigation would be for another day.

All of the conclusions about the actual data should certainly not be taken seriously: I simply do not have enough data. But, the overall process and code is what is worth taking away. It’s pretty neat that the variety of awesome python libraries lets you do this sort of thing more or less out of the box.

Please do take a look at this github repository but I’ve also just put the notebook on cloud.sagemath so assuming you pip install the libraries and get the data etc you can play around with this right in your browser:

Here is the notebook on cloud.sagemath.

June 14, 2015 12:00 AM

June 09, 2015

Michele Borassi

Performance Comparison of Different Graph Libraries

As promised in the last post, I have compared the performances of several graph libraries, in order to choose which ones should be deployed with Sagemath. Here, I provide the main results of this analysis, while more details are available on my website (see also the links below).
The libraries chosen are the most famous graph libraries written in Python, C, or C++ (I have chosen these languages because they are easier to integrate in Sagemath, using Cython). Furthermore, I have excluded NetworkX, which is already deployed with Sagemath.
First of all, I have to stress that no graph library comparison can be completely fair, and this comparison, too, can be criticized, due to the large number of available routines, to the constant evolution of libraries, and to many small differences in the outputs (for instance, one library might compute the value of a maximum s-t flow, another library might actually compute the flow, and a third one might compute all maximum flows). Despite this, I have tried to be as fair as possible, through a deeper and more detailed analysis than previous comparisons (https://graph-tool.skewed.de/performance, http://www.programmershare.com/3210372/, http://arxiv.org/pdf/1403.3005.pdf).
The first comparison deals with the number of algorithms implemented. I have chosen a set of 107 possible algorithms, trying to cover all possible tasks that a graph library should perform (avoiding easy tasks that are common to all libraries, like outputting the number of nodes, the number of edges, the neighbors of a node, etc). In some cases, two tasks were collapsed into one, if the algorithms solving these tasks are very similar (for instance, computing a maximum flow and computing a minimum cut, computing vertex betweenness and edge betweenness, etc).
The number of routines available for each library is plotted in the following chart, and a table containing all features is available in HTML or as a Google Sheet.

The results show that Sagemath has more routines than all competitors (66), closely followed by igraph (62). All other libraries are very close to each other, having about 30 routines each. Furthermore, Sagemath could be improved in the fields of neighbor similarity measures (assortativity, bibcoupling, cocitation, etc), community detection, and random graph generators. For instance, igraph contains 29 routines that are not available in Sagemath.

The second comparison analyzes the running-time of some of the algorithms implemented in the libraries. In particular, I have chosen 8 of the most common tasks in graph analysis: computing the diameter, computing the maximum flow between two vertices, finding connected components and strongly connected components, computing betweenness centrality, computing the clustering coefficient, computing the clique number, and generating a graph with the preferential attachment model. I have run each of these algorithms on 3 inputs, and I have considered the total execution time (excluding the time needed to load the graph). More details on this experiment are available here, and the results are also available in a Google Sheet.
In order to make the results more readable, I have plotted the ratio between the time needed by a given library and the minimum time needed by any library. If an algorithm was not implemented, or it needed more than 3 hours to complete, the corresponding bar is not shown.

Overall, the results show that NetworKit is the fastest library, or one of the fastest, in all routines that are implemented (apart from the generation of preferential attachment graphs, where it is very slow). The Boost graph library is very close to NetworKit, and it also contains more routines. Sagemath is also quite efficient in all tasks, apart from the computation of strongly connected components and the generation of a preferential attachment graph, where it needed more than 3 hours. However, in the latter case, the main problem was not speed but memory consumption.

In conclusion, Sagemath can benefit greatly from the possibility of using algorithms from other libraries. First of all, it might increase the number of algorithms offered, especially by including igraph, and it also might improve its performance, by including Boost, NetworKit, or other fast graph libraries.

by Michele Borassi (noreply@blogger.com) at June 09, 2015 07:30 AM

June 04, 2015

Michele Borassi

Comparison of Graph Libraries

Many times, people asked me "Which is the best available graph library?", or "Which graph library should I use to compute this, or that?".
Well, personally I love to use Sage, but there are also several good alternatives. Then, the question becomes "How could we improve Sage, so that people will choose it?".

In my opinion, graph libraries are compared according to the following parameters:
  1. simplicity and documentation: people have little time, and the faster they learn how to use the library, the better;
  2. number of routines available;
  3. speed: sometimes, the input is very big, and the algorithms take a long time to finish, so that a fast implementation is fundamental.
While it is very difficult to measure the first point, the others can be compared and improved. For this reason, in order to outperform other libraries, we should implement new features, and improve existing ones. You don't say!

However, this answer is not satisfactory: in principle, we could add all features available in other libraries, but this would be a huge translation effort, and while we are doing this work the other libraries will change, making this effort a never-ending story.

My project proposes an alternative: cooperating instead of competing. I will try to interface Sage with other libraries, and to use their algorithms when the Sage counterpart is not available, or less efficient. This way, with an affordable amount of work, we will be able to run all algorithms available in the best graph libraries!

As a first step, I have compared all the most famous C, C++, and Python graph libraries according to points 2 and 3, in order to choose which libraries should be included. The next posts will analyze the results of this comparison.

by Michele Borassi (noreply@blogger.com) at June 04, 2015 02:11 AM

Google Summer of Code: let's start!

This blog will follow my Google Summer of Code project, entitled Performance Improvements for the Graph Module of Sagemath. The complete project is available here, and related documents with partial results will be available on the same website.
In this first post, I would like to thank my mentor David Coudert and Nathann Cohen, who helped me a lot in writing this project and understanding how the graph module of Sagemath works.
With their help, and with the help of the Sage community, I hope it will be a useful and fun piece of work! Let's start!

by Michele Borassi (noreply@blogger.com) at June 04, 2015 01:10 AM

May 29, 2015

Benjamin Hackl

Asymptotic Expressions: Motivation

\( \def\R{\mathbb{R}} \)So, as Google Summer of Code started on Monday, May 25th it is time to give a proper motivation for the project I have proposed. The working title of my project is (Multivariate) Asymptotic Expressions, and its overall goal is to bring asymptotic expressions to SageMath.

What are Asymptotic Expressions?

A motivating answer for this question comes from the theory of Taylor series. Assume that we have a sufficiently nice (in this case meaning smooth) function $f : \R \to \R$ that we want to approximate in a neighborhood of some point $x_0 \in \R$. Taylor’s theorem allows us to write $f(x) = T_n(x) + R_n(x)$ where

\[ T_n(x) = \sum_{j=0}^n \frac{f^{(j)}(x_0)}{j!}\cdot (x-x_0)^j = f(x_0) + f'(x_0)\cdot (x-x_0) + \cdots + \frac{f^{(n)}(x_0)}{n!}\cdot (x-x_0)^n,  \]

and $R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \cdot (x-x_0)^{n+1}$, where $\xi$ lies in a neighborhood of $x_0$. Note that for $x\to x_0$, $R_n(x)$ “behaves like” $(x-x_0)^{n+1}$. In particular, we can certainly find a constant $C > 0$ such that $|R_n(x)| \leq C\cdot |x-x_0|^{n+1}$, or, in other words: for $x\to x_0$ the growth of the function $R_n(x)$ is bounded from above by the growth of $(x-x_0)^{n+1}$.

The idea of bounding the growth of a function by the growth of another function when the argument approaches some number (or $\infty$) is the central idea behind the big O notation: for functions $f, g : \R \to \R$ we write $f(x) = O(g(x))$ for $x\to x_0$ if there is a constant $C > 0$ such that $|f(x)| \leq C\cdot |g(x)|$ for all $x$ in some neighborhood of $x_0$.

A case that is particularly important is the case of $x_0 = \infty$, that is if we want to compare and/or characterize the behavior of some function for $x\to\infty$, which is also called the function’s asymptotic behavior. For example, consider the functions $\log x$, $x^3$ and $e^x$. All of them grow unboundedly for $x\to\infty$ — however, their asymptotic behavior differs. This can be seen by considering pairwise quotients of these functions: $\frac{x^3}{e^x} \to 0$ for $x\to\infty$, and therefore the asymptotic growth of $x^3$ can be bounded above by the growth of $e^x$, meaning $x^3 = O(e^x)$ for $x\to\infty$.

The analysis of a function’s asymptotic behavior is important for many applications, for example when determining the time and space complexity of algorithms in computer science, or for describing the growth of classes of combinatorial objects: take, for example, binary strings of length $2n$ that contain equally many zeros and ones. If $s_n$ denotes the number of such strings, then we have

\[ s_n = \binom{2n}{n} = \frac{4^n}{\sqrt{n\pi}} \left(1 + O\left(\frac{1}{n}\right)\right) \quad\text{ for } n\to\infty. \]

Expressions like these are asymptotic expressions. When we consider asymptotic expressions in only one variable, everything works out nicely as a total order is induced. But as soon as multiple variables are involved, we don’t have a total order any more. Consider, for example, $x^2 y$ and $xy^2$ when $x$ and $y$ approach $\infty$. These two elements cannot be compared to each other, which complicates computing with these expressions as they may contain multiple “irreducible” O-terms.

The following univariate and multivariate examples shall demonstrate what computing with such expressions looks like (all variables are assumed to go to $\infty$):

\[ x + O(x) = O(x),\quad x^2 \cdot (x + O(1)) = x^3 + O(x^2),\quad O(x^2) \cdot O(x^3) = O(x^5),  \]

\[ x y + O(x^2 y) = O(x^2y),\quad (y \log y + O(y)) (x^2 y + O(4^x \sqrt{x})) =  x^2 y^2 \log y + O(x^2 y^2) + O(4^x \sqrt{x} y \log y).   \]

Our plan is to provide an implementation based on which computations with these and more complicated expressions are possible.
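
To illustrate the absorption idea in the simplest possible setting, here is a toy, univariate-only sketch in plain Python (this is not the planned SageMath interface, just a mock-up of the behaviour described above):

class OTerm(object):
    """A toy O(x^exponent) term for x -> infinity."""
    def __init__(self, exponent):
        self.exponent = exponent

    def can_absorb(self, other):
        # O(x^a) absorbs anything growing at most like x^a
        return other.exponent <= self.exponent

    def __mul__(self, other):
        # O(x^a) * O(x^b) = O(x^(a+b))
        return OTerm(self.exponent + other.exponent)

    def __repr__(self):
        return "O(x^%s)" % self.exponent

print(OTerm(2) * OTerm(3))            # O(x^5)
print(OTerm(2).can_absorb(OTerm(1)))  # True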

Planned Structure

There are four core concepts of our implementation.

  • Asymptotic Growth Groups: These are multiplicative groups that contain growth elements like $x^2$, $\log x$, $2^x \cdot x \cdot \log x$. For starters, only univariate power growth groups will be implemented.
  • Asymptotic Term Monoids: These monoids contain asymptotic terms — in essence, the summands of asymptotic expressions. Apart from exact term monoids (growth elements with a coefficient), we will also implement O-term monoids as well as a term monoid for a variation of O-terms. Asymptotic terms have (in addition to their group operation, multiplication) absorption as an additional operation: for example, O-terms are able to absorb all asymptotically “smaller” elements.
  • Mutable Poset: As we have mentioned above, due to the fact that multivariate asymptotic expressions do not have a total order with respect to their growth, we need a partially ordered set (“Poset”) that deals with this structure such that operations like absorbing terms can be performed efficiently. The mutable poset is the central data structure that asymptotic expressions are built upon.
  • Asymptotic Ring: This is our top-level structure which is also supposed to be the main interaction object for users. The asymptotic ring contains the asymptotic expressions, i.e. intelligently managed sums of asymptotic terms. All common operations shall be possible here. Furthermore, the interface should be intelligent enough such that admissible expressions from the symbolic ring can be directly converted into elements of the asymptotic ring.

Obviously, this “planned structure” is rather superficial. However, this is only to supplement the motivation for my project with some ideas on the implementation. I’ll go a lot more into the details of what I am currently implementing in the next few blog posts!

 

by behackl at May 29, 2015 01:34 AM

May 27, 2015

William Stein

Guiding principles for SageMath, Inc.

In February of this year (2015), I founded a Delaware C Corporation called "SageMath, Inc.".  This is a first stab at the guiding principles for the company.    It should help clarify the relationship between the company, the Sage project, and other projects like OpenDreamKit and Jupyter/IPython.

Company mission statement:

Make open source mathematical software ubiquitous.
This involves both creating the SageMathCloud website and supporting the development and distribution of SageMath and other software, including Jupyter, Octave, Scilab, etc. Anything open source.

Company principles:

  • Absolutely all company funded software must be open source, under a GPLv3 compatible license. We are a 100% open source company.
  • Company independence and self-determination is far more important than money. A core principle is that SMI is not for sale at any price, and will not participate in any partnership (for cost) that would restrict our freedom. This means:
    • reject any offers from corporate development at big companies to purchase or partner,
    • do not take any investment money unless absolutely necessary, and then only from the highest quality investors
    • do not take venture capital ever
  • Be as open as possible about everything involving the company. What should not be open (since it is dangerous):
    • security issues, passwords
    • finances (which could attract trolls)
    • private user data
What should be open:
  • aggregate usage data, e.g., number of users.
  • aggregate data that could help other open source projects improve their development, e.g., common problems we observe with Jupyter notebooks should be provided to their team.
  • guiding principles

Business model

  • SageMathCloud is freemium with the expectation that 2-5% of users pay.
  • Target audience: all potential users of cloud-based math-related software.

SageMathCloud mission

Make it as easy as possible to use open source mathematical software in the cloud.
This means:
  • Minimize onboard friction, so in less than 1 minute, you can create an account and be using Sage or Jupyter or LaTeX. Moreover, the UI should be simple and streamlined specifically for the tasks, while still having deep functionality to support expert users. Also, everything persists and can be sorted, searched, used later, etc.
  • Minimize support friction, so one click from within SMC leads to a support forum, an easy way for admins to directly help, etc. This is not at all implemented yet. Also, a support marketplace where experts get paid to help non-experts (tutoring, etc.).
  • Minimize teaching friction, so everything involving software related to teaching a course is as easy as possible, including managing a list of students, distributing and collecting homework, and automated grading and feedback.
  • Minimize pay friction, sign up for a $7 monthly membership, then simple clear pay-as-you-go functionality if you need more power.

by William Stein (noreply@blogger.com) at May 27, 2015 01:03 PM

May 23, 2015

Benjamin Hackl

Google Summer of Code — Countdown

Today I received the welcome package for attending this year’s “Google Summer of Code”! Actually, it’s pretty cool; the following things were included:

  • a blue notebook with a monochromatic GSoC 15 logo (in dark blue) printed on it
  • a sticker with a colored GSoC 15 logo
  • a pen that is both a blue ballpoint pen as well as a mechanical pencil (0.5)

Here is a photo of all this stuff:

gsoc_welcome

 

The work on our project (multivariate) Asymptotic Expressions (in cooperation with Daniel Krenn and Clemens Heuberger) begins (or rather continues) on Monday, the 25th of May. Over the course of next week (probably in an $\varepsilon$-neighborhood of Monday) I will blog about the status quo, as well as about the motivation for the project. :-)

by behackl at May 23, 2015 01:58 AM

May 04, 2015

Vince Knight

Code on cake, poker and a number theory classification web app

I have just finished writing feedback and obtaining marks for my first year students’ presentations. These presentations follow 11 weeks during which students formed companies and worked together to come up with a ‘product’ which had to involve mathematics and code (this semester comes just after 11 weeks of learning Python and Sage). In this post I’ll briefly describe some of the great things that the students came up with.

I must say that I was blown away by the standard this year. Last year the students did exceptionally well but this year the standard was even higher, I am so grateful for the effort put in by more or less everyone.

Some of the great projects included:

  • A website that used a fitted utility function (obtained from questioning family, friends, flatmates) to rank parking lots in terms of price and distance from a given venue (the website was written in Django and the function fitted using Sage).

  • A commando training app, with an actual reservist marine who is a student of ours:

  • A story based game with an original storyline stemming from the zodiac. The presentation culminated in Geraint, Jason and I (who were the audience) retaliating to their Nerf gun attack with our (hidden under the desk) Nerf guns (we had a hunch that this group would ambush us…). The game mechanics itself was coded in pure Python and the UI was almost written in Django (that was the goal but they didn’t have the time to fully implement it).

  • A Django site that had a graphical timeline of mathematics (on click you had access to a quizz and info etc…). This was one I was particularly excited about as it’s a tool I would love to use.

  • An outreach/educational package based around cryptography. They coded a variety of cyphers in Python and also put together an excellent set of teaching resources with really well drawn characters etc… They even threw in my dog Auraya (the likeness of the drawing is pretty awesome :)):

  • I ask my students to find an original way of showcasing their code. I don’t actually know the right answer to that ‘challenge’. Most students showcase the website and/or app, some will talk me through some code but this year one group did something quite frankly awesome: code on cake. Here’s some of the code they wrote for their phone app (written with kivy):

  • One group built a fully functioning and hosted web app (after taking a look at Django they decided that Flask was the way to go for this particular tool). Their app takes in a natural number and classifies it against a number of categories; go ahead and try it right now: Categorising Numbers

  • One of the more fun presentations was for a poker simulation app that uses a prime number representation of a hand of poker to simulate all possible outcomes of a given state. This works remarkably fast and immediately spits out (with neat graphics of the cards) the probability of winning given the current cards. As well as being an impressive app, the students presented it very well and invited me to play a game of poker (I lost, their mark was not affected…):

    Here are a couple of screen shots of the app itself:

    Home screen:

    The input card screen:

I am missing out a bunch of great projects (including an impressive actual business that I will be delighted to talk about more when appropriate). I am very grateful to the efforts put in by all the students and wish them well during their exams.

May 04, 2015 12:00 AM

April 06, 2015

Vince Knight

My 5 reasons why jekyll + github is a terrible teaching tool.

For the past year or so I have been using jekyll for all my courses. If you do not know, in a nutshell, jekyll is a ruby framework that lets you write templates for pages and build nice websites using static markdown files for your content. Here I will describe what I think of jekyll from a pedagogic point of view, in 5 main points.

1. Jekyll is terrible because the tutorial is too well written and easy to follow.

First of all, as an academic I enjoy when things are difficult to read and follow. The Jekyll tutorial can get you up and running with a jekyll site in less than 5 minutes. It is far too clear and easy to follow. This sort of clear and to the point explanation is very dangerous from a pedagogic point of view as students might stumble upon it and raise their expectations of the educational process they are going through.

In all seriousness, the tutorial is well written and clear, with a basic knowledge of the command line you can modify the base site and have a website deployed in less than 10 minutes.

2. Jekyll is terrible because it works too seamlessly with github.

First of all gh-pages takes care of the hosting. Not having to use a complicated server saves far too much time. As academics we have too much free time already, I do not like getting bored.

Github promotes the sharing and openness of code, resources and processes. Using a jekyll site in conjunction with github means that others can easily see and comment on all the materials as well as potentially improve them. This openness is dangerous as it ensures that courses are living and breathing things as opposed to a set of notes/problem sheets that sit safely in a drawer somewhere.

The fact that jekyll uses markdown is also a problem. On github anyone can easily read and send a pull request (which improves things) without really knowing markdown (let alone git). This is very terrible indeed, here for example is a pull request sent to me by a student. The student in question found a mistake in a question sheet and asked me about it, right there in the lab I just said ‘go ahead and fix it :)’ (and they did). Involving students in the process of fixing/improving their course materials has the potential for utter chaos. Furthermore normalising mistakes is another big problem: all students should be terrified of making a mistake and/or trying things.

Finally, having a personal site as a github project gives you a site at the following url:

username.github.io

By simply having a gh-pages branch for each class site, this will automatically be served at:

username.github.io/class-site

This is far too sensible and flexible. Furthermore the promotion of decentralisation of content is dangerous. If one of my class sites breaks: none of my others will be affected!!! How can I expect any free time with such a robust system? This is dangerously efficient.

3. Jekyll is terrible because it is too flexible.

You can (if you want to) include:

  • A disqus.com board in the template for a page, which means that students can easily comment and talk to you about materials. Furthermore you can also use this to add things to your materials in a discussion-based way; for example I have been able, far too easily, to add a picture of a whiteboard explaining something students have asked.

  • Mathjax. With some escaping this works out of the box. Being able to include nicely rendered mathematics misaligns students’ expectations as to what is on the web.

  • Sage cells can be easily popped in to worksheets allowing students to immediately use code to illustrate/explain a concept.

and various others: you can just include any html/javascript etc…

This promotion of interactive and modern resources by Jekyll is truly terrible as it gets students away from what teaching materials should really be about: dusty notes in the bottom of a drawer (worked fine for me).

The flexibility of Jekyll is also really terrible as it makes me forget the restrictions imposed on me by whatever VLE we are supposed to use. This is making me weak and soft, when someone takes the choice away from me and I am forced to use the VLE, I most probably won’t be ready.

(A jekyll + github setup also implies that a wiki immediately exists for a page and I am also experimenting with a gitter.im room for each class.)

4. Jekyll is terrible because it gives a responsive site out of the box.

Students should consume their materials exactly when and how we want them to. The base jekyll site comes with a basic responsive framework; here is a photo of one of my class sheets (which also again shows the disgustingly beautifully rendered mathematics):

This responsive framework works right out of the box (you can also obviously use further frameworks if you want to, see my point about flexibility) from the tutorial and this encourages students to have access to the materials on whatever platform they want whenever they want. This cannot be a good thing.

5. Jekyll is terrible because it saves me too much time.

The main point that is truly worrying about jekyll is how much time it saves me. I have mentioned this before, as academics we need to constantly make sure we do not get bored. Jekyll does not help with this.

I can edit my files using whatever system I want (I can even do this on github directly if I wanted to), I push and the website is up to date.

In the past I would have a lot of time taken up by compiling a LaTeX document and uploading to our VLE. I would sit back and worry about being bored before realising (thankfully) that I had a typo and so needed to write, delete and upload again.

Furthermore, I can easily use the github issue tracker to keep on top of to do lists etc… (which I am actually beginning to do for more or less every aspect of my life). TAs can also easily fix/improve minor things without asking me to upload whatever it is they wrote.

Github + Jekyll works seamlessly and ensures that I have more time to respond to student queries and think. This time for reflection on teaching practice is dangerous: I might choose to do things differently than how they have been done for the past 100 years.

(In case my tone is unclear: I am such a huge jekyll fan and think it is a brilliant pedagogic tool. There might well be various other static site generators and other options so please do comment about them below :))

April 06, 2015 12:00 AM

March 25, 2015

Vince Knight

A one week flipped learning environment to introduce Object Oriented Programming

This post describes a teaching activity that is run for the Cardiff MSc. programmes. The activity revolves around a two day hackathon that gets students to use Python and object oriented programming to solve a challenge. The activity is placed within a flipped learning environment and makes use of what I feel is a very nice form of assessment (we just get to know the students).

This year is the third installment of this exercise which came as a result of the MSc advisory board requesting that object oriented programming be introduced to our MSc.

Before describing the activity itself let me just put this simple diagram that describes the flipped learning environment here (if you would like more info about it be sure to talk to Robert Talbert who has always been very helpful to me):

Description of what happens

After 3 iterations and a number of discussions about the format with Paul Harper (the director of the MSc) I think the last iteration is pretty spot on and it goes something like this:

Monday: Transfer of content

We give a brief overview of Python (you can see the slides here) up until and including basic syntax for classes.

Tuesday + Wednesday: Nothing

Students can, if they want to, read up about Python, look through videos at the website and elsewhere, look through past challenges etc… This is in effect when the knowledge transfer happens.

Thursday: Flying solo followed by feedback

Students are handed a challenge of some sort (you can see the past two here). Students work in groups of 4, attempting to solve the problem. On this day, the two postgrads (Jason and Geraint) and myself observe the groups. When we are asked questions we in general ask questions back. This sometimes leads to a fair bit of frustration but it is this difficult process that makes the rest of the exercise worthwhile.

Here is a photo of some of the groups getting to work:

At the very end of the day we hold feedback sessions (starting at 1600, for about 30 minutes with each group). During these feedback sessions we go through the code written by each group in detail, highlighting things they are having difficulty with and agreeing on a course of action for the next day. This is the point at which the class ‘flips’ so to speak: transfer of content is done and difficulties are identified and conceptualised.

Here is a photo of Jason, Geraint and I at the end of a very long day after the feedback sessions:

The other point of this day is that we start our continuous assessment: taking notes and discussing how each group is doing:

  • Where are they progress wise?
  • What difficulties do we need to look out for?
  • How are the groups approaching the problem and working together?

Here you can see a photo of Jason in front of the board that we fill up over the 2 days with notes and comments:

Friday: Sprint finish with more assistance

On the second/last day students are given slightly more assistance from Jason, Geraint and I but are still very much left to continue with their hard work. The main difference being that when students ask questions we sometimes answer them.

Here is one group who managed to crack something quite difficult on the second day:

The final part of this day is to round all the students together and announce the marks, which brings us nicely to the assessment part of this activity.

Assessment

I really enjoy assessing this activity. This is not something I say about assessment very often, but we are continuously assessing the students and are able to gain a true idea of how they do. The final piece of code is not what everything is marked on as it is in essence not terribly important.

Here is a photo of the team who did the best this year:

If I could sit with students over the 11 week period of the other courses I teach and get to know them and assess them that way, that is indeed how I would do it.

 Summary

Here is a summary of how I feel this activity fits in the original diagram I had:

As you can see despite ‘being in contact’ with students for most of Thursday I would not consider this contact time in the usual sense as most of that contact is part of the assessment.

This is always a very fun (and exhausting) two days and I look forward to next year.

March 25, 2015 12:00 AM

March 24, 2015

Vince Knight

Marrying toys and students

In class yesterday we took a look at matching games. These are sometimes referred to as stable marriage problems. To have some data for us to play with I asked for some volunteers to marry. Sadly I apparently am not allowed to ask students to rank each other in class and I also do not have the authority to marry. So, like last year I used some of my office toys and asked students to rank them.

I brought three toys to class:

  • The best ninja turtle: Donatello
  • A tech deck
  • A foam football

I asked 3 students to come down and rank them and in turn I let the toys rank the students.

We discussed possible matchings with some great questions such as:

“Are we trying to make everyone as happy as possible?”

The answer to that is: no. We are simply trying to ensure that no one has an incentive to deviate from their current matching by breaking their match for someone they prefer and who also prefers them.

Here is the stable matching we found together:

Note that we can run the Gale-Shapley algorithm using Sage:
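
For anyone curious about how the algorithm works, here is a plain Python sketch of Gale-Shapley (deferred acceptance) with made-up preference lists (not the rankings from class):

student_prefs = {'S1': ['Donatello', 'Tech deck', 'Football'],
                 'S2': ['Tech deck', 'Donatello', 'Football'],
                 'S3': ['Donatello', 'Football', 'Tech deck']}
toy_prefs = {'Donatello': ['S2', 'S1', 'S3'],
             'Tech deck': ['S1', 'S2', 'S3'],
             'Football': ['S3', 'S1', 'S2']}

free_students = list(student_prefs)
next_choice = dict((s, 0) for s in student_prefs)  # next toy each student proposes to
matched = {}  # toy -> student

while free_students:
    s = free_students.pop(0)
    t = student_prefs[s][next_choice[s]]
    next_choice[s] += 1
    if t not in matched:
        matched[t] = s
    elif toy_prefs[t].index(s) < toy_prefs[t].index(matched[t]):
        free_students.append(matched[t])  # the toy trades up
        matched[t] = s
    else:
        free_students.append(s)           # rejected; will propose to the next toy

print(matched)  # a stable matching of toys to students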

The 3 students got to hold on to the toys for the hour and I was half expecting the football to be thrown around but sadly that did not happen. Perhaps next year.

March 24, 2015 12:00 AM

March 23, 2015

Vince Knight

Cooperative basketball in class

Today in class we repeated the game we played last year. 3 teams of 3 students took part this year and here is a photo of the aftermath:

As a class we watched the three teams attempt to free-throw as many crumpled up pieces of paper in to the bin as possible.

Based on the total number we then tried to come up with how many each subset/coalition of players would have gotten in. So for example, 2 out of 3 of the teams had one student crumple paper while the other 2 took shots. So whilst that individual did not get any in, they contributed an important part to the team effort.

Here are the characteristic functions that show what each team did:

Here is some Sage code that gives the Shapley value for each game (take a look at my class notes or at last year’s post to see how to calculate this):

Let us define the first game:

If you click Evaluate above you see that the Shapley value is given by:

(This one we calculated in class)

By changing the numbers above we get the following for the other two games.

  • Game 2:

  • Game 3:

This was a bit of fun and most importantly from a class point of view gave us some nice numbers to work from and calculate the Shapley value together.

If anyone would like to read about the Shapley value a bit more take a look at the Sage documentation which not only shows how to calculate it using Sage but also goes over some of the mathematics (including another formulation).
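
For reference, here is a small plain-Python sketch of one such formulation (averaging marginal contributions over all orderings of the players); the characteristic function below is made up, not one of the games from class:

from itertools import permutations

# Made-up characteristic function: keys are coalitions, values are "baskets".
v = {frozenset(): 0,
     frozenset(['A']): 0, frozenset(['B']): 2, frozenset(['C']): 1,
     frozenset(['A', 'B']): 4, frozenset(['A', 'C']): 3,
     frozenset(['B', 'C']): 5, frozenset(['A', 'B', 'C']): 8}
players = ['A', 'B', 'C']

shapley = dict((p, 0) for p in players)
orderings = list(permutations(players))
for ordering in orderings:
    coalition = frozenset()
    for p in ordering:
        # marginal contribution of p when joining the current coalition
        shapley[p] += v[coalition | frozenset([p])] - v[coalition]
        coalition = coalition | frozenset([p])
for p in players:
    shapley[p] /= float(len(orderings))
print(shapley)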

March 23, 2015 12:00 AM

March 20, 2015

Liang Ze

Character Theory Basics

This post illustrates some of SageMath’s character theory functionality, as well as some basic results about characters of finite groups.

Basic Definitions and Properties

Given a representation $(V,\rho)$ of a group $G$, its character is a map $ \chi: G \to \mathbb{C}$ that returns the trace of the matrices given by $\rho$:
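
\[ \chi(g) = \mathrm{tr}\left(\rho(g)\right), \qquad g \in G. \]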

A character $\chi$ is irreducible if the corresponding $(V,\rho)$ is irreducible.

Despite the simplicity of the definition, the (irreducible) characters of a group contain a surprising amount of information about the group. Some big theorems in group theory depend heavily on character theory.

Let’s calculate the character of the permutation representation of $D_4$. For each $g \in G$, we’ll display the pairs:

(The Sage cells in this post are linked, so things may not work if you don’t execute them in order.)

Many of the following properties of characters can be deduced from properties of the trace:

  1. The dimension of a character is the dimension of $V$ in $(V,\rho)$. Since $\rho(\text{Id})$ is always the identity matrix, the dimension of $\chi$ is $\chi(\text{Id})$.
  2. Because the trace is invariant under similarity transformations, $\chi(hgh^{-1}) = \chi(g)$ for all $g,h \in G$. So characters are constant on conjugacy classes, and are thus class functions.
  3. Let $\chi_V$ denote the character of $(V,\rho)$. Recalling the definitions of direct sums and tensor products, we see that
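
\[ \chi_{V \oplus W} = \chi_V + \chi_W, \qquad \chi_{V \otimes W} = \chi_V \cdot \chi_W. \]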

The Character Table

Let’s ignore the representation $\rho$ for now, and just look at the character $\chi$:

This is succinct, but we can make it even shorter. From point 2 above, $\chi$ is constant on conjugacy classes of $G$, so we don’t lose any information by just looking at the values of $\chi$ on each conjugacy class:

Even shorter, let’s just display the values of $\chi$:

This single row of numbers represents the character of one representation of $G$. If we knew all the irreducible representations of $G$ and their corresponding characters, we could form a table with one row for each character. This is called the character table of $G$.

Remember how we had to define our representations by hand, one by one? We don’t have to do that for characters, because SageMath has the character tables of small groups built-in:
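
For example:

sage: G = DihedralGroup(4)
sage: G.character_table()   # a matrix with one row per irreducible character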

This just goes to show how important the character of a group is. We can also access individual characters as functions. Let’s say we want the last one:
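A sketch of how that might look (the ordering of the characters is whatever Sage/GAP uses, so "last" is relative to that):

chi = G.irreducible_characters()[-1]      # the last row of the table
[chi(g) for g in G.conjugacy_classes_representatives()]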

Notice that the character we were playing with, $[4,2,0,0,0]$, is not in the table. This is because its representation $\rho$ is not irreducible. At the end of the post on decomposing representations, we saw that $\rho$ splits into two $1$-dimensional irreducible representations and one $2$-dimensional one. It’s not hard to see that the character of $\rho$ is the sum of rows 1,4 and 5 in our character table:

Just as we could decompose every representation of $G$ into a sum of irreducible representations, we can express any character as a sum of irreducible characters.

The next post discusses how to do this easily, by making use of the Schur orthogonality relations. These are really cool relations among the rows and columns of the character table. Apart from decomposing representations into irreducibles, we’ll also be able to prove that the character table is always square!

March 20, 2015 12:00 AM

March 19, 2015

Vince Knight

Playing stochastic games in class

The final blog post I am late in writing is about the Stochastic game we played in class last week. The particular type of game I am referring to is also called a Markov game, where players play a series of Normal Form games, with the next game being picked from a random distribution the nature of which depends on the strategy profiles. In other words the choice of the players not only impacts the utility gained by the players but also the probability of what the next game will be… I blogged about this last year so feel free to read about some of the details there.

The main idea is that one stage game corresponds to this normal form game (a prisoner’s dilemma):

and at the other state we play:

The probability distributions, of the form \((x,1-x)\) where \(x\) is the probability with which we play the first game again, are given by:

and the probability distribution for the second game was \((0,1)\). In essence the second game was an absorption game and so players would try and avoid it.

To deal with the potential for the game to last forever we played this with a discounting factor \(\delta=1/2\). Whilst that discounting factor is interpreted as such in theory, for the purposes of playing the game in class we used it as the probability with which the game continues.

You can see a photo of this all represented on the board:

We played this as a team game and you can see the results here:

As opposed to last year, no actual duel lasted more than one round: I had coded a dice roll to sample at each step. The first random roll was to see if the game continued, based on the \(\delta\) factor (this in effect ‘deals with infinity’). The second random sample was to find out which game we played next, and if we ever went to the absorption game things finished there.
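A small Python sketch of those two random samples (the transition probability x below is a placeholder, not the value used in class):

import random

delta = 0.5     # probability that play continues at all
x = 0.75        # placeholder: probability of staying in the first game

if random.random() > delta:
    print('Game Over')                 # the discounting factor ends play
elif random.random() < x:
    print('Play the first game again')
else:
    print('Move to the absorption game')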

The winner was team B who in fact defected after the initial cooperation in the first game (perhaps that was enough to convince other teams they would be cooperative).

After playing this, we calculated (using some basic algebra, examining each potential pure equilibrium) the Nash equilibria for this game and found that there were two pure equilibria: both players cooperating and both players defecting.

March 19, 2015 12:00 AM

March 17, 2015

Vince Knight

Incomplete information games in class

Last week my class and I looked at the basics of games with incomplete information. The main idea is that players do not necessarily know where they are in an extensive form game. We repeated a game I played last year that you can read about here.

Here is a picture of the game we played (for details take a look at the post from last year):

We played a round robin where everyone played against everyone else and you can see the results in these two notebook pages that Jason kept track of:

We see that the winner was Reg, who, on both occasions of being the second player, went with the coin.

To find the Nash equilibria for this game we can translate it into a normal form game by doing the following two things:

  1. Identify the strategy sets for the players
  2. Average the outcomes over the chance probabilities

This gives the following strategies:

and

The strategies for the second player correspond to a 2-vector indexed by the information sets of the second player. In other words the first letter says what to do if the coin comes up as heads and the second letter says what to do if the coin comes up as tails:

  1. \(HH\): No matter what: play heads.
  2. \(HT\): If the coin comes up as heads: play heads. If the coin comes up as tails: play tails.
  3. \(TH\): If the coin comes up as heads: play tails. If the coin comes up as tails: play heads.
  4. \(TT\): No matter what: play tails.

Once we have done that and using the above ordering we can obtain the normal form game representation:

In class we obtained the Nash equilibria for this game by realising that the third column strategy (\(TH\): always disagree with the coin) was dominated and then carrying out some indifference analysis.

Here let us just throw it at Sage (here is a video showing and explaining some of the code):
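The embedded cell is not reproduced here; a sketch of the kind of computation involved, using Sage's NormalFormGame with placeholder payoff matrices (the real 2x4 bi-matrix comes from averaging the extensive form game above), is:

# Placeholder payoffs: rows are player 1's H/T, columns are HH, HT, TH, TT.
A = matrix([[ 1, 1, -1, -1],
            [-1, 1, -1,  1]])
B = -A                        # zero-sum placeholder
g = NormalFormGame([A, B])
g.obtain_nash()

With the real matrices in place of A and B, the output should match the equilibria described below.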

The equilibria returned confirm what we did in class: the first player can more or less randomly (with bounds on the distribution) pick heads or tails but the second player should always agree with the coin.

March 17, 2015 12:00 AM

Discussing the game theory of walking/driving in class

Last week, in game theory class we looked at pairwise contest games. To introduce this we began by looking at the particular game that one could use to model the situation of two individuals walking or driving towards each other:

The above models people walking/driving towards each other and choosing a side of the road. If they choose the same side they will not walk/drive into each other.

I got a couple of volunteers to simulate this and ‘walk’ towards each other having picked a side. We very quickly arrived at one of the stage Nash equilibria: both players choosing left or both choosing right.

I wrote a blog post about this a while ago when the BBC wrote an article about social convention. You can read that here.

We went on to compute the evolutionary stability of 3 potential stable equilibria:

  1. Everyone driving on the left;
  2. Everyone driving on the right;
  3. Everyone randomly picking a side each time.

Note that the above corresponds to the three Nash equilibria of the game itself. You can see this using some Sage code immediately (here is a video I just put together showing how one can use Sage to obtain Nash equilibria) - just click on ‘Evaluate’:
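A sketch of that computation (the payoff matrix below is the standard two-strategy coordination game used to model this situation, which may differ in scale from the one used in class):

A = matrix([[1, 0],
            [0, 1]])          # strategy 1: left, strategy 2: right
g = NormalFormGame([A, A.transpose()])
g.obtain_nash()               # two pure equilibria and one mixed (1/2, 1/2)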

We did these calculations in two ways:

  1. From first principles using the definitions of evolutionary stability (this took a while).
  2. Using a clever theoretic result that in effect does the analysis for us once and for all.

Both gave us the same result: driving on a given side of the road is evolutionarily stable whereas everyone randomly picking a side is not (a nudge in any given direction would ensure people picked a side).

This kind of corresponds to the two (poorly drawn) pictures below:

To further demonstrate the instability of the ‘choose a random side’ situation here is a plot of the actual evolutionary process (here is a video that shows what is happening):

We see that if we start by walking randomly, the tiniest of mutations sends everyone to picking a side.

March 17, 2015 12:00 AM

March 12, 2015

Liang Ze

Animated GIFs

I really should be posting about character theory, but I got distracted making some aesthetic changes to this blog (new icon and favicon!) and creating animations like this:

harmonograph

I’m not putting this in a SageCell because this could take quite a while, especially if you increase the number of frames (by changing the parameters in srange), but feel free to try it out on your own copy of Sage. It saves an animated GIF that loops forever (iterations = 0) at the location specified by savefile.
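For the curious, here is a minimal sketch of the sort of code involved (a damped Lissajous-style curve rather than the exact harmonograph above; the file name and parameter ranges are placeholders):

t = var('t')
# one frame per phase shift p; a smaller srange step means more frames
frames = [parametric_plot((sin(3*t + p)*exp(-t/20), sin(4*t)*exp(-t/20)),
                          (t, 0, 30), plot_points=400, axes=False)
          for p in srange(0, 2*pi, 0.5)]
a = animate(frames)
a.gif(savefile='harmonograph.gif', iterations=0)   # iterations=0 loops forever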

For more information, check out the Sage reference for animated plots.

March 12, 2015 12:00 AM

March 08, 2015

Vince Knight

Playing an infinitely repeated game in class

Following the iterated Prisoner’s dilemma tournament my class and I played last week, we went on to play a version of the game where we repeated things infinitely many times. This post will briefly describe what we got up to.

As you can read in the post about this activity from last year, the way we play for an infinite amount of time (that would take a while) is to apply a discounting factor \(\delta\) to the payoffs and to interpret this factor as the probability with which the game continues.

Before I go any further (and put up pictures with the team names) I need to explain something (for the readers who are not my students).

For every class I teach I insist on spending a fair while going over a mid-module feedback form that is used at Cardiff University (asking students to detail 3 things they like and don’t like about the class). One student wrote (what is probably my favourite piece of feedback ever):

“Vince is a dick… but in a good way.”

Anyway, I mentioned that to the class during my feedback-feedback session and that explains one of the team names (which I found pretty amusing):

  • Orange
  • Where’s the gun
  • We don’t know
  • Vince is a good dick

Once we had the team names set up (and I stopped trying to stop laughing) I wrote some quick Python code that we could run after each iteration:

import random

# the probability with which the game carries on to another round
continue_prob = .5

if random.random() < continue_prob:
    print('Game continues')
else:
    print('Game Over')

We started off by playing with \(\delta=0.5\) and here are the results:

You can see the various duels here:

As you can see, very little cooperation happened this way and in fact, because everyone could see what everyone else was doing, Orange took advantage of the last round to create a coalition and win. We also see one particular duel that cost two teams dearly (because the ‘dice rolls’ did not really help).

After this I suggested to the class that we play again but that no one got to see what was happening to the other teams (this was actually suggested to me by students the year before). We went ahead with this and used \(\delta=0.25\): so the game had less chance of carrying on.

You can see the result and duels here (this had to be squeezed on to a board that could be hidden):

Based on the theory we would expect more cooperation to be likely but as you can see this did not happen.

The tie at the end was settled with a game of Rock Paper Scissors Lizard Spock which actually led to a rematch of the Rock Paper Scissors Lizard Spock tournament we played earlier. Except this time Laura lost her crown :)

March 08, 2015 12:00 AM

February 26, 2015

Sébastien Labbé

Arnoux-Rauzy-Poincaré sequences

In a recent article with Valérie Berthé [BL15], we provided a multidimensional continued fraction algorithm called Arnoux-Rauzy-Poincaré (ARP) to construct, given any vector \(v\in\mathbb{R}_+^3\), an infinite word \(w\in\{1,2,3\}^\mathbb{N}\) over a three-letter alphabet such that the frequencies of letters in \(w\) exist and are equal to \(v\), and such that the number of factors (i.e. finite blocks of consecutive letters) of length \(n\) appearing in \(w\) is linear and less than \(\frac{5}{2}n+1\). We also conjecture that for almost all \(v\) the constructed word describes a discrete path in the positive octant staying at a bounded distance from the Euclidean line of direction \(v\).

In Sage, you can construct this word using the next version of my package slabbe-0.2 (not released yet, email me to press me to finish it). The one with frequencies of letters proportional to \((1, e, \pi)\) is:

sage: from slabbe.mcf import algo
sage: D = algo.arp.substitutions()
sage: it = algo.arp.coding_iterator((1,e,pi))
sage: w = words.s_adic(it, repeat(1), D)
word: 1232323123233231232332312323123232312323...

The factor complexity is close to \(2n+1\) and the balance is often less than or equal to three:

sage: w[:10000].number_of_factors(100)
202
sage: w[:100000].number_of_factors(1000)
2002
sage: w[:1000].balance()
3
sage: w[:2000].balance()
3

Note that staying at a bounded distance from the Euclidean line almost surely was proven in [DHS2013] for the Brun algorithm, another MCF algorithm.

Other approaches: Standard model and billiard sequences

Other approaches have been proposed to construct such discrete lines.

One of them is the standard model of Eric Andres [A03]. It is also equivalent to billiard sequences in the cube. It is well known that the factor complexity of billiard sequences is quadratic \(p(n)=n^2+n+1\) [AMST94]. Experimentally, we can verify this. We first create a billiard word of some given direction:

sage: from slabbe import BilliardCube
sage: v = vector(RR, (1, e, pi))
sage: b = BilliardCube(v)
sage: b
Cubic billiard of direction (1.00000000000000, 2.71828182845905, 3.14159265358979)
sage: w = b.to_word()
sage: w
word: 3231232323123233213232321323231233232132...

We create some prefixes of \(w\) that we represent internally as char*. The creation is slow because the implementation of billiard words in my optional package is in Python and is not that efficient:

sage: p3 = Word(w[:10^3], alphabet=[1,2,3], datatype='char')
sage: p4 = Word(w[:10^4], alphabet=[1,2,3], datatype='char') # takes 3s
sage: p5 = Word(w[:10^5], alphabet=[1,2,3], datatype='char') # takes 32s
sage: p6 = Word(w[:10^6], alphabet=[1,2,3], datatype='char') # takes 5min 20s

We see below that exactly \(n^2+n+1\) factors of length \(n<20\) appear in the prefix of length 1000000 of \(w\):

sage: A = ['n'] + range(30)
sage: c3 = ['p_(w[:10^3])(n)'] + map(p3.number_of_factors, range(30))
sage: c4 = ['p_(w[:10^4])(n)'] + map(p4.number_of_factors, range(30))
sage: c5 = ['p_(w[:10^5])(n)'] + map(p5.number_of_factors, range(30)) # takes 4s
sage: c6 = ['p_(w[:10^6])(n)'] + map(p6.number_of_factors, range(30)) # takes 49s
sage: ref = ['n^2+n+1'] + [n^2+n+1 for n in range(30)]
sage: T = table(columns=[A,c3,c4,c5,c6,ref])
sage: T
  n    p_(w[:10^3])(n)   p_(w[:10^4])(n)   p_(w[:10^5])(n)   p_(w[:10^6])(n)   n^2+n+1
+----+-----------------+-----------------+-----------------+-----------------+---------+
  0    1                 1                 1                 1                 1
  1    3                 3                 3                 3                 3
  2    7                 7                 7                 7                 7
  3    13                13                13                13                13
  4    21                21                21                21                21
  5    31                31                31                31                31
  6    43                43                43                43                43
  7    52                55                56                57                57
  8    63                69                71                73                73
  9    74                85                88                91                91
  10   87                103               107               111               111
  11   100               123               128               133               133
  12   115               145               151               157               157
  13   130               169               176               183               183
  14   144               195               203               211               211
  15   160               223               232               241               241
  16   176               253               263               273               273
  17   192               285               296               307               307
  18   208               319               331               343               343
  19   224               355               368               381               381
  20   239               392               407               421               421
  21   254               430               448               463               463
  22   268               470               491               507               507
  23   282               510               536               553               553
  24   296               552               583               601               601
  25   310               596               632               651               651
  26   324               642               683               703               703
  27   335               687               734               757               757
  28   345               734               787               813               813
  29   355               783               842               871               871

Billiard sequences generate paths that are at a bounded distance from a Euclidean line. This is equivalent to saying that the balance is finite. The balance is defined as the supremum of the difference between the number of occurrences of a letter in two factors of the same length. For billiard sequences, the balance is 2:

sage: p3.balance()
2
sage: p4.balance() # takes 2min 37s
2

Other approaches: Melançon and Reutenauer

Melançon and Reutenauer [MR13] also suggested a method that generalizes Christoffel words in higher dimension. The construction is based on the application of two substitutions generalizing the construction of Sturmian sequences. Below we compute the factor complexity and the balance of some of their words over a three-letter alphabet.

On a three-letter alphabet, the two morphisms are:

sage: L = WordMorphism('1->1,2->13,3->2')
sage: R = WordMorphism('1->13,2->2,3->3')
sage: L
WordMorphism: 1->1, 2->13, 3->2
sage: R
WordMorphism: 1->13, 2->2, 3->3

Example 1: periodic case \(LRLRLRLRLR\dots\). In this example, the factor complexity seems to be around \(p(n)=2.76n\) and the balance is at least 28:

sage: from itertools import repeat, cycle
sage: W = words.s_adic(cycle((L,R)),repeat('1'))
sage: W
word: 1213122121313121312212212131221213131213...
sage: map(W[:10000].number_of_factors, [10,20,40,80])
[27, 54, 110, 221]
sage: [27/10., 54/20., 110/40., 221/80.]
[2.70000000000000, 2.70000000000000, 2.75000000000000, 2.76250000000000]
sage: W[:1000].balance()  # takes 1.6s
21
sage: W[:2000].balance()  # takes 6.4s
28

Example 2: \(RLR^2LR^4LR^8LR^{16}LR^{32}LR^{64}LR^{128}\dots\) taken from the conclusion of their article. In this example, the factor complexity seems to be \(p(n)=3n\) and balance at least as high (=bad) as \(122\):

sage: W = words.s_adic([R,L,R,R,L,R,R,R,R,L]+[R]*8+[L]+[R]*16+[L]+[R]*32+[L]+[R]*64+[L]+[R]*128,'1')
sage: W.length()
330312
sage: map(W.number_of_factors, [10, 20, 100, 200, 300, 1000])
[29, 57, 295, 595, 895, 2981]
sage: [29/10., 57/20., 295/100., 595/200., 895/300., 2981/1000.]
[2.90000000000000,
 2.85000000000000,
 2.95000000000000,
 2.97500000000000,
 2.98333333333333,
 2.98100000000000]
sage: W[:1000].balance()  # takes 1.6s
122
sage: W[:2000].balance()  # takes 6s
122

Example 3: some random ones. The complexity \(p(n)/n\) oscillates between 2 and 3 for factors of length \(n=1000\) in prefixes of length 100000:

sage: for _ in range(10):
....:     W = words.s_adic([choice((L,R)) for _ in range(50)],'1')
....:     print W[:100000].number_of_factors(1000)/1000.
2.02700000000000
2.23600000000000
2.74000000000000
2.21500000000000
2.78700000000000
2.52700000000000
2.85700000000000
2.33300000000000
2.65500000000000
2.51800000000000

For ten randomly generated words, the balance goes from 6 to 27 which is much more than what is obtained for billiard words or by our approach:

sage: for _ in range(10):
....:     W = words.s_adic([choice((L,R)) for _ in range(50)],'1')
....:     print W[:1000].balance(), W[:2000].balance()
12 15
8 24
14 14
5 11
17 17
14 14
6 6
19 27
9 16
12 12

References

[BL15]V. Berthé, S. Labbé, Factor Complexity of S-adic words generated by the Arnoux-Rauzy-Poincaré Algorithm, Advances in Applied Mathematics 63 (2015) 90-130. http://dx.doi.org/10.1016/j.aam.2014.11.001
[DHS2013]Delecroix, Vincent, Tomás Hejda, and Wolfgang Steiner. “Balancedness of Arnoux-Rauzy and Brun Words.” In Combinatorics on Words, 119–31. Springer, 2013. http://link.springer.com/chapter/10.1007/978-3-642-40579-2_14.
[A03]E. Andres, Discrete linear objects in dimension n: the standard model, Graphical Models 65 (2003) 92-111.
[AMST94]P. Arnoux, C. Mauduit, I. Shiokawa, J. I. Tamura, Complexity of sequences defined by billiards in the cube, Bull. Soc. Math. France 122 (1994) 1-12.
[MR13]G. Melançon, C. Reutenauer, On a class of Lyndon words extending Christoffel words and related to a multidimensional continued fraction algorithm. J. Integer Seq. 16, No. 9, Article 13.9.7, 30 p., electronic only (2013). https://cs.uwaterloo.ca/journals/JIS/VOL16/Reutenauer/reut3.html

by Sébastien Labbé at February 26, 2015 04:22 PM

Vince Knight

This class teaches me to not trust my classmates: An iterated prisoner’s dilemma in class

On Monday, in class we played an iterated prisoner’s dilemma tournament. I have done this many times (both in outreach events with Paul Harper and in this class). This is always a lot of fun but none more so than last year when Paul’s son Thomas joined us. You can read about that one here.

The format of the game is as close to that of Axelrod’s original tournament as I think it can be. I split the class into 4 teams and we created a round robin where each team plays every other team over 8 consecutive rounds of the prisoner’s dilemma:

The utilities represent ‘years in prison’ and over the 3 matches that each team will play (against every other team) the goal is to reduce the total amount of time spent in prison.

Here are some photos from the game:

Here are the scores:

We see that ‘We will take the gun’ acquired the least total score and so they won the collection of cookies etc…

(The names followed a promise from me to let the team with the coolest name have a nerf gun… Can’t say this had the wanted effect…)

At one point during the tournament, one team actually almost declared a strategy which was cool:

We will cooperate until you defect at which point we will reevaluate

This was pretty cool as I hadn’t discussed at all what a strategy means in a repeated game (i.e. I had not discussed the fact that a strategy in a repeated game takes account of both play histories).

Here are all the actual duels:

You’ll also notice at the end that a coalition formed and one team agreed to defect so that they could share the prize. This happens about 50% of the time when we play this game but I never cease to be amused by it. Hopefully everyone found this fun and perhaps some even already agree with a bit of feedback I received on this course last year:

‘This class teaches me to not trust my classmates’

One of the other really cool things that happened after this class was H asking for a hand to submit a strategy to my Axelrod repository. She built a strategy called ‘Once Bitten’ that performs pretty well! You can see it here (click on ‘Blame’ and you can see the code that she wrote).

(Big thanks to Jason for keeping track of the scores and to Geraint for helping and grabbing some nice pictures)

February 26, 2015 12:00 AM

February 15, 2015

Liang Ze

The Group Ring and the Regular Representation

In the previous post, we saw how to decompose a given group representation into irreducibles. But we still don’t know much about the irreducible representations of a (finite) group. What do they look like? How many are there? Infinitely many?

In this post, we’ll construct the group ring of a group. Treating this as a vector space, we get the regular representation, which turns out to contain all the irreducible representations of $G$!

The group ring $FG$

Given a (finite) group $G$ and a field $F$, we can treat each element of $G$ as a basis element of a vector space over $F$. The resulting vector space generated by $g \in G$ is

$$FG = \left\{ \sum_{g \in G} a_g \, g : a_g \in F \right\}.$$

Let’s do this in Sage with the group $G = D_4$ and the field $F = \mathbb{Q}$:
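Since the linked cells are not reproduced here, a minimal sketch of one way to do this (using Sage's built-in GroupAlgebra, which may differ from the post's own construction) is:

G = DihedralGroup(4)
FG = GroupAlgebra(G, QQ)           # the group ring QQ[D_4]
# a sample element: the formal sum of all group elements
v = sum(FG(g) for g in G)
v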

(The Sage cells in this post are linked, so things may not work if you don’t execute them in order.)

We can view $v \in FG$ as a vector in $F^n$, where $n$ is the size of $G$:

Here, we’re treating each $g \in G$ as a basis element of $FG$.

Vectors in $FG$ are added component-wise:

Multiplication as a linear transformation

In fact $FG$ is also a ring (called the group ring), because we can multiply vectors using the multiplication rule of the group $G$:

That wasn’t very illuminating. However, treating multiplication by $v \in FG$ as a function

$$T_v : FG \to FG, \qquad T_v(w) = v \cdot w,$$

one can check that each $T_v$ is a linear transformation! We can thus represent $T_v$ as a matrix whose columns are $T_v(g), g \in G$:
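A sketch of how such a matrix could be computed (continuing the GroupAlgebra sketch above; coefficient() extracts the coordinate of a group-ring element on a basis element):

basis = list(G)

def T_matrix(v):
    # columns are the coordinate vectors of v*g, for g running over the basis
    cols = [[(v * FG(g)).coefficient(h) for h in basis] for g in basis]
    return matrix(QQ, cols).transpose()

T_matrix(v)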

The regular representation

We’re especially interested in $T_g, g \in G$. These are invertible, with inverse $T_{g^{-1}}$, and their matrices are all permutation matrices, because multiplying by $g \in G$ simply permutes elements of $G$:

Define a function $\rho_{FG}$ which assigns to each $g\in G$ the corresponding $T_g$:
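Continuing the same sketch (so again not the post's original cell):

def rho_FG(g):
    # left multiplication by g, as a matrix; these are permutation matrices
    return T_matrix(FG(g))

rho_FG(G.an_element())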

Then $(FG,\rho_{FG})$ is the regular representation of $G$ over $F$.

The regular representation of any non-trivial group is not irreducible. In fact, it is a direct sum of all the irreducible representations of $G$! What’s more, if $(V,\rho)$ is an irreducible representation of $G$ and $\dim V = k$, then $V$ occurs $k$ times in the direct-sum decomposition of $FG$!

Let’s apply the decomposition algorithm in the previous post to $(FG,\rho_{FG})$ (this might take a while to run):

So the regular representation of $D_4$ decomposes into four (distinct) $1$-dim representations and two (isomorphic) $2$-dim ones.

Building character

We’ve spent a lot of time working directly with representations of a group. While more concrete, the actual matrix representations themselves tend to be a little clumsy, especially when the groups in question get large.

In the next few posts, I’ll switch gears to character theory, which is a simpler but more powerful way of working with group representations.

February 15, 2015 12:00 AM

February 02, 2015

Liang Ze

Decomposing Representations

In this post, we’ll implement an algorithm for decomposing representations that Dixon published in 1970.

As a motivating example, I’ll use the permutation matrix representation of $D_4$ that we saw in an earlier post. To make the code more generally applicable, let’s call the group $G$ and the representation $\rho$:

(The Sage cells in this post are linked, so things may not work if you don’t execute them in order.)

We’ll see that this is decomposable, and find out what its irreducible components are.

Unitary representations

A short remark before we begin: The algorithm assumes that $\rho$ is a unitary representation

i.e. for all $g \in G$,

$$\rho(g^{-1}) = \rho(g)^*,$$

where $A^*$ is the conjugate transpose of a matrix $A$. For $G$ a finite group, all representations can be made unitary under an appropriate change of basis, so we need not be too concerned about this. In any case, permutation representations are always unitary, so we can proceed with our example.

Finding non-scalar, commuting matrices

At the end of the previous post we saw that in order to decompose a representation $(V,\rho)$, it is enough to find a non-scalar matrix $T$ that commutes with $\rho(g)$ for every $g \in G$. This first step finds a Hermitian non-scalar $H$ that commutes with $\rho(G)$ (if there is one to be found).

Let $E_{rs}$ denote the $n \times n$ matrix with a $1$ in the $(r,s)$th entry and zeros everywhere else. Here $n$ is the dimension of $V$ in the representation $(V,\rho)$. Define

$$H_{rs} = \begin{cases} E_{rr} & \text{if } r = s,\\ E_{rs} + E_{sr} & \text{if } r < s,\\ i\,(E_{rs} - E_{sr}) & \text{if } r > s, \end{cases}$$

then the set of matrices $H_{rs}$ forms a Hermitian basis for the $n \times n$ matrices over $\mathbb{C}$.

Now for each $r,s$, compute the sum

$$H = \frac{1}{|G|} \sum_{g \in G} \rho(g)^* \, H_{rs} \, \rho(g).$$

Observe that $H$ has the following properties:

  • it is Hermitian
  • it commutes with $\rho(g)$ for all $g \in G$

If $\rho$ is irreducible, then $H$ is a scalar matrix for all $r,s$. Otherwise, it turns out that there will be some $r,s$ such that $H$ is non-scalar (this is due to the fact that the $H_{rs}$ matrices form a basis of the $n \times n$ matrices).

Let’s test this algorithm on our permutation representation of $D_4$:
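The original cell is not shown here; the following sketch (working numerically over CDF, with a small tolerance for the scalar test, and using the averaged sum defined above) shows one way such a test could go:

def find_nonscalar_H(G, rho, n):
    # Search the Hermitian basis H_rs for one whose group average is
    # non-scalar; the average automatically commutes with every rho(g).
    for r in range(n):
        for s in range(n):
            E = matrix(CDF, n, n)
            E[r, s] = 1
            if r == s:
                Hrs = E
            elif r < s:
                Hrs = E + E.transpose()
            else:
                Hrs = CDF(I) * (E - E.transpose())
            H = sum(rho(g).conjugate_transpose() * Hrs * rho(g)
                    for g in G) / G.order()
            if (H - H[0, 0] * identity_matrix(CDF, n)).norm() > 1e-8:
                return H
    return None                          # rho is irreducible

G = DihedralGroup(4)
rho = lambda g: g.matrix().change_ring(CDF)
find_nonscalar_H(G, rho, G.degree())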

We get a non-scalar $H$! So the permutation representation of $D_4$ is reducible!

Using $H$ to decompose $\rho$

Our next step is to use the eigenspaces of $H$ to decompose $\rho$. At the end of the previous post, we saw that $\rho(g)$ preserves the eigenspaces of $H$, so we need only find the eigenspaces of $H$ to decompose $\rho$.

Since $H$ is Hermitian, it is diagonalizable, so its eigenvectors form a basis of $V$. We can find this basis by computing the Jordan decomposition of $H$:

Finally, we observe that $P^{-1} \rho(g) P$ has the same block-diagonal form for each $g \in G$:

We have thus decomposed $\rho$ into two 1-dimensional representations and one 2-dimensional one!

Decomposing into irreducibles

Finally, to get a decomposition into irreducibles, we can apply the algorithm recursively on each of the subrepresentations to see if they further decompose.

Here’s a stand-alone script that decomposes a representation into its irreducible components:

Getting all irreducible representations

Now we know how to test for irreducibility and decompose reducible representations. But we still don’t know how many irreducible representations a group has.

It turns out that finite groups have finitely many irreducible representations! In the next post, we’ll construct a representation for any finite group $G$ that contains all the irreducible representations of $G$.

February 02, 2015 12:00 AM

January 26, 2015

Liang Ze

Irreducible and Indecomposable Representations

Following up from the questions I asked at the end of the previous post, I’ll define (ir)reducible and (in)decomposable representations, and discuss how we might detect them. Unlike previous posts, this post will have just text, and no code. This discussion will form the basis of the algorithm in the next post.

Decomposability

In the previous post, I showed how to form the direct sum $(V_1 \oplus V_2,\rho)$ of two representations $(V_1,\rho_1)$ and $(V_2,\rho_2)$. The matrices given by $\rho$ looked like this:

$$\rho(g) = \begin{pmatrix} \rho_1(g) & 0 \\ 0 & \rho_2(g) \end{pmatrix}.$$

A representation $(V,\rho)$ is decomposable if there is a basis of $V$ such that each $\rho(g)$ takes this block diagonal form. If $(V,\rho)$ does not admit such a decomposition, it is indecomposable.

Equivalently, $(V,\rho)$ is decomposable if there is an invertible matrix $P$ such that for all $g\in G$,

$$P^{-1} \rho(g) P = \begin{pmatrix} \rho_1(g) & 0 \\ 0 & \rho_2(g) \end{pmatrix},$$

and indecomposable otherwise. Here, $P$ is a change of basis matrix and conjugating by $P$ changes from the standard basis to the basis given by the columns of $P$.

Reducibility

Notice that if $\rho(g)$ were block diagonal, then writing $v \in V$ as ${v_1 \choose v_2}$, where $v_1$ and $v_2$ are vectors whose dimensions agree with the blocks of $\rho(g)$, we see that

$$\rho(g) \, {v_1 \choose v_2} = {\rho_1(g) \, v_1 \choose \rho_2(g) \, v_2}.$$

Let $V_1$ be the subspace of $V$ corresponding to vectors of the form ${v_1 \choose 0}$, and $V_2$ be the subspace of vectors of the form ${0 \choose v_2}$. Then for all $g \in G, v \in V_i$,

$$\rho(g) v \in V_i.$$

Now suppose instead that for all $g \in G$, $\rho(g)$ has the block upper-triangular form

$$\rho(g) = \begin{pmatrix} \rho_1(g) & * \\ 0 & \rho_2(g) \end{pmatrix},$$

where $ * $ represents an arbitrary matrix (possibly different for each $g \in G$). If $ * $ is not the zero matrix for some $g$, then we will still have $\rho(g) v \in V_1 \,\, \forall v \in V_1$, but we no longer have $\rho(g) v \in V_2 \,\, \forall v \in V_2$. In this case, we say that $V_1$ is a subrepresentation of $V$ whereas $V_2$ is not.

Formally, if we have a subspace $W \subset V$ such that for all $g \in G, w \in W$,

$$\rho(g) w \in W,$$

then $W$ is a $G$-invariant subspace of $V$, and $(W,\rho)$ is a subrepresentation of $(V,\rho)$.

Any representation $(V,\rho)$ has at least two subrepresentations: $(0,\rho)$ and $(V,\rho)$. If there are no other subrepresentations, then $(V,\rho)$ is irreducible. Otherwise, it is reducible.

Equivalently, $(V,\rho)$ is reducible if there is an invertible matrix $P$ such that for all $g \in G$,

$$P^{-1} \rho(g) P = \begin{pmatrix} \rho_1(g) & * \\ 0 & \rho_2(g) \end{pmatrix},$$

and irreducible otherwise.

Maschke’s Theorem

Note that a decomposable representation is also reducible, but the converse is not generally true. (Equivalently: an irreducible representation is also indecomposable, but the converse is not generally true.) Maschke’s Theorem tells us that the converse is true over fields of characteristic zero! In other words:

Suppose $V$ is a vector space over a field of characteristic zero, say $\mathbb{C}$, and $(V,\rho)$ has a subrepresentation $(W_1,\rho)$. Then there is a $G$-invariant subspace $W_2$ (called the direct complement of $W_1$) such that $V = W_1 \oplus W_2$.

Since we will be working over $\mathbb{C}$, we can thus treat (in)decomposability as equivalent to (ir)reducibility. To understand representations of $G$, we need only understand its irreducible representations, because any other representation can be decomposed into a direct sum of irreducibles.

Schur’s Lemma

How may we detect (ir)reducible representations? We’ll make use of the following linear algebraic properties:

Given an eigenvalue $\lambda$ of a matrix $A \in \mathbb{C}^{n \times n}$, its $\lambda$-eigenspace is

$$E_\lambda = \{ v \in \mathbb{C}^n : Av = \lambda v \}.$$

Clearly, each eigenspace is an invariant subspace of $A$. If we have another matrix $B \in \mathbb{C}^{n \times n}$ such that $AB = BA$, then $B$ preserves the eigenspaces of $A$ as well. To see this, take $v \in E_\lambda$; then

$$A(Bv) = B(Av) = B(\lambda v) = \lambda (Bv),$$

so $E_\lambda$ is also an invariant subspace of $B$!

Now suppose we have a representation $(V,\rho)$ and a linear map $T:V \to V$ such that for all $g \in G, v \in V$,

$$\rho(g) \, T v = T \, \rho(g) v.$$

Treating $T$ as a matrix, this is equivalent to saying that $\rho(g)T = T\rho(g)$ for all $g \in G$. In that case, the eigenspaces of $T$ are $G$-invariant subspaces, and will yield decompositions of $(V,\rho)$ if they are not the whole of $V$. But if $E_\lambda = V$, then $Tv = \lambda v$ for all $v \in V$, so in fact $T = \lambda I$, where $I$ is the identity matrix. We have thus shown a variant of Schur’s lemma:

If $(V,\rho)$ is irreducible, and $\rho(g) T = T \rho(g)$ for all $g \in G$, then $T =\lambda I$ for some $\lambda$.

We already know that scalar matrices (i.e. matrices of the form $\lambda I$) commute with all matrices. If $(V,\rho)$ is irreducible, this result says that there are no other matrices that commute with all $\rho(g)$. The converse is also true:

If $(V,\rho)$ is reducible, then there is some $T \neq \lambda I$ such that $\rho(g) T = T\rho(g)$ for all $g \in G$.

I won’t prove this, but note that if $V$ has a decomposition $W_1 \oplus W_2$, then the projection onto either $W_i$ will have the desired properties. If we have such a $T$, then its eigenspaces will give a decomposition of $(V,\rho)$. This will be the subject of the next post.

January 26, 2015 12:00 AM

January 24, 2015

Liang Ze

Direct Sums and Tensor Products

In this short post, we will show two ways of combining existing representations to obtain new representations.

Recall

In the previous post, we saw two representations of $D_4$: the permutation representation, and the representation given in this Wikipedia example. Let’s first define these in Sage:

(The Sage cells in this post are linked, so things may not work if you don’t execute them in order.)

Direct Sums

If $(V_1,\rho_1), (V_2,\rho_2)$ are representations of $G$, the direct sum of these representations is $(V_1 \oplus V_2, \rho)$, where $\rho$ sends $g \in G$ to the block diagonal matrix

$$\rho(g) = \begin{pmatrix} \rho_1(g) & 0 \\ 0 & \rho_2(g) \end{pmatrix}.$$

Here $\rho_1(g), \rho_2(g)$ and the “zeros” are all matrices.

It’s best to illustrate with an example. We can define a function direct_sum in Sage that takes two representations and returns their direct sum.
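A sketch of such a function, assuming (as in these posts) that a representation is given as a function sending a group element to a matrix:

def direct_sum(rho1, rho2):
    # block diagonal matrix: rho1(g) top-left, rho2(g) bottom-right
    return lambda g: block_diagonal_matrix([rho1(g), rho2(g)])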

Tensor products

We can also form the tensor product $(V_1 \otimes V_2,\rho)$, where $\rho$ sends $g \in G$ to the Kronecker product of the matrices $\rho_1(g)$ and $\rho_2(g)$.

We define a function tensor_prod that takes two representations and returns their tensor product.
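Again as a sketch, with the same convention for representations:

def tensor_prod(rho1, rho2):
    # the Kronecker product of the two matrices
    return lambda g: rho1(g).tensor_product(rho2(g))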

Observe that

  • $\dim V_1 \oplus V_2 = \dim V_1 + \dim V_2$,
  • $\dim V_1 \otimes V_2 = \dim V_1 \times \dim V_2$,

which motivates the terms direct sum and tensor product.

We can keep taking direct sums and tensor products of existing representations to obtain new ones:

Decomposing representations

Now we know how to build new representations out of old ones. One might be interested in the inverse questions:

  1. Is a given representation a direct sum of smaller representations?
  2. Is a given representation a tensor product of smaller representations?

It turns out that Q1 is a much more interesting question to ask than Q2.

A (very poor) analogy of this situation is the problem of “building up” natural numbers. We have two ways of building up new integers from old: we can either add numbers, or multiply them. Given a number $n$, it’s easy (and not very interesting) to find smaller numbers that add up to $n$. However, finding numbers whose product is $n$ is much much harder (especially for large $n$) and much more rewarding. Prime numbers also play a special role in the latter case: every positive integer has a unique factorization into primes.

The analogy is a poor one (not least because the roles of “sum” and “product” are switched!). However, it motivates the question

  • What are the analogues of “primes” for representations?

We’ll try to answer this last question and Q1 in the next few posts, and see what it means for us when working with representations in Sage.

January 24, 2015 12:00 AM

January 20, 2015

Liang Ze

Representation Theory in Sage - Basics

This is the first of a series of posts about working with group representations in Sage.

Basic Definitions

Given a group $G$, a linear representation of $G$ is a group homomorphism $\rho: G \to \mathrm{GL}(V)$ such that

$$\rho(gh) = \rho(g)\rho(h) \quad \text{for all } g, h \in G.$$

For our purposes, we will assume that $G$ is a finite group and $V$ is an $n$-dimensional vector space over $\mathbb{C}$. Then $\mathrm{GL}(V)$ is isomorphic to the invertible $n \times n$ matrices over $\mathbb{C}$, which we will denote $\mathrm{GL}_n \mathbb{C}$.

So a representation is just a function that takes group elements and returns invertible matrices, in such a way that the above equation holds.

Various authors refer to the map $\rho$, the vector space $V$, or the tuple $(V,\rho)$ as a representation; this shouldn’t cause any confusion, as it’s usually clear from context whether we are referring to a map or a vector space. When I need to be extra precise, I’ll use $(V,\rho)$.

Some simple examples

Trivial representation

The simplest representation is just the trivial representation that sends every element of $G$ to the identity matrix (of some fixed dimension $n$). Let’s do this for the symmetric group $S_3$:
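A minimal sketch of such a cell (the dimension 2 is an arbitrary choice):

G = SymmetricGroup(3)
n = 2

def triv(g):
    # every group element is sent to the same identity matrix
    return identity_matrix(QQ, n)

triv(G.an_element())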

(The Sage cells in this post are linked, so things may not work if you don’t execute them in order.)

We can verify that this is indeed a group homomorphism (warning: There are 6 elements in $S_3$, which means we have to check $6^2 = 36$ pairs!):
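A sketch of that check, with G and triv as in the sketch above (here all() just summarizes instead of printing every pair):

all(triv(g * h) == triv(g) * triv(h) for g in G for h in G)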

Permutation representation

This isn’t very interesting. However, we also know that $S_3$ is the group of permutations of the 3-element set {$1,2,3$}. We can associate to each permutation a permutation matrix. Sage already has this implemented for us, via the method matrix() for a group element g:
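For instance (a sketch, with G the $S_3$ from above):

perm = lambda g: g.matrix()       # the permutation matrix of g
g = G.an_element()
g, perm(g)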

Qn: From the permutation matrix, can you tell which permutation $g$ corresponds to?

We can again verify that this is indeed a representation. Let’s not print out all the output; instead, we’ll only print something if it is not a representation. If nothing pops up, then we’re fine:
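A sketch of that check, assuming perm from the sketch above:

for g in G:
    for h in G:
        if perm(g * h) != perm(g) * perm(h):
            print('Not a representation:', g, h)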

Defining a representation from generators

We could define permutation representations so easily only because Sage has them built in. But what if we had some other representation that we’d like to work with in Sage? Take the dihedral group $D_4$. Wikipedia tells us that this group has a certain matrix representation. How can we recreate this in Sage?

We could hard-code the relevant matrices in our function definition. However, typing all these matrices can be time-consuming, especially if the group is large.

But remember that representations are group homomorphisms. If we’ve defined $\rho(g)$ and $\rho(h)$, then we can get $\rho(gh)$ simply by multiplying the matrices $\rho(g)$ and $\rho(h)$! If we have a set of generators of a group, then we only need to define $\rho$ on these generators. Let’s do that for the generators of $D_4$:

We see that $D_4$ has a generating set of 2 elements (note: the method gens() need not return a minimal generating set, but in this case, we do get a minimal generating set). Let’s call these $r$ and $s$. We know that elements of $D_4$ can be written $r^is^j$, where $i = 0,1,2,3$ and $j = 0,1$. We first run through all such pairs $(i,j)$ to create a dictionary that tells us which group elements are given by which $(i,j)$:

Now for $g = r^i s^j \in D_4$, we can define $\rho(g) = \rho(r)^i \rho(s)^j$ and we will get a representation of $D_4$. We need only choose the matrices we want for $\rho(r)$ and $\rho(s)$.

$r$ and $s$ correspond to $R_1$ and $S_0$, resp., in the Wikipedia example, so let’s use their matrix representations to generate our representation:
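A sketch of the whole construction (the two matrices below are a 90-degree rotation and a reflection, which is my reading of the Wikipedia example; treat them as placeholders):

G = DihedralGroup(4)
r, s = G.gens()                        # a generating set of 2 elements

# record which (i, j) gives which group element, for g = r^i * s^j
index = {r^i * s^j: (i, j) for i in range(4) for j in range(2)}

R1 = matrix([[0, -1], [1, 0]])         # rotation by 90 degrees
S0 = matrix([[1, 0], [0, -1]])         # a reflection

def rho(g):
    i, j = index[g]
    return R1^i * S0^j

rho(r * s)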

One can verify that this does indeed give the same matrices as the Wikipedia example, albeit in a different order.

We can do better!

All the representations we’ve defined so far aren’t very satisfying! For the last example, we required the special property that all elements in $D_4$ have the form $r^i s^j$. In general, it isn’t always easy to express a given group element in terms of the group’s generators (this is known as the word problem).

We’ve also been constructing representations in a rather ad-hoc manner. Is there a more general way to construct representations? And how many representations are there?

In the next post, I’ll run through two simple ways of combining existing representations to get new ones: the direct sum and the tensor product. I’ll also define irreducible representations, and state some results that will shed some light on the above questions.

January 20, 2015 12:00 AM

January 17, 2015

Liang Ze

Subgroup Explorer

Subgroup Explorer

I’ve written a subgroup lattice generator for all groups of size up to 32. It’s powered by Sage and GAP, and allows you to view the lattice of subgroups or subgroup conjugacy classes of a group from your browser.

Click Go! below to refresh the viewer, or if it doesn’t load.

Normal subgroups are colored green. Additionally, the center is blue while the commutator subgroup is pink.

Showing the full subgroup lattice can get messy for large groups. If the option Conjugacy classes of subgroups is selected, the viewer only shows the conjugacy classes of subgroups (i.e. all subgroups that are conjugate are combined into a single vertex).

The edge labels indicate how many subgroups of one conjugacy class a given representative subgroup of another conjugacy class contains, or how many subgroups of one conjugacy class a given representative subgroup of another conjugacy class is contained by. The labels are omitted if these numbers are 1. The edge colors indicate whether the subgroups in the “smaller” conjugacy class are normal subgroups of those in the “larger” conjugacy class.

In the image at the top of the post, the group C15 : C4 (the colon stands for semi-direct product and is usually written $\rtimes$) contains 5 subgroups isomorphic to C3 : C4, which in turn contains 3 subgroups isomorphic to C4 and 1 subgroup isomorphic to C6 (the 5 belongs to another edge). The edge colors indicate that C6 is a normal subgroup of C3 : C4 whereas C4 is not. For further information on group descriptors, click here.

And here’s the code for a version that you can run on SageMathCloud. It allows you to input much larger groups. This was used to produce the image at the top of the post. Don’t try running it here, however, since the SageCellServer doesn’t have the database_gap package installed.

Finally, while verifying the results of this program, I found an error in this book! The correction has been pencilled in. The original number printed was 1.

A5 Lattice

January 17, 2015 12:00 AM

December 27, 2014

Liang Ze

Lattice of Subgroups III - Coloring Edges

This post will cover the coloring of edges in the lattice of subgroups of a group.

Lattice of subgroups of $C3:C8$

Coloring edges is almost as simple as coloring vertices, so we’ll start with that.

Generating small groups

As we’ve done in previous posts, let’s start by choosing a group and generating its lattice of subgroups. This can be done by referring to this list of constructions for every group of order less than 32. These instructions allow us to construct every group on Wikipedia’s list of small groups!

For this post, we’ll use $G = C_3 \rtimes C_8$ (or $\mathbb{Z}_3 \rtimes \mathbb{Z}_8$). First, we’ll generate $G$ and display its poset of subgroups. For simplicity, we’ll label by cardinality, and we won’t color the vertices.

(The Sage cells in this post are linked, so things may not work if you don’t execute them in order.)

Coloring edges

In the previous post, we colored vertices according to whether the corresponding subgroup was normal (or abelian, or a Sylow subgroup, etc.) These are properties that depend only on each individual subgroup.

However, suppose we want to see the subnormal series of the group. A subnormal series is a series of subgroups where each subgroup is a normal subgroup of the next group in the series. Checking whether a particular series of subgroups is a subnormal series requires checking pairs of subgroups to see whether one is a normal subgroup of the other. This suggests that we color edges according to whether one of its endpoints is a normal subgroup of the other endpoint.

The edges of the Hasse diagram of a poset are the pairs $(h,k)$ where $h$ is covered by $k$ in the poset. This means that $h < k$, with nothing else in between. We thus obtain all the edges of a Hasse diagram by calling P.cover_relations() on the poset $P$.

To color edges of a graph, we create a dictionary edge_colors:
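A sketch of the whole pipeline (using $D_4$ as a stand-in group, since building $C_3 \rtimes C_8$ needs the GAP small groups database; it is assumed here that Poset.plot passes edge_colors through to the underlying graph plot):

G = DihedralGroup(4)                   # stand-in for C3 : C8
subgroups = G.subgroups()
P = Poset((subgroups, lambda h, k: h.is_subgroup(k)))

# color an edge green when the smaller subgroup is normal in the larger one
green, black = [], []
for h, k in P.cover_relations():
    (green if h.is_normal(k) else black).append((h, k))

P.plot(edge_colors={'green': green, 'black': black})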

Up next…

This is the last post describing relatively simple things one can do to visualize subgroup lattices (or more generally, posets) in Sage. In the next post, I’ll write code to label edges. Doing this requires extracting the Hasse diagram of a poset as a graph and modifying the edge labels. Also, subgroup lattices tend to get unwieldy for large groups, so there we’ll restrict our attention to conjugacy classes of subgroups, rather than all subgroups.

After that, I hope to write a bit about doing some simple representation theory things in Sage.

December 27, 2014 12:00 AM

December 25, 2014

Liang Ze

Holiday Harmonograph

(Guest post from the Annals of Harmonography) harmonograph

When it’s snowing outside (or maybe not),

And your feet are cold (or maybe hot),

When it’s dark as day (or bright as night),

And your heart is heavy (and head is light),

What should you do (what should you say)

To make it all right (to make it okay)?

.

Just pick up a pen (a pencil will do),

Set up a swing (or three, or two),

And while the world spins (or comes to a still),

In your own little room (or on top of a hill),

Let your pendulum sway (in its time, in its way),

And watch as the pen draws your worries away!

.

.

(Click inside the colored box to choose a color. Then click outside and watch it update.)

  • 7 celebrities and their harmonographs
  • What your harmonograph says about you
  • 10 tips for a happier harmonograph
  • Harmonograph secrets… revealed!

December 25, 2014 12:00 AM

November 14, 2014

William Stein

SageMathCloud Notifications are Now Better

I just made live a new notifications system for SageMathCloud, which I spent all week writing.

These notifications are what you see when you click the bell in the upper right. This new system replaces the one I made live two weeks ago. Whenever somebody actively *edits* (using the web interface) any file in any project you collaborate on, a notification will get created or updated. If a person *comments* on any file in any project you collaborate on (using the chat interface to the right), then not only does the notification get updated, there is also a little red counter on top of the bell and also in the title of the SageMathCloud tab. In particular, people will now be much more likely to see the chats you make on files.

NOTE: I have not yet enabled any sort of daily email notification summary, but that is planned. 

Some technical details: Why did this take all week? It's because the technology that makes it work behind the scenes is something that was fairly difficult for me to figure out how to implement. I implemented a way to create an object that can be used simultaneously by many clients and supports realtime synchronization... but is stored by the distributed Cassandra database instead of a file in a project. Any changes to that object get synchronized around very quickly. It's similar to how synchronized text editing (with several people at once) works, but I rethought differential synchronization carefully, and also figured out how to synchronize using an eventually consistent database. This will be useful for implementing a lot of other things in SageMathCloud that operate at a different level than "one single project". For example, I plan to add functions so you can access these same "synchronized databases" from Python processes -- then you'll be able to have sage worksheets (say) running on several different projects, but all saving their data to some common synchronized place (backed by the database). Another application will be a listing of the last 100 (say) files you've opened, with easy ways to store extra info about them. It will also be easy to make account and project settings more realtime, so when you change something, it automatically takes effect and is also synchronized across other browser tabs you may have open. If you're into modern Single Page App web development, this might remind you of Angular or React or Hoodie or Firebase -- what I did this week is probably kind of like some of the sync functionality of those frameworks, but I use Cassandra (instead of MongoDB, say) and differential synchronization.

I BSD-licensed the differential synchronization code that I wrote as part of the above.


by William Stein (noreply@blogger.com) at November 14, 2014 02:31 PM

October 17, 2014

William Stein

A Non-technical Overview of the SageMathCloud Project

What problems is the SageMathCloud project trying to solve? What pain points does it address? Who are the competitors and what is the state of the technology right now?


What problems you’re trying to solve and why are these a problem?

  • Computational Education: How can I teach a course that involves mathematical computation and programming?
  • Computational Research: How can I carry out collaborative computational research projects?
  • Cloud computing: How can I get easy user-friendly collaborative access to a remote Linux server?

What are the pain points of the status quo and who feels the pain?

  • Student/Teacher pain:
    • Getting students to install software needed for a course on their computers is a major pain; sometimes it is just impossible, due to no major math software (not even Sage) supporting all recent versions of Windows/Linux/OS X/iOS/Android.
    • Getting university computer labs to install the software you need for a course is frustrating and expensive (time and money).
    • Even if computer labs worked, they are often being used by another course, stuffy, and students can't possibly do all their homework there, so computation gets short shrift. Lab keyboards, hardware, etc., all hard to get used to. Crappy monitors.
    • Painful confusing problems copying files around between teachers and students.
    • Helping a student or collaborator with their specific problem is very hard without physical access to their computer.
  • Researcher pain:
    • Making backups every few minutes of the complete state of everything when doing research is often hard and distracting, but important for reproducibility.
    • Copying around documents, emailing or pushing/pulling them to revision control is frustrating and confusing.
    • Installing obscure software is frustrating and distracting from the research they really want to do.
  • Everybody:
    • It is frustrating not having LIVE working access to your files wherever you are. (Dropbox/Github doesn't solve this, since files are static.)
    • It is difficult to leave computations running remotely.

Why is your technology poised to succeed?

  • When it works, SageMathCloud solves every pain point listed above.
  • The timing is right, due to massive improvements in web browsers during the last 3 years.
  • I am on full sabbatical this year, so at least success isn't totally impossible due to not working on the project.
  • I have been solving the above problems in less scalable ways for myself, colleagues and students since the 1990s.
  • SageMathCloud has many users that provide valuable feedback.
  • We have already solved difficult problems since I started this project in Summer 2012 (and launched first version in April 2013).

Who are your competitors?

There are no competitors with a similar range of functionality. However, there are many webapps that have some overlap in capabilities:
  • Mathematical overlap: Online Mathematica: "Bring Mathematica to life in the cloud"
  • Python overlap: Wakari: "Web-based Python Data Analysis"; also PythonAnywhere
  • Latex overlap: ShareLaTeX, WriteLaTeX
  • Web-based IDE's/terminals: target writing webapps (not research or math education): c9.io, nitrous.io, codio.com, terminal.com
  • Homework: WebAssign and WebWork
Right now, SageMathCloud gives away for free far more than any other similar site, due to very substantial temporary financial support from Google, the NSF and others.

What’s the total addressable market?

Though our primary focus is the college mathematics classroom, there is a larger market:
  • Students: all undergrad/high school students in the world taking a course involving programming or mathematics
  • Teachers: all teachers of such courses
  • Researchers: anybody working in areas that involve programming or data analysis
Moreover, the web-based platform for computing that we're building lends itself to many other collaborative applications.

What stage is your technology at?

  • The site is up and running and has 28,413 monthly active users
  • There are still many bugs
  • I have a precise todo list that would take me at least 2 months fulltime to finish.

Is your solution technically feasible at this point?

  • Yes. It is only a matter of time until the software is very polished.
  • Moreover, we have compute resources to support significantly more users.
  • But without money (from paying customers or investment), if growth continues at the current rate then we will have to clamp down on free quotas for users.

What technical milestones remain?

  • Infrastructure for creating automatically-graded homework problems.
  • Fill in lots of details and polish.

Do you have external credibility with technical/business experts and customers?

  • Business experts: I don't even know any business experts.
  • Technical experts: I founded the Sage math software, which is 10 years old and relatively well known by mathematicians.
  • Customers: We have no customers; we haven't offered anything for sale.

Market research?

  • I know about math software and its users as a result of founding the Sage open source math software project, NSF-funded projects I've been involved in, etc.

Is the intellectual property around your technology protected?

  • The IP is software.
  • The website software is mostly new Javascript code we wrote that is copyright Univ. of Washington and mostly not open source; it depends on various open source libraries and components.
  • The Sage math software is entirely open source.

Who are the team members to move this technology forward?

I am the only person working on this project fulltime right now.
  • Everything: William Stein -- UW professor
  • Browser client code: Jon Lee, Andy Huchala, Nicholas Ruhland -- UW undergrads
  • Web design, analytics: Harald Schilly -- Austrian grad student
  • Hardware: Keith Clawson

Why are you the ideal team?

  • We are not the ideal team.
  • If I had money maybe I could build the ideal team, leveraging my experience and connections from the Sage project...

by William Stein (noreply@blogger.com) at October 17, 2014 12:04 PM

October 16, 2014

William Stein

Public Sharing in SageMathCloud, Finally

SageMathCloud (SMC) is a free (NSF, Google and UW supported) website that lets you collaboratively work with Sage worksheets, IPython notebooks, LaTeX documents and much, much more. All work is snapshotted every few minutes, and copied out to several data centers, so if something goes wrong with a project running on one machine (right before your lecture begins or homework assignment is due), it will pop up on another machine. We designed the backend architecture from the ground up to be very horizontally scalable and have no single points of failure.

This post is about an important new feature: You can now mark a folder or file so that all other users can view it, and very easily copy it to their own project.




This solves several problems:
  • Problem: You create a "template" project, e.g., with pre-installed software, worksheets, IPython notebooks, etc., and want other users to easily be able to clone it as a new project. Solution: Mark the home directory of the project public, and share the link widely.

  • Problem: You create a syllabus for a course, an assignment, a worksheet full of 3d images, etc., that you want to share with a group of students. Solution: Make the syllabus or worksheet public, and share the link with your students via an email and on the course website. (Note: You can also use a course document to share files with all students privately.) For example...


  • Problem: You run into a problem using SMC and want help. Solution: Make the worksheet or code that isn't working public, and post a link in a forum asking for help.
  • Problem: You write a blog post explaining how to solve a problem and write related code in an SMC worksheet, which you want your readers to see. Solution: Make that code public and post a link in your blog post.
Here's a screencast.

Each SMC project has its own local "project server", which takes some time to start up, and serves files, coordinates Sage, terminal, and IPython sessions, etc. Public sharing completely bypasses the project server -- it works fine even if the project server is not running, so it's always fast and there is no startup time. Moreover, public sharing reads the live files from your project, so you can update the files in a public shared directory, add new files, etc., and users will see these changes (when they refresh, since it's not automatic).
As an example, here is the cloud-examples github repo as a share. If you click on it (and have a SageMathCloud account), you'll see this:


What Next?

There is an enormous amount of natural additional functionality to build on top of public sharing.

For example, not all document types can be previewed in read-only mode right now; in particular, IPython notebooks, task lists, LaTeX documents, images, and PDF files must be copied from the public share to another project before people can view them. It is better to release a first usable version of public sharing before systematically going through and implementing the additional features needed to support all of the above. You can make complicated Sage worksheets with embedded images and 3d graphics, and those can be previewed before copying them to a project.
Right now, the only way to visit a public share is to paste the URL into a browser tab and load it. Soon the projects page will be re-organized so you can search for publicly shared paths, see all public shares that you have previously visited, who shared them, how many +1's they've received, comments, etc.

Also, I plan to eventually make it so public shares will be visible to people who have not logged in, and when viewing a publicly shared file or directory, there will be an option to start it running in a limited project, which will vanish from existence after a period of inactivity (say).

There are also dozens of details that are not yet implemented. For example, it would be nice to be able to directly download files (and directories!) to your computer from a public share. And it's also natural to share a folder or file with a specific list of people, rather than sharing it publicly. If somebody is viewing a public file and you change it, they should likely see the update automatically. Right now when viewing a share, you don't even know who shared it, and if you open a worksheet it can automatically execute Javascript, which is potentially unsafe.  Once public content is easily found, if somebody posts "evil" content publicly, there needs to be an easy way for users to report it.

Sharing will permeate everything

Sharing has been thought about a great deal during the last few years in the context of sites such as Github, Facebook, Google+ and Twitter. With SMC, we've developed a foundation for interactive collaborative computing in a browser, and will introduce sharing on top of that in a way that is motivated by your problems. For example, as with Github or Google+, when somebody makes a copy of your publicly shared folder, this copy should be listed (under "copies") and it could start out public by default. There is much to do.

One reason it took so long to release the first version of public sharing is that I kept imagining that sharing would happen at the level of complete projects, just like sharing in Github. However, when thinking through your problems, it makes way more sense in SMC to share individual directories and files. Technically, sharing at this level works well for read-only access, not for read-write access, since projects are mapped to Linux accounts. Another reason I have been very hesitant to support sharing is that I've had enormous trouble over the years with spammers posting content that gets me in trouble (with my University -- it is illegal for UW to host advertisements). However, by not letting search engines index content, the motivation for spammers to post nasty content is greatly reduced.

Imagine publicly sharing recipes for automatically gradable homework problems, which use the full power of everything installed in SMC, get forked, improved, used, etc.

by William Stein (noreply@blogger.com) at October 16, 2014 01:29 PM

October 01, 2014

William Stein

SageMathCloud Course Management

by William Stein (noreply@blogger.com) at October 01, 2014 12:05 PM

September 27, 2014

Sébastien Labbé

Abelian complexity of the Oldenburger sequence

The Oldenburger infinite sequence [O39] \[ K = 1221121221221121122121121221121121221221\ldots \] also known under the name of Kolakoski, is equal to its exponent trajectory. The exponent trajectory \(\Delta\) can be obtained by counting the lengths of blocks of consecutive and equal letters: \[ K = 1^12^21^22^11^12^21^12^21^22^11^22^21^12^11^22^11^12^21^22^11^22^11^12^21^12^21^22^11^12^21^12^11^22^11^22^21^12^21^2\ldots \] The sequence of exponents above gives the exponent trajectory of the Oldenburger sequence: \[ \Delta = 12211212212211211221211212\ldots \] which is equal to the original sequence \(K\). You can define this sequence in Sage:

sage: K = words.KolakoskiWord()
sage: K
word: 1221121221221121122121121221121121221221...
sage: K.delta()          # delta returns the exponent trajectory
word: 1221121221221121122121121221121121221221...

There are a lot of open problems related to basic properties of that sequence. For example, we do not know if the sequence is recurrent, that is, whether every finite subword, or factor (finite block of consecutive letters), always reappears. Also, it is still open to prove whether the density of 1 in the sequence is equal to \(1/2\).

In this blog post, I do some computations on its abelian complexity \(p_{ab}(n)\), defined as the number of distinct abelian vectors of subwords of length \(n\) in the sequence. The abelian vector \(\vec{w}\) of a word \(w\) counts the number of occurrences of each letter: \[ w = 122112122122 \quad \mapsto \quad 1^5 2^7 \text{, abelianized} \quad \mapsto \quad \vec{w} = (5, 7) \text{, the abelian vector of } w \]
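
As a quick illustration of the definition, here is a naive plain-Python sketch (not the implementation from trac #17058; the helper name is made up) that collects the abelian vectors of all factors of a given length in a finite prefix:

sage: def abelian_vectors_naive(letters, n, alphabet=(1, 2)):
....:     # Slide a window of length n over the list of letters and record
....:     # the vector of letter counts for each factor.
....:     vectors = set()
....:     for i in range(len(letters) - n + 1):
....:         factor = letters[i:i+n]
....:         vectors.add(tuple(factor.count(a) for a in alphabet))
....:     return vectors
....: 
sage: letters = list(words.KolakoskiWord()[:100])
sage: sorted(abelian_vectors_naive(letters, 10))
[(4, 6), (5, 5), (6, 4)]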

Here are the abelian vectors of subwords of length 10 and 20 in the prefix of length 100 of the Oldenburger sequence. The functions abelian_vectors and abelian_complexity are not in Sage as of now. Code is available at trac #17058 to be merged in Sage soon:

sage: prefix = words.KolakoskiWord()[:100]
sage: prefix.abelian_vectors(10)
{(4, 6), (5, 5), (6, 4)}
sage: prefix.abelian_vectors(20)
{(8, 12), (9, 11), (10, 10), (11, 9), (12, 8)}

Therefore, the prefix of length 100 has 3 vectors of subwords of length 10 and 5 vectors of subwords of length 20:

sage: prefix.abelian_complexity(10)
3
sage: prefix.abelian_complexity(20)
5

I import the OldenburgerSequence from my optional spkg because it is faster than the implementation in Sage:

sage: from slabbe import KolakoskiWord as OldenburgerSequence
sage: Olden = OldenburgerSequence()

I count the number of abelian vectors of subwords of length 100 in the prefix of length \(2^{20}\) of the Oldenburger sequence:

sage: prefix = Olden[:2^20]
sage: %time prefix.abelian_vectors(100)
CPU times: user 3.48 s, sys: 66.9 ms, total: 3.54 s
Wall time: 3.56 s
{(47, 53), (48, 52), (49, 51), (50, 50), (51, 49), (52, 48), (53, 47)}

Number of abelian vectors of subwords of length less than 100 in the prefix of length \(2^{20}\) of the Oldenburger sequence:

sage: %time L100 = map(prefix.abelian_complexity, range(100))
CPU times: user 3min 20s, sys: 1.08 s, total: 3min 21s
Wall time: 3min 23s
sage: from collections import Counter
sage: Counter(L100)
Counter({5: 26, 6: 26, 4: 17, 7: 15, 3: 8, 8: 4, 2: 3, 1: 1})

Let's draw that:

sage: labels = ('Length of factors', 'Number of abelian vectors')
sage: title = 'Abelian Complexity of the prefix of length $2^{20}$ of Oldenburger sequence'
sage: list_plot(L100, color='green', plotjoined=True, axes_labels=labels, title=title)
/~labbe/Files/2014/oldenburger_abelian_100.png

It seems to grow roughly like \(\log(n)\). Let's now consider subwords of length \(2^n\) for \(0\leq n\leq 19\) in the same prefix of length \(2^{20}\):

sage: %time L20 = [(2^n, prefix.abelian_complexity(2^n)) for n in range(20)]
CPU times: user 41 s, sys: 239 ms, total: 41.2 s
Wall time: 41.5 s
sage: L20
[(1, 2), (2, 3), (4, 3), (8, 3), (16, 3), (32, 5), (64, 5), (128, 9),
(256, 9), (512, 13), (1024, 17), (2048, 22), (4096, 27), (8192, 40),
(16384, 46), (32768, 67), (65536, 81), (131072, 85), (262144, 90), (524288, 104)]

I now look at subwords of length \(2^n\) for \(0\leq n\leq 23\) in the longer prefix of length \(2^{24}\):

sage: prefix = Olden[:2^24]
sage: %time L24 = [(2^n, prefix.abelian_complexity(2^n)) for n in range(24)]
CPU times: user 20min 47s, sys: 13.5 s, total: 21min
Wall time: 20min 13s
sage: L24
[(1, 2), (2, 3), (4, 3), (8, 3), (16, 3), (32, 5), (64, 5), (128, 9), (256,
9), (512, 13), (1024, 17), (2048, 23), (4096, 33), (8192, 46), (16384, 58),
(32768, 74), (65536, 98), (131072, 134), (262144, 165), (524288, 229),
(1048576, 302), (2097152, 371), (4194304, 304), (8388608, 329)]

The next graph gathers all of the above computations:

sage: G = Graphics()
sage: legend = 'in the prefix of length 2^{}'
sage: G += list_plot(L24, plotjoined=True, thickness=4, color='blue', legend_label=legend.format(24))
sage: G += list_plot(L20, plotjoined=True, thickness=4, color='red', legend_label=legend.format(20))
sage: G += list_plot(L100, plotjoined=True, thickness=4, color='green', legend_label=legend.format(20))
sage: labels = ('Length of factors', 'Number of abelian vectors')
sage: title = 'Abelian complexity of Oldenburger sequence'
sage: G.show(scale=('semilogx', 2), axes_labels=labels, title=title)
/~labbe/Files/2014/oldenburger_abelian_2e24.png

Linear growth in the above graphics, which have a logarithmic \(x\)-axis, would mean growth in \(\log(n)\). After these experiments, my hypothesis is that the abelian complexity of the Oldenburger sequence grows like \(\log(n)^2\).

References

[O39]Oldenburger, Rufus (1939). "Exponent trajectories in symbolic dynamics". Transactions of the American Mathematical Society 46: 453–466. doi:10.2307/1989933

by Sébastien Labbé at September 27, 2014 10:00 PM

August 27, 2014

Sébastien Labbé

slabbe-0.1.spkg released

This is a summary of the functionalities present in the slabbe-0.1 optional Sage package. It depends on version 6.3 of Sage because it uses RecursivelyEnumeratedSet code that was merged in 6.3. It contains modules on digital geometry, combinatorics on words and more.

Install the optional spkg (depends on sage-6.3):

sage -i http://www.liafa.univ-paris-diderot.fr/~labbe/Sage/slabbe-0.1.spkg

In each of the examples below, you first have to import the module once and for all:

sage: from slabbe import *

To construct the images below, make sure to add the tikz package to the LaTeX preamble so that view is able to compile the tikz code when called:

sage: latex.add_to_preamble("\\usepackage{tikz}")
sage: latex.extra_preamble()
'\\usepackage{tikz}'

Draw part of a discrete plane

sage: p = DiscretePlane([1,pi,7], 1+pi+7, mu=0)
sage: d = DiscreteTube([-5,5],[-5,5])
sage: I = p & d
sage: I
Intersection of the following objects:
Set of points x in ZZ^3 satisfying: 0 <= (1, pi, 7) . x + 0 < pi + 8
DiscreteTube: Preimage of [-5, 5] x [-5, 5] by a 2 by 3 matrix
sage: clip = d.clip()
sage: tikz = I.tikz(clip=clip)
sage: view(tikz, tightpage=True)
/~labbe/Files/2014/discreteplane1pi7.png

Draw part of a discrete line

sage: L = DiscreteLine([-2,3], 5)
sage: b = DiscreteBox([0,10], [0,10])
sage: I = L & b
sage: I
Intersection of the following objects:
Set of points x in ZZ^2 satisfying: 0 <= (-2, 3) . x + 0 < 5
[0, 10] x [0, 10]
sage: I.plot()
/~labbe/Files/2014/discreteline23.png

Double square tiles

This module was developed for the article on the combinatorial properties of double square tiles written with Ariane Garon and Alexandre Blondin Massé [BGL2012]. The original version of the code was written with Alexandre.

sage: D = DoubleSquare((34,21,34,21))
sage: D
Double Square Tile
  w0 = 3032321232303010303230301012101030   w4 = 1210103010121232121012123230323212
  w1 = 323030103032321232303                w5 = 101212321210103010121
  w2 = 2321210121232303232123230301030323   w6 = 0103032303010121010301012123212101
  w3 = 212323032321210121232                w7 = 030101210103032303010
(|w0|, |w1|, |w2|, |w3|) = (34, 21, 34, 21)
(d0, d1, d2, d3)         = (42, 68, 42, 68)
(n0, n1, n2, n3)         = (0, 0, 0, 0)
sage: D.plot()
/~labbe/Files/2014/fibo2.png
sage: D.extend(0).extend(1).plot()
/~labbe/Files/2014/fibo2extend0extend1.png

We have shown that using two invertible operations (called SWAP and TRIM), every double square tile can be reduced to the unit square:

sage: D.plot_reduction()
/~labbe/Files/2014/fibo2reduction.png

The reduction operations are:

sage: D.reduction()
['SWAP_1', 'TRIM_1', 'TRIM_3', 'SWAP_1', 'TRIM_1', 'TRIM_3', 'TRIM_0', 'TRIM_2']

The result of the reduction is the unit square:

sage: unit_square = D.apply(D.reduction())
sage: unit_square
Double Square Tile
  w0 =     w4 =
  w1 = 3   w5 = 1
  w2 =     w6 =
  w3 = 2   w7 = 0
(|w0|, |w1|, |w2|, |w3|) = (0, 1, 0, 1)
(d0, d1, d2, d3)         = (2, 0, 2, 0)
(n0, n1, n2, n3)         = (0, NaN, 0, NaN)
sage: unit_square.plot()
/~labbe/Files/2014/unit_square.png

Since SWAP and TRIM are invertible operations, we can recover every double square from the unit square:

sage: E = unit_square.extend(2).extend(0).extend(3).extend(1).swap(1).extend(3).extend(1).swap(1)
sage: D == E
True

Christoffel graphs

This module was developed for the article on a d-dimensional extension of Christoffel Words written with Christophe Reutenauer [LR2014].

sage: G = ChristoffelGraph((6,10,15))
sage: G
Christoffel set of edges for normal vector v=(6, 10, 15)
sage: tikz = G.tikz_kernel()
sage: view(tikz, tightpage=True)
/~labbe/Files/2014/christoffelgraph6_10_15.png

Bispecial extension types

This module was developed for the article on the factor complexity of infinite sequences generated by substitutions written with Valérie Berthé [BL2014].

The extension type of an ordinary bispecial factor:

sage: L = [(1,3), (2,3), (3,1), (3,2), (3,3)]
sage: E = ExtensionType1to1(L, alphabet=(1,2,3))
sage: E
  E(w)   1   2   3
   1             X
   2             X
   3     X   X   X
 m(w)=0, ordinary
sage: E.is_ordinaire()
True

Creation of a strong-weak pair of bispecial words from a neutral non-ordinary word:

sage: p23 = WordMorphism({1:[1,2,3],2:[2,3],3:[3]})
sage: e = ExtensionType1to1([(1,2),(2,3),(3,1),(3,2),(3,3)], [1,2,3])
sage: e
  E(w)   1   2   3
   1         X
   2             X
   3     X   X   X
 m(w)=0, not ord.
sage: A,B = e.apply(p23)
sage: A
  E(3w)   1   2   3
    1
    2         X   X
    3     X   X   X
 m(w)=1, not ord.
sage: B
  E(23w)   1   2   3
    1          X
    2
    3              X
 m(w)=-1, not ord.

Fast Kolakoski word

This module was written for fun. It uses a Cython implementation inspired by the 10 lines of C code written by Dominique Bernardi and shared during Sage Days 28 in Orsay, France, in January 2011.
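
For context, generating the word itself is simple; here is a naive Python version based on the self-reading property (this is not Bernardi's C trick, just an illustration with a made-up helper name):

sage: def kolakoski_list(n):
....:     # The word's own letters prescribe the run lengths of the word
....:     # being built: run i consists of K[i] copies of the other letter.
....:     K = [1, 2, 2]
....:     i = 2
....:     while len(K) < n:
....:         K.extend([3 - K[-1]] * K[i])
....:         i += 1
....:     return K[:n]
....: 
sage: kolakoski_list(20)
[1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2, 1, 1, 2, 2, 1]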

sage: K = KolakoskiWord()
sage: K
word: 1221121221221121122121121221121121221221...
sage: %time K[10^5]
CPU times: user 1.56 ms, sys: 7 µs, total: 1.57 ms
Wall time: 1.57 ms
1
sage: %time K[10^6]
CPU times: user 15.8 ms, sys: 30 µs, total: 15.8 ms
Wall time: 15.9 ms
2
sage: %time K[10^8]
CPU times: user 1.58 s, sys: 2.28 ms, total: 1.58 s
Wall time: 1.59 s
1
sage: %time K[10^9]
CPU times: user 15.8 s, sys: 12.4 ms, total: 15.9 s
Wall time: 15.9 s
1

This is much faster than the Python implementation available in Sage:

sage: K = words.KolakoskiWord()
sage: %time K[10^5]
CPU times: user 779 ms, sys: 25.9 ms, total: 805 ms
Wall time: 802 ms
1

References

[BGL2012]A. Blondin Massé, A. Garon, S. Labbé, Combinatorial properties of double square tiles, Theoretical Computer Science 502 (2013) 98-117. doi:10.1016/j.tcs.2012.10.040
[LR2014]Labbé, Sébastien, and Christophe Reutenauer. A d-dimensional Extension of Christoffel Words. arXiv:1404.4021 (April 15, 2014).
[BL2014]V. Berthé, S. Labbé, Factor Complexity of S-adic sequences generated by the Arnoux-Rauzy-Poincaré Algorithm. arXiv:1404.4189 (April, 2014).

by Sébastien Labbé at August 27, 2014 04:53 PM

Releasing slabbe, my own Sage package

Over the last two years I have written thousands of lines of private code for my own research, each module having between 500 and 2000 lines of code. The cleanest code corresponds to modules written in conjunction with research articles. People who know me know that I systematically put docstrings and doctests in my code to facilitate its reuse by myself, but also with the idea of sharing it and eventually making it public.

I did not merge that code into Sage because it was not mature enough. Also, when I tried to get a complete module into Sage (see #13069 and #13346), the monstrous, never-evolving #12224 became a dependency of the first, and the second was unofficially reviewed with a request that I split it into smaller chunks to make the review process easier. I never did, because I had already spent too much time on it (making a module 100% doctested takes time). Also, the module corresponded to a published article and I wanted to leave it just like that.

Getting new modules into Sage is hard

In general, the introduction of a complete new module into Sage is hard, especially for beginners. Here are two examples I feel responsible for: #10519 is 4 years old and counting, and its author now has a new job and new responsibilities; in #12996, the author was discouraged by the amount of work requested by the reviewers. There are a lot of things a beginner has to consider to obtain a positive review. And even for a more advanced developer, other difficulties arise. Indeed, a module introduces a lot of new functions and it may also introduce a lot of new bugs... and Sage developers are sometimes reluctant to give it a positive review. And even if it finally gets a positive review, it is not easily available to normal users of Sage until the next release of Sage.

Releasing my own Sage package

Still, I felt the need around me to make my code public. But how? There are people (a few, of course, but I know there are) who are interested in reproducing computations and images done in my articles. This is why I came to the idea of releasing my own Sage package containing my public research code. This way both developers and colleagues who use Sage but are not developers will be able to install and use my code. This will make people more aware of whether there is something useful for them in a module. And if one day somebody tells me: "this should be in Sage", then I will be able to say: "I agree! Do you want to review it?".

Old-style Sage package vs new-style git Sage package

Then I had to choose between the old and the new style for Sage packages. I did not like the new style, because

  • I wanted the history of my package to be independent of the history of Sage,
  • I wanted it to be as easy to install as sage -i slabbe,
  • I wanted it to work on any recent enough version of Sage,
  • I wanted to be able to release a new version and give it to a colleague who could install it right away without having to change their own Sage installation (i.e., without updating checksums).

Therefore, I chose the old style. I based my work on other optional Sage packages, namely the SageManifolds spkg and the ore_algebra spkg.

Content of the initial version

The initial version of the slabbe Sage package has modules concerning four topics: Digital geometry, Combinatorics on words, Combinatorics and Python class inheritance.

/~labbe/Files/2014/slabbe_content.png

For installation or for release notes of the initial version of the spkg, consult the slabbe spkg section of the Sage page of this website.

by Sébastien Labbé at August 27, 2014 04:48 PM

William Stein

What is SageMathCloud: let's clear some things up

[PDF version of this blog post]
"You will have to close source and commercialize Sage. It's inevitable." -- Michael Monagan, cofounder of Maple, told me this in 2006.
SageMathCloud (SMC) is a website that I first launched in April 2013, through which you can use Sage and all other open source math software online, edit LaTeX documents, IPython notebooks, Sage worksheets, track your todo items, and work with many other types of documents. You can write, compile, and run code in most programming languages, and use a color command line terminal. There is realtime collaboration on everything through shared projects, terminals, etc. Each project comes with a default quota of 5GB disk space and 8GB of RAM.

SMC is fun to use, pretty to look at, frequently backs up your work in many ways, is fault tolerant, encourages collaboration, and provides a web-based way to use standard command-line tools.

The Relationship with the SageMath Software

The goal of the SageMath software project, which I founded in 2005, is to create a viable free open source alternative to Magma, Mathematica, Maple, and Matlab. SMC is not mathematics software -- instead, SMC is best viewed by analogy as a browser-based version of a Linux desktop environment like KDE or Gnome. The vast majority of the code we write for SMC involves text editor issues (problems similar to those confronted by Emacs or Vim), personal information management, support for editing LaTeX documents, terminals, file management, etc. There is almost no mathematics involved at all.

That said, the main software I use is Sage, so of course support for Sage is a primary focus. SMC is a software environment that is being optimized for its users, who are mostly college students and teachers who use Sage (or Python) in conjunction with their courses. A big motivation for the existence of SMC is to make Sage much more accessible, since growth of Sage has stagnated since 2011, with the number one show-stopper obstruction being the difficulty of students installing Sage.

Sage is Failing

Measured by the mission statement, Sage has overall failed. The core goal is to provide similar functionality to Magma (and the other Ma's) across the board, and the Sage development model and community have failed to do this, since after 9 years, based on our current progress, we will never get there. There are numerous core areas of research mathematics that I'm personally familiar with (in arithmetic geometry) where Sage has barely moved in years and does only a few percent of what Magma does. Unless there is a viable plan for all these areas to be systematically addressed in a reasonable timeframe, not just arithmetic geometry in Magma, but everything in Mathematica, Maple, etc., we are definitely failing at the main goal I have for the Sage math software project.

I have absolutely no doubt that money combined with good planning and management would make it possible to achieve our mission statement. I've seen this hundreds of times over at a small scale at Sage Days workshops during the last decade. And let's not forget that with very substantial funding, Linux now provides a viable free open source alternative to Microsoft Windows. Just providing Sage developers with travel expenses (and 0 salary) is enough to get a huge amount done, when possible. But all my attempts with foundations and other clients to get any significant funding, at even the level of 1% of the funding that Mathematica gets each year, have failed. For the life of the Sage project, we've never got more than maybe 0.1% of what Mathematica gets in revenue. It's just a fact that the mathematics community provides Mathematica $50+ million a year, enough to fund over 600 fulltime positions, and they won't provide enough to fund one single Sage developer fulltime.

But the Sage mission statement remains, and even if everybody else in the world gives up on it, I HAVE NOT. SMC is my last ditch strategy to provide resources and visibility so we can succeed at this goal and give the world a viable free open source alternative to the Ma's. I wish I were writing interesting mathematical software, but I'm not, because I'm sucking it up and playing the long game.

The Users of SMC

During the last academic year (e.g., April 2014) there were about 20K "monthly active users" (as defined by Google Analytics), 6K weekly active users, and usually around 300 simultaneous connected users. The summer months have been slower, due to less teaching.

Numerically most users are undergraduate students in courses, who are asked to use SMC in conjunction with a course. There's also quite a bit of usage of SMC by people doing research in mathematics, statistics, economics, etc. -- pretty much all computational sciences. Very roughly, people create Sage worksheets, IPython notebooks, and Latex documents in somewhat equal proportions.

What SMC runs on

Technically, SMC is a multi-datacenter web application without specific dependencies on particular cloud provider functionality. In particular, we use the Cassandra database, and custom backend services written in Node.js (about 15,000 lines of backend code). We also use Amazon's Route 53 service for geographically aware DNS. There are two racks containing dedicated computers on opposite sides of campus at the University of Washington with 19 total machines, each with about 1TB SSD, 4TB+ HDD, and 96GB RAM. We also have dozens of VM's running at 2 Google data centers to the east.

A substantial fraction of the work in implementing SMC has been in designing and implementing (and reimplementing many times, in response to real usage) a robust replicated backend infrastructure for projects, with regular snapshots and automatic failover across data centers. As I write this, users have created 66677 projects; each project is a self-contained Linux account whose files are replicated across several data centers.

The Source Code of SMC

The underlying source of SMC, both the backend server and frontend client, is mostly written in CoffeeScript. The frontend (which is nearly 20,000 lines of code) is implemented using the "progressive refinement" approach to HTML5/CSS/Javascript web development. We do not use any Javascript single page app frameworks, though we make heavy use of Bootstrap3 and jQuery. All of the library dependencies of SMC (e.g., CodeMirror, Bootstrap, jQuery) are licensed under very permissive BSD/MIT-style licenses. In particular, absolutely nothing in the Javascript software stack is GPL or AGPL licensed. The plan is that any SMC source code that will be open sourced will be released under the BSD license. Some of the SMC source code is not publicly available, and is owned by University of Washington. But other code, e.g., the realtime sync code, is already available.
Some of the functionality of SMC, for example Sage worksheets, communicate with a separate process via a TCP connection. That separate process is in some cases a GPL'd program such as Sage, R, or Octave, so the viral nature of the GPL does not apply to SMC. Also, of course the virtual machines are running the Linux operating system, which is mostly GPL licensed. (There is absolutely no AGPL-licensed code anywhere in the picture.)

Note that since none of the SMC server and client code links (even at an interpreter level) with any GPL'd software, that code can be legally distributed under any license (e.g., from BSD to commercial).
Also we plan to create a fully open source version of the Sage worksheet server part of SMC for inclusion with Sage. This is not our top priority, since there are several absolutely critical tasks that still must be finished first on SMC, e.g., basic course management.

The SMC Business Model

The University of Washington Center for Commercialization (C4C) has been very involved and supportive since the start of the project. There are no financial investors or separate company; instead, funding comes from UW, some unspent grant funds that were about to expire, and a substantial Google "Academic Education Grant" ($60K). Our first customer is the "US Army Engineer Research and Development Center", which just started a support/license agreement to run their own SMC internally. We don't offer a SaaS product for sale yet -- the options for what can be sold by UW are constrained, since UW is a not-for-profit state university. Currently users receive enhancements to their projects (e.g., increased RAM or disk space) in exchange for explaining to me the interesting research or teaching they are doing with SMC.

The longterm plan is to start a separate for-profit company if we build a sufficient customer base. If this company is successful, it would also support fulltime development of Sage (e.g., via teaching buyouts for faculty, support of students, etc.), similar to how Magma (and Mathematica, etc.) development is funded.

In conclusion, in response to Michael Monagan, you are wrong. And you are right.

by William Stein (noreply@blogger.com) at August 27, 2014 06:55 AM

You don't really think that Sage has failed, do you?

I just received an email from a postdoc in Europe, and very longtime contributor to the Sage project.  He's asking for a letter of recommendation, since he has to leave the world of mathematical software development (after a decade of training and experience), so that he can take a job at hedge fund.  He ends his request with the question:

> P.S. You don't _really_ think that Sage has failed, do you?

After almost exactly 10 years of working on the Sage project, I absolutely do think it has failed to accomplish the stated goal of the mission statement: "Create a viable free open source alternative to Magma, Maple, Mathematica and Matlab.".     When it was only a few years into the project, it was really hard to evaluate progress against such a lofty mission statement.  However, after 10 years, it's clear to me that not only have we not got there, we are not going to ever get there before I retire.   And that's definitely a failure.   

Here's a very rough quote I overheard at lunch today at Sage Days 61, from John Voight, who wrote much quaternion algebra code in Magma: "I'm making a list of what is missing from Sage that Magma has for working with quaternion algebras.  However, it's so incredibly daunting, that I don't want to put in everything.  I've been working on Magma's quaternion algebras for over 10 years, as have several other people.  It's truly daunting how much functionality Magma has compared to Sage."

The only possible way Sage will not fail at the stated mission is if I can get several million dollars a year in money to support developers to work fulltime on implementing interesting core mathematical algorithms.  This is something that Magma has had for over 20 years, and that Maple, Matlab, and Mathematica also have.   That I don't have such funding is probably why you are about to take a job at a hedge fund.    If I had the money, I would try to hire a few of the absolute best people (rather than a bunch of amateurs), people like you, Robert Bradshaw, etc. -- we know who is good. (And clearly I mean serious salaries, not grad student wages!)

So yes, I think the current approach to Sage has failed. I am going to try another approach, namely SageMathCloud. If it works, maybe the world will get a free open source alternative to Magma, Mathematica, etc. Otherwise, maybe the world never ever will. If you care like I do about having such a thing, and you're teaching a course, or whatever, maybe try using SageMathCloud. If enough people use SageMathCloud for college teaching, then maybe a business model will emerge, and Sage will get proper funding.

by William Stein (noreply@blogger.com) at August 27, 2014 06:52 AM

August 23, 2014

Nikhil Peter

GSoC: An End, And A New Beginning

Well, it’s officially done. As per my proposal, the project has been officially completed. It’s been a rollercoaster ride of new experiences, a ton of code(by my count its somewhere around 20k lines or so, but GitHub shows a much larger number) and some unforgettable memories. The app is nowhere near perfect, however, and I […]

by hav3n at August 23, 2014 09:15 AM

August 22, 2014

Simon Spicer

GSoC: Wrapping Up

Today marks the last day of my Google Summer of Code 2014 project. Evaluations are due midday Friday PDT, and code submissions for successfully passed projects start soon thereafter. The end result of my project can be found at Sage Trac Ticket 16773. In total it's just over 2000 lines of Python and Cython code, to be added to the next major Sage release (6.4) if/when it gets a final thumbs-up review.

When I write just the number of lines of code it doesn't sound like very much at all - I'm sure there are GSoC projects this year that produced at least 10k lines of code! However, given that what I wrote is complex mathematical code that needed a) significant optimization, and b) to be mathematically sound in the first place, I'd say that isn't too bad. Especially since the code does what the project description aimed for it to do: compute analytic ranks of rational elliptic curves very quickly with a very low rate of failure. Hopefully this functionality can and will be used to produce numerical data for number theory research in the years to come.

The Google Summer of Code has been a thoroughly rewarding experience for me. It's a great way to sharpen one's coding skills and get paid in the process. I recommend it for any university student who's looking to go into any industry that requires programming skills; I'd apply to do it again next year if I wasn't planning to graduate then!

by Simon Spicer (noreply@blogger.com) at August 22, 2014 09:04 AM

August 18, 2014

Harald Schilly

New combinatorial designs in Sage - by Nathann Cohen

This is a guest post by Nathann Cohen. 


New combinatorial designs in Sage

The first graph below shows a decomposition of $K_{13}$ (i.e. the complete graph on 13 points) into copies of $K_4$. Pick two vertices you like: they appear exactly once together in one of the $K_4$.


The second graph shows a decomposition of $K_{4,4,4}$ (i.e. the complete multipartite graph on $4\times 3$ points) into copies of $K_3$. Pick two vertices you like from different groups: they appear exactly once together in one of the $K_3$.




Sage has gotten quite good at building such decompositions (a specific kind of combinatorial designs) when they exist. This post is about them.

The first object belongs to a family called Balanced Incomplete Block Designs (or $(n,k)$-BIBD), which are defined as "a collection $\mathcal S$ of subsets of size $k$ (here $k=4$) of a set $X$ with $|X|=n$ (here $n=13$), such that any pair of points of $X$ appears in exactly one set of $\mathcal S$".

The second belongs to the family of Transversal Designs (or $TD(k,n)$) which have a similar definition: consider a set $X$ containing $k$ groups (here $k=3$) of $n$ vertices (here $n=4$). A collection $\mathcal S$ of sets, each of which contains one point from each group, is a $TD(k,n)$ if any two points from different groups appear together in exactly one set of $\mathcal S$.

The main problem with combinatorial designs is to know for which parameters they exist. And that is not obvious. Sage does what it can about that:
  • If you want it to build a $(14,4)$-BIBD, it will tell you that none exists.
  • If you want it to build a $(16,4)$-BIBD, it will tell you that one exists.
  • If you want it to build a $(51,6)$-BIBD, it will tell you that it just does not know whether one exists (and nobody knows better at the moment)
Examples here:

sage: designs.balanced_incomplete_block_design(14,4,existence=True)
False
sage: designs.balanced_incomplete_block_design(16,4,existence=True)
True
sage: designs.balanced_incomplete_block_design(51,6,existence=True)
Unknown
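
Dropping the existence flag actually builds the design. As a sketch of what the returned object satisfies (assuming, as for Sage incidence structures, that the blocks are exposed via blocks() and that the points are 0, ..., 15), we can verify the pair-coverage property by brute force:

sage: from itertools import combinations
sage: B = designs.balanced_incomplete_block_design(16, 4)
sage: pair_count = {}
sage: for block in B.blocks():
....:     for pair in combinations(sorted(block), 2):
....:         pair_count[pair] = pair_count.get(pair, 0) + 1
....: 
sage: all(pair_count.get(pair, 0) == 1 for pair in combinations(range(16), 2))
True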

For a developer (and design lover), the game consists in teaching Sage how to build all the combinatorial designs that appear in some research paper -- for BIBDs as well as for Transversal Designs, on which a LOT of sweat has been spent these last months.

For Transversal designs the game is a bit different, as we know that a $TD(k-1,n)$ exists whenever a $TD(k,n)$ exists. Thus, the game consists in finding the largest integer $k_n$ such that a $TD(k_n,n)$ exists. This game is hardly new, and hardly straightforward: In the Handbook of Combinatorial Designs, one can find the table of such $k$ up to $n=10000$ (see here).

The good thing about Sage is that it does not just claim that such a design exists: it also builds it, and there is no better existence proof than that (it is very very quick to check that a combinatorial design is valid). The other good thing is that there is no common database for such data (the Handbook is not updated/printed every night), and that by teaching Sage all new designs found by researchers we build such a database. And it already contains designs that were not known when the Handbook was printed.

Finally, the other other good thing about Sage is that it will soon be able to tell you where those designs come from. Indeed, the most powerful results in the field of Transversal Designs are of the shape "If there exists a $TD(k_1,n_1)$, and a $TD(k_2,n_2)$, ..., and a $TD(k_c,n_c)$, then you can combine them all to obtain a $TD(k,n)$ with $k=k(k_1,...,k_c)$ and $n=n(n_1,...,n_c)$". And it is never very clear how to invert these functions: if you want to build a $TD(k,n)$, which integers $k_1,...,k_c,n_1,...,n_c$ should you pick?

Sage knows. It must know it, in order to build these designs anyway. And you can find that data inside. And soon, we will teach it to give you the bibliographical references of the papers in which you can find the right construction to produce the $TD(k,n)$ that you want. And we will provide the right parameters. And the world will be at peace.

A couple of things before we part:
  • Transversal Designs (TD), Orthogonal Arrays (OA), and Mutually Orthogonal Latin Squares (MOLS) are all equivalent objects.
  • We write a LOT of Transversal Designs code these days, so expect all this to improve very fast.
  • You can learn what Sage knows of combinatorial designs right here.
Finally, there are far too many combinatorial designs for one man to learn. If you love combinatorial designs, come join us: Vincent Delecroix, you, and I have code to write together. And if you know of a related mathematical result that Sage ignores, come tell us: we could not have gone so far without the mathematical knowledge of Julian Abel. And Sage does not know everything yet.

Have fuuuuuuuuuuuuuuuuuuun !

Nathann

by Harald Schilly (noreply@blogger.com) at August 18, 2014 08:32 PM

August 15, 2014

Simon Spicer

How big should Delta be?

Let's take a look at the central formula in my GSoC rank estimation code. Let $E$ be a rational elliptic curve with analytic rank $r$. Then
$$ r < \sum_{\gamma} \text{sinc}^2(\Delta\gamma) =  \frac{1}{\pi\Delta} \left[ C_0 + \frac{1}{2\pi\Delta}\left(\frac{\pi^2}{6}-\text{Li}_2\left(e^{-2\pi\Delta}\right) \right) + \sum_{n=1}^{e^{2\pi\Delta}} c_n\left(1 - \frac{\log n}{2\pi\Delta}\right) \right] $$
where

  • $\gamma$ ranges over the nontrivial zeros of the $L$-function attached to $E$
  • $\Delta$ is a positive parameter
  • $C_0 = -\eta + \log\left(\frac{\sqrt{N}}{2\pi}\right)$; $\eta$ is the Euler-Mascheroni constant $=0.5772\ldots$ and $N$ is the conductor of $E$
  • $\text{Li}_2(x)$ is the dilogarithm function, defined by $\text{Li}_2(x) = \sum_{k=1}^{\infty} \frac{x^k}{k^2}$
  • $c_n$ is the $n$th coefficient of the logarithmic derivative of the $L$-function of $E$.
The thing I want to look at in this post is that parameter $\Delta$. The larger you make it, the closer the sum on the left hand side over the zeros is to the analytic rank, so when trying to determine the rank of $E$ we want to pick as large a $\Delta$ as we can. However, the bigger this parameter is the more terms we have to compute in the sum over the $c_n$ on the right hand side; moreover the number of terms - and thus the total computation time - scales exponentially with $\Delta$. This severely constrains how big we can make $\Delta$; generally a value of $\Delta=2$ may take a second or two for a single curve on SageMathCloud, while $\Delta=3$ may take upwards of an hour. For the average rank project I'm working on I ran the code on 12 million curves using $\Delta=1.3$; the total computation time was about 4 days on SMC.

However, it should be clear that using too large a $\Delta$ is overkill: if you run the code on a curve with $\Delta=1$ and get a bound of zero out, you know that curve's rank exactly zero (since it's at most zero, and rank is a non-negative integer). Thus using larger $\Delta$ values on that curve will do nothing except provide you the same bound while taking much longer to do so.

This raises the question: just how big of a $\Delta$ value is good enough? Can we, given some data defining an elliptic curve, decide a priori what size $\Delta$ to use so that a) the computation returns a bound that is likely to be the true rank of the curve, and b) it will do so in as little time as possible?

The relevant invariant to look at here is the conductor $N$ of the elliptic curve; go back to the formula above and you'll see that the zero sum includes a term which is $O\left(\frac{\log(N)}{2\pi\Delta}\right)$ (coming from the $\frac{1}{\pi \Delta} C_0$ term). This means that the size of the returned estimate will scale with $\log(N)$: for a given $\Delta$, the bound returned on a curve with 10-digit conductor will be about double that which is returned for a 5-digit conductor curve, for example. However, we can compensate for this by increasing $\Delta$ accordingly.

To put it all more concretely we can pose the following questions:
  • Given an elliptic curve $E$ with conductor $N$, how large does $\Delta$ need to be in terms of $N$ so that the returned bound is guaranteed to be the true analytic rank of the curve?
  • Given a conductor size $N$ and a proportionality constant $P \in [0,1]$, how big does $\Delta$ have to be in terms of $N$ and $P$ so that at least $P\cdot 100$ percent of all elliptic curves with conductor of size about $N$ will, when the rank estimation code is run on them with that $\Delta$ value, yield returned bounds that are equal to their true analytic rank?
[Note: in both of the above questions we are making the implicit assumption that the returned rank bound is monotonically decreasing for increasing $\Delta$. This is not necessarily the case: the function $y = \text{sinc}^2(x)$ is not a decreasing function in $x$. However, in practice any upwards fluctuation we see in the zero sum is small enough to be eliminated when we take the floor function to get an integer rank bound.]

A $\Delta$ CHOICE GOOD ENOUGH FOR MOST CURVES


The first question is easier to phrase, but more difficult to answer, so we will defer it for now. To answer the second question, it is useful to mention what we know about the distribution and density of nontrivial zeros of an elliptic curve $L$-function.

Using some complex analysis we can show that, for the $L$-function of an elliptic curve with conductor $N$, the expected number of zeros in the critical strip with imaginary part at most $T$ is $O(T\log N + T\log T)$. That is, expected zero density has two distinct components: a part that scales linearly with log of the conductor, and a part that doesn't scale with conductor (but does scale slightly faster than linearly with how far out you go).

Consider the following: if we let 
$$\Delta(N) = \frac{C_0}{\pi} = \frac{1}{\pi}\left(-\eta+\log\left(\frac{\sqrt{N}}{2\pi}\right)\right)$$
then the first term in the right hand side of the zero sum formula is precisely 1 - this is the term that comes from the $\log N$ part of the zero density. The next term - the one involving $\frac{\pi^2}{6}-\text{Li}_2\left(e^{-2\pi\Delta}\right)$ - is the term that comes from the part independent of $N$; because the right hand side is divided by $\Delta$ it therefore goes to zero as the curve's conductor increases. The third term contains the $c_n$ coefficients which (per Sato-Tate) will be positive half the time and negative half the time, so the entire sum could be positive or negative; we therefore expect its contribution to be small on average when we look at large number of elliptic curves.
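
As a throwaway helper (not part of the GSoC code), this is how one might compute that conductor-dependent $\Delta$ in Sage; the $\alpha$ parameter anticipates the scaled-down choice $\alpha\cdot\frac{C_0}{\pi}$ discussed further below:

sage: def delta_from_conductor(N, alpha=1):
....:     # Delta(N) = alpha * C_0/pi, with C_0 = -eta + log(sqrt(N)/(2*pi)).
....:     return (alpha * (-euler_gamma + log(sqrt(N)/(2*pi))) / pi).n()
....: 
sage: delta_from_conductor(350000)    # largest conductor in the Cremona database
1.26...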

It thus stands to reason that for this value of Delta, and when the conductor $N$ is sufficiently large, the zero sum will be about 1, plus or minus a smallish amount coming from the $c_n$ sum. This argument is by no means rigorous, but we might therefore expect the zero sum to be within 2 of the actual analytic rank most of the time. Couple that with knowledge of the root number and you get a rank upper bound out which is equal to the actual analytic rank in all but a few pathological cases.

Empirical evidence bears this out. I ran my rank estimation code with this choice of $\Delta$ scaling on the whole Cremona database, which contains all elliptic curves up to conductor 350000:
The proportion of curves up to conductor $N$ for which the computed rank bound is strictly greater than rank. The $x$-axis denotes conductor; the $y$-axis is the proportion of all curves up to that conductor for which the returned rank bound was not equal to the true rank (assuming BSD and GRH as always).
As you can see, the percentage of curves for which the rank bound is strictly greater than rank holds fairly constant at about 0.25%. That's minuscule: what this is saying is that if you type in a random Weierstrass equation, there is only about a 1 in 400 chance that the rank bounding code with $\Delta = \frac{C_0}{\pi}$ will return a value that isn't the actual analytic rank. Moreover, the code runs considerably faster than traditional analytic rank techniques, so if you wanted to, for example, compute the ranks of a database of millions of elliptic curves, this would be a good first port of call.

Of course, this $\Delta$ scaling approach is by no means problem-free. Some more detailed analysis will show that, with $\Delta$ scaled as above, the runtime of the code is actually $O(N)$ (omitting log factors), i.e. the asymptotic scaling is actually worse than that of traditional analytic rank methods, which rely on evaluating the $L$-function directly and thus are $O(\sqrt{N})$. It's just that with this code we have some very small constants sitting in front, so the crossover point is at large enough conductor values that neither method is feasible anyway.

This choice of $\Delta$ scaling works for conductor ranges up to about $10^9$; that corresponds to $\Delta \approx 2.5$, which will give you a total runtime of about 10-20 seconds for a single curve on SMC. Increase the conductor by a factor of 10 and your runtime will also go up tenfold.

For curves of larger conductor, instead of setting $\Delta = \frac{C_0}{\pi}$ we can choose to set $\Delta$ to be $\alpha\cdot \frac{C_0}{\pi}$ for any $\alpha \in [0,1]$; the resulting asymptotic runtime will then be $O(N^{\alpha})$, at the expense of having a reduced proportion of elliptic curves where rank bound is equal to true rank.

HOW LARGE DOES $\Delta$ HAVE TO BE TO GUARANTEE TRUE RANK?


When we use $\Delta = \frac{C_0}{\pi}$, the curves for which the returned rank estimate is strictly larger than the true rank are precisely those which have unusually low-lying zeros. For example, the rank 0 curve with Cremona label 256944c1 has a zero with imaginary part at 0.0256 (see here for a plot), compared to an expected value of 0.824. Using $\Delta = \frac{C_0}{\pi}$ on this curve means $\Delta \approx 1.214$; if we compute the corresponding zero sum with this value of $\Delta$ we get a value of 2.07803. The smallest value of $\Delta$ for which we get a zero sum value of less than 2 is empirically about 2.813; at this point taking the floor and invoking parity tells us that the curve's rank is zero.

The above example demonstrates that if we want to guarantee that the returned rank bound is the true analytic rank, we are forced to increase the size of $\Delta$ to something larger than $\frac{C_0}{\pi}$. Do we need to increase $\Delta$ by a fixed value independent of $N$? Do we need to increase it by some constant factor? Or does it need to scale faster than $\log N$? These are hard questions to answer; it comes down to determining how close to the central point the lowest nontrivial zero can be as a function of the conductor $N$ (or some other invariants of $E$), which in turn is intimately related to estimating the size of the leading coefficient of the $L$-series of $E$ at the central point. This is already the topic of a previous post: it is a question that I hope to make progress in answering in my PhD dissertation.

by Simon Spicer (noreply@blogger.com) at August 15, 2014 09:44 AM

August 14, 2014

Simon Spicer

Things are Better in Parallel

A recent improvement I've implemented in my GSoC code is to allow for parallelized computation. The world has rapidly moved to multi-core as a default, so it makes sense to write code that can exploit this. And it turns out that the zero sum rank estimation method that I've been working on can be parallelized in a very natural way.

THE @parallel DECORATOR IN SAGE


But first: how does one compute in parallel in Sage? Suppose I have written a function in a Sage environment (e.g. a SageMathCloud worksheet, .sage file, Sage console etc.) which takes in some input and returns some output. The simple example f below takes in a number n and returns the square of that number.

sage: def f(n):
....:     return n*n
....: 
sage: f(2),f(3),f(5)
(4, 9, 25)

This is a fairly straightforward beast; put in a value, get a value out. But what if we have some computation that requires evaluating that function on a large number of inputs? For example, say we want to compute the sum of the first 10 million squares. If you only have one processor to tap, then you're limited to calling f over and over again in series:

sage: def g(N):
....:     y = 0
....:     for n in range(N+1):
....:         y += f(n)
....:     return y
....: 
sage: %time g(10000000)
CPU times: user 17.5 s, sys: 214 ms, total: 17.7 s
Wall time: 17.6 s
333333383333335000000

In this example you could of course invoke the formula for the sum of the first $n$ squares and just write down the answer without having to add anything up, but in general you won't be so lucky. You can optimize the heck out of f, but in general when you're limited to a single processor you're confined to iterating over all the values you need to consider sequentially.

However, if you have 2 processors available, one could try to write code that splits the work into two roughly equal parts that can run relatively independently. For example, for our function we could compute the sum of all the even squares up to a given bound in one process and the sum of all the odd squares in another, and then add the two results together to get the sum of all squares up to our bound.

Sage has a readily available mechanism to do exactly this: the @parallel decorator. To enable parallel computation in your function, put @parallel above your function definition (the decorator can take some parameters; below ncpus=2 tells it that we want to use 2 processors). Note that we also have to modify the function: now it no longer only takes the bound up to which we must add squares, but also a flag indicating whether we should consider even or odd squares.

sage: @parallel(ncpus=2)
....: def h(N,parity):
....:     y = 0
....:     if parity=="even":
....:         nums = range(0,N+1,2)
....:     elif parity=="odd":
....:         nums = range(1,N+1,2)
....:     for n in nums:
....:         y += f(n)
....:     return y

Instead of calling h with its standard sequence of parameters, we can pass it a list of tuples, where each tuple is a valid sequence of inputs. Sage then sends each tuple of inputs off to an available processor and evaluates the function on them there. What's returned is a generator object that can iterate over all the outputs; we can always see the output directly by calling list() on this returned generator:

sage: for tup in list(h([(1000,"even"),(1000,"odd")])):
....:     print(tup)
....: 
(((1000, 'even'), {}), 167167000)
(((1000, 'odd'), {}), 166666500)

Note that the tuple of inputs is also returned. Because we're doing things in parallel, we need to know which output corresponds to which input, especially since processes may finish at different times and return order is not necessarily the same as the order of the input list.

Finally, we can write a wrapper function which calls our parallelized function and combines the returned results:

sage: def i(N):
....:     y = 0
....:     for output in h([(N,"even"),(N,"odd")]):
....:         y += output[1]
....:     return y
....: 
sage: %time i(10000000)
CPU times: user 1.76 ms, sys: 33.2 ms, total: 35 ms
Wall time: 9.26 s
333333383333335000000

Note that i(10000000) produces the same output as g(10000000) but in about half the time, as the work is split between two processes instead of one. This is the basic gist of parallel computation: write code that can be partitioned into parts that can operate (relatively) independently; run those parts on different processors simultaneously; and then collect the returned outputs and combine them to produce the desired result.

PARALLELIZING THE ZERO SUM COMPUTATION


Let's take a look at the rank estimating zero formula again. Let $E$ be a rational elliptic curve with analytic rank $r$. Then

\begin{align*}
r < \sum_{\gamma} \text{sinc}^2(\Delta\gamma) &=  \frac{1}{\pi \Delta}\left(-\eta+\log\left(\frac{\sqrt{N}}{2\pi}\right)\right) \\
&+ \frac{1}{2\pi^2\Delta^2}\left(\frac{\pi^2}{6}-\text{Li}_2\left(e^{-2\pi\Delta}\right) \right) \\
&+ \frac{1}{\pi \Delta}\sum_{\substack{n = p^k \\ n < e^{2\pi\Delta}}} c_n\left(1 - \frac{\log n}{2\pi\Delta}\right)
\end{align*}
where
  • $\gamma$ ranges over the nontrivial zeros of the $L$-function attached to $E$
  • $\Delta$ is a positive parameter
  • $\eta$ is the Euler-Mascheroni constant $=0.5772\ldots$
  • $N$ is the conductor of $E$
  • $\text{Li}_2(x)$ is the dilogarithm function, defined by $\text{Li}_2(x) = \sum_{k=1}^{\infty} \frac{x^k}{k^2}$
  • $c_n$ is the $n$th coefficient of the logarithmic derivative of the $L$-function of $E$, which is zero when $n$ is not a perfect prime power.
The right hand side of the sum, which is what we actually compute, can be broken up into three parts: the first term involving the curve's conductor $N$; the second term involving the dilogarithm function $Li_2(x)$; and the sum over prime powers. The first two parts are quick to compute: evaluating them can basically be done in constant time regardless of the magnitude of $N$ or $\Delta$.
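
To make that concrete, here is a small sketch (a hypothetical helper, using Sage's euler_gamma and dilog) of the two cheap pieces; only the prime power sum is left out:

sage: def cheap_terms(N, Delta):
....:     # Conductor term plus dilogarithm term of the formula above.
....:     t1 = (-euler_gamma + log(sqrt(N)/(2*pi))) / (pi*Delta)
....:     t2 = (pi^2/6 - dilog(exp(-2*pi*Delta))) / (2*pi^2*Delta^2)
....:     return (t1 + t2).n()
....: 
sage: t = cheap_terms(389, 2.6)    # e.g. conductor 389; this returns essentially instantly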

It is therefore not worth considering parallelizing these two components, since the prime power sum dominates computation time for all but the smallest $\Delta$ values. Instead, what I've done is rewritten the zero sum code so that the prime power sum can be evaluated using multiple cores.

As mentioned in this post, we can turn this sum into one indexed by the primes (and not the prime powers); this actually makes parallelization quite straightforward. Recall that all primes except $2$ are odd, and all primes except $2$ and $3$ are congruent to either $1$ or $5$ modulo $6$. One can scale this up: given a list of small primes $[p_1,p_2,\ldots,p_n]$, all other primes fall into one of a relatively small number of residue classes modulo $p_1 p_2\cdots p_n$. For example, all primes beyond $2$, $3$, $5$ and $7$ have one of the following 48 remainders when you divide them by $210 = 2\cdot 3\cdot 5 \cdot 7$:
\begin{align*}
&1, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79,\\
&83, 89, 97, 101, 103, 107, 109, 113, 121, 127, 131, 137, 139, 143, 149, \\
&151, 157, 163, 167, 169, 173, 179, 181, 187, 191, 193, 197, 199, 209,
\end{align*}
and no other.

If we had 48 processors available, the natural thing to do would be to get each of them to iterate over all integers in a particular residue class up to $e^{2\pi\Delta}$, evaluating the summand whenever that integer is prime, and returning the sum thereof. For example, if the bound was 1 million, then processor 1 would compute and return $\sum_{\substack{n \equiv 1 \ (\text{mod } 210) \\ n < 1000000}} c_n\left(1 - \frac{\log n}{2\pi\Delta}\right)$. Processor 2 would do the same with all integers that are $11$ modulo $210$, and so on.

In reality, we have to figure out a) how many processors are available, and b) partition the work relatively equally among those processors. Thankfully sage.parallel.ncpus.ncpus() succinctly addresses the former, and the latter is achieved by splitting the residue classes into $n$ chunks of approximately equal size (where $n$ is the number of available CPUs) and then getting a given processor to evaluate the sum over those residues in a single chunk.
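
For illustration, here is a minimal sketch of how those chunks might be built. This is not the actual Sage code; it assumes a Sage session, so that gcd() and sage.parallel.ncpus.ncpus() are available:

import sage.parallel.ncpus

def residue_chunks(modulus=210):
    # Residue classes coprime to the modulus: the only classes that can
    # contain primes larger than the prime factors of the modulus itself
    residues = [r for r in range(1, modulus) if gcd(r, modulus) == 1]
    # Split them into one chunk per available CPU, of nearly equal size
    n = sage.parallel.ncpus.ncpus()
    size, extra = divmod(len(residues), n)
    chunks, start = [], 0
    for i in range(n):
        length = size + (1 if i < extra else 0)
        chunks.append(residues[start:start + length])
        start += length
    return [chunk for chunk in chunks if chunk]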

Here is the method I wrote that computes the $\text{sinc}^2$ zero sum with (the option of) multiple processors:

Note that I've defined a subfunction to compute the prime sum over a given subset of residue classes; this is the part that is parallelized. Obtaining the residue chunks and computing the actual summand at each prime are both farmed out to external methods.
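
In that spirit, here is a structural sketch of how such a method might be organized. This is not the actual Sage method: summand(n, t) is a hypothetical helper standing in for the external summand computation, residue_chunks() is as sketched above, is_prime() and the @parallel decorator come from Sage, and the small primes dividing the modulus would be handled separately.

from math import pi, exp
import sage.parallel.ncpus

@parallel(ncpus=sage.parallel.ncpus.ncpus())
def _partial_prime_sum(residues, modulus, bound, t):
    # Sum the summand over every prime below the bound lying in one of the
    # given residue classes modulo `modulus`
    y = 0.0
    for r in residues:
        n = r
        while n < bound:
            if is_prime(n):
                y += summand(n, t)   # hypothetical summand helper
            n += modulus
    return y

def parallel_prime_power_part(Delta, modulus=210):
    t = 2 * pi * Delta
    bound = exp(t)
    inputs = [(chunk, modulus, bound, t) for chunk in residue_chunks(modulus)]
    # @parallel yields ((args, kwargs), output) pairs; keep only the outputs
    total = sum(output for (_args, output) in _partial_prime_sum(inputs))
    # The primes dividing the modulus itself (here 2, 3, 5 and 7) are not
    # covered by the residue classes above and would be added separately
    return total / (pi * Delta)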

Let's see some timings. The machine I'm running Sage on has 12 available processors:

sage: sage.parallel.ncpus.ncpus()
12
sage: E = EllipticCurve([12838,-51298])
sage: Z = LFunctionZeroSum(E)
sage: %time Z._zerosum_sincsquared_fast(Delta=2.6)
CPU times: user 36.7 s, sys: 62.9 ms, total: 36.8 s
Wall time: 36.8 s
2.8283629046
sage: %time Z._zerosum_sincsquared_parallel(Delta=2.6)
CPU times: user 7.87 ms, sys: 133 ms, total: 141 ms
Wall time: 4.06 s
2.8283629046

Same answer in a ninth of the time! Note that the parallelized method has some extra overhead, so even with 12 processors we're unlikely to get a full factor of 12 speedup. Nevertheless, the speed boost will allow us to run the zero sum code with larger $\Delta$ values, allowing us to investigate elliptic curves with even larger conductors.

by Simon Spicer (noreply@blogger.com) at August 14, 2014 12:26 PM

July 31, 2014

William Stein

SageMathCloud -- history and status

2005: I made the first release of the SageMath software project, with the goal of creating a viable free open source alternative to Mathematica, Magma, Maple, and Matlab.

2006: First web-based notebook interface for using Sage, called "sagenb". It was a cutting edge "AJAX" application at the time, though aimed at a small number of users.

2007-2009: Much work on sagenb. But it's still not scalable. Doesn't matter since we don't have that many users.

2011-: Sage becomes "self sustaining" from my point of view -- I have more time to work on other things, since the community has really stepped up.

2012: I'm inspired by the Simons Foundation's (and especially Jim Simons's) "cluelessness" about open source software to create a new online scalable web application to (1) make it easier for people to get access to Sage, especially on Windows, and (2) generate a more long-term sustainable revenue stream to support Sage development. (I was invited to a day-long meeting in NYC at the Simons Foundation's headquarters.)

2012-2013: Spent much of 2012 and early 2013 researching options, building prototypes, and spending some time talking with Craig Citro and Robert Bradshaw (both at Google); launched SageMathCloud in April 2013. SMC got some high-profile use, e.g., by UCLA's 400+ student calculus course.

2014: Much development over the last 1.5 years. Usage has also grown. There is some growth information here. I also have useful Google Analytics data from the whole time, which shows around 4000 unique users per week, with an average session duration of 97 minutes (see attached). The number of users has actually dropped off during the summer, since there is much less teaching going on.

SMC itself is written mostly in CoffeeScript using Node.js on the backend. There's a small amount of Python as well.

It's a highly distributed multi-data center application. The database is Cassandra. The backend server processes are mostly Node.js processes, and also nginx+haproxy+stunnel.

A copy of user data is stored in every data center, and is snapshotted every few minutes, both via:
  • ZFS -- for rolling snapshots that vanish after a month -- and via
  • bup -- for snapshots that remain forever, and are consistent across data centers.
These snapshots are critical for making it possible to trust collaborators on projects to not (accidentally) destroy your work. It is not possible for users to delete the bup snapshots, by design.
Here's what it does: realtime collaborative editing of LaTeX docs, IPython notebooks, and Sage worksheets; a command line terminal; and the ability for several people to collaborate on a project (= a Linux account).
The main applications seem to be:
  • teaching courses with a programming or math software component -- where you want all your students to be able to use something, e.g., IPython, Julia, etc., and don't want to have to deal with trying to get them to install said software themselves. Also, you want to easily be able to share files with students, see their work in realtime, etc. It's much, much easier for people to get going this way than with naked VMs they have to configure -- and I also provide cross-data center replication.
  • collaborative research mathematics -- all co-authors of a paper work together in an SMC project, both writing the paper there and doing computations.
Active development work right now:
  • course management for homework (etc)
  • administration functionality (mainly motivated by self-hosting and better moderation)
  • easy history slider to see all past states of a document
  • switching from bootstrap2 to bootstrap3.

by William Stein (noreply@blogger.com) at July 31, 2014 10:17 PM

Simon Spicer

The average rank of elliptic curves

It's about time I demonstrated the utility of the code I've written - the aim of the game for my GSoC project, after all, is to provide a new suite of tools with which to conduct mathematical research.

First some background. Given an elliptic curve $E$ specified by equation $y^2 = x^3 + Ax + B$ for integers $A$ and $B$, one of the questions we can ask is: what is the average rank of $E$ as $A$ and $B$ are allowed to vary? Because there are an infinite number of choices of $A$ and $B$, we need to formulate this question a bit more carefully. To this end, let us define the height of $E$ to be
$$ h(E) = \max\{|A|^3,|B|^2\} $$
[Aside: The height essentially measures the size of the coefficients of $E$ and is thus a fairly decent measure of the arithmetic complexity of the curve. We need the 3rd and 2nd powers in order to make the height function scale appropriately with the curve's discriminant.]

We can then ask: what is the limiting value of the average rank of all curves up to height $X$, as $X$ gets bigger and bigger? Is it infinity? Is it 0? Is it some non-trivial positive value? Does the limit even exist? It's possible that the average rank, as a function of $X$, could oscillate about and never stabilize.

There are strong heuristic arguments suggesting that the answer should be exactly $\frac{1}{2}$. Specifically, as the height bound $X$ gets very large, half of all curves should have rank 0, half should have rank 1, and a negligible proportion should have rank 2 or greater.

Even as recently as five years ago we knew nothing concrete unconditionally about average curve rank. There are some results by Brumer, Heath-Brown and Young providing successively better upper bounds on the average rank of curves ordered by height (2.3, 2 and $\frac{25}{14}$ respectively), but these results are contingent on the Riemann Hypothesis.

However, starting in 2011 Manjul Bhargava, together with Christopher Skinner and Arul Shankar, published a series of landmark papers (see here for a good expository slideshow, and here and here for two of the latest publications) proving unconditionally that the average rank - that is, the limiting value of the average rank of all elliptic curves up to height $X$ - is bounded above by 0.885. A further consequence of their work is that at least 66% of all elliptic curves satisfy the rank part of the Birch and Swinnerton-Dyer Conjecture.

To a computationally-minded number theorist, the obvious question to ask is: Does the data support these results? I am by no means the first person to ask this question. Extensive databases of elliptic curves under various orderings have already been compiled, most notably those by Cremona (ordered by conductor) and Stein-Watkins (ordered essentially by discriminant). However, as yet no extensive tabulation of height-ordered elliptic curves has been carried out.

Here is a summarized table of elliptic curves with height at most 10000 - a total of 8638 curves, and the ranks thereof (all computations done in Sage, of course):

Rank      # Curves    %
0         2988        34.59%
1         4307        49.86%
2         1286        14.89%
3         57          0.66%
$\ge$4    0           0.00%
Total:    8638

Thus the average rank of elliptic curves is 0.816 when the height bound is 10000. This is worrisome: the average is significantly different from the value of 0.5 we're hoping to see.

The situation gets even worse when we go up to height bound 100000:

Rank      # Curves    %
0         19492       33.11%
1         28818       48.96%
2         9747        16.56%
3         801         1.36%
$\ge$4    4           0.01%
Total:    58862

This yields an average rank of 0.862 for height bound 100000. Bizarrely, the average rank is getting bigger, not smaller!

[Note: the fact that 0.862 is close to Bhargava's asymptotic bound of 0.885 is coincidental. Run the numbers for height 1 million, for example, and you get an average rank of 0.8854, which is bigger than the above asymptotic bound. Observationally, we see the average rank continue to increase as we push out to even larger height bounds beyond this.]

So what's the issue here? It turns out that a lot of the asymptotic statements we can make about elliptic curves are precisely that: asymptotic, and we don't yet have a good understanding of the associated rates of convergence. Elliptic curves, ornery beasts that they are, can seem quite different from their limiting behaviour when one only considers curves with small coefficients. We expect (hope?) that the average will eventually turn around and start to decrease back down to 0.5, but the exact point at which that happens is as yet unknown.

This is where I come in. One of the projects I've been working on (with Wei Ho, Jen Balakrishnan, Jamie Weigandt, Nathan Kaplan and William Stein) is to compute the average rank of elliptic curves for as large a height bound as possible, in the hope that we will get results a bit more reassuring than the above. The main steps of the project are thus:

  1. Generate an ordered-by-height database of all elliptic curves up to some very large  height bound (currently 100 million; about 18.5 million curves);
  2. Use every trick in the book to compute the ranks of said elliptic curves;
  3. Compute the average of said ranks.
Steps 1 and 3 are easy. Step 2 is not. Determining the rank of an elliptic curve is a notoriously hard problem - no unconditional algorithm with known complexity currently exists - especially when you want to do it for millions of curves in a reasonable amount of time. Sage, for example, already has a rank() method attached to its EllipticCurve class; if you pass the right parameters to it, the method will utilize an array of approaches to get a value out that is (assuming standard conjectures) the curve's rank. However, its runtime for curves of height near 100 million is on the order of 20 seconds; set it loose on 18.5 million such curves and you're looking at a total computation time of about 10 CPU years.
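
For example, a quick illustration of that method in the Sage terminal on a tiny curve (the rank-1, conductor-37 curve $y^2 = x^3 - 16x + 16$; curves of such small height are of course dispatched almost instantly):

sage: E = EllipticCurve([-16,16])
sage: E.rank()
1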

Enter GSoC project stage left. At the expense of assuming the Generalized Riemann Hypothesis and the Birch and Swinnerton-Dyer Conjecture, we can use the zero sum rank bounding code I've been working on to quickly compute concrete upper bounds on an elliptic curve's rank. This approach has a couple of major advantages to it:
  • It's fast. In the time it's taken me to write this post, I've computed rank bounds on 2.5 million curves.
  • Runtime is essentially constant for any curve in the database; we don't have to worry about how the method scales with height or conductor. If we want to go up to larger height bounds at a later date, no problem.
As always, some Terms and Conditions apply. The rank bounding code only gives you an upper bound on the rank: if, for example, you run the code on a curve and get the number 4 back, there's no way to determine with this method if the curve's rank is 4, or if it is really some non-negative integer less than 4. 

Note: there is an invariant attached to an elliptic curve called the root number which is efficiently computable, even for curves with large conductor (it took less than 20 minutes to compute the root number for all 18.5 million curves in our database). The root number is one of two values: -1 or +1; if it's -1 the curve has odd analytic rank, and if it's +1 the curve has even analytic rank. Assuming BSD we can therefore always easily tell if the curve's rank is even or odd. My GSoC rank estimation code takes the root number into account, so in the example above, a returned value of 4 tells us that the curve's true rank is one of three values: 0, 2 or 4.

Even better, if the returned value is 0 or 1, we know this must be the actual algebraic rank of the curve: firstly, there's no ambiguity as to what the analytic rank is - it has to be the returned 0 or 1; secondly, the BSD conjecture has been proven in the rank 0 and 1 cases. Thus even though we are a priori only computing analytic rank upper bounds, for some proportion of curves we've found the actual algebraic rank.

[Note that the rank bounding code I've written is predicated on knowing that all nontrivial zeros of an elliptic curve $L$-function lie on the critical line, so we still have to assume the Generalized Riemann Hypothesis.]

Thus running the rank bound code on the entire database of curves is a very sensible first step, and it's what I'm currently doing. It's computationally cheap to do - on SageMathCloud, using a Delta value of 1.0, the runtime for a single curve is about 4 milliseconds. Moreover, for some non-negligible percentage of curves the bounds will be observably sharp - based on some trial runs I'm expecting about 20-30% of the computed bounds to be 0 or 1.

That's about 4 million curves for which we won't have to resort to much more expensive rank finding methods. Huge savings!

by Simon Spicer (noreply@blogger.com) at July 31, 2014 07:37 AM

July 21, 2014

Simon Spicer

How to efficiently enumerate prime numbers

There's a part in my code that requires me to evaluate a certain sum
$$ \sum_{p\text{ prime }< \,M} h_E(p) $$
where $h_E$ is a function related to a specified elliptic curve that can be evaluated efficiently, and $M$ is a given bound that I know. That is, I need to evaluate the function h_E at all the prime numbers less than $M$, and then add all those values up.

The question I hope to address in this post is: how can we do this efficiently as $M$ gets bigger and bigger? Specifically, what is the best way to compute a sum over all prime numbers up to a given bound when that bound can be very large?

[For those who have read my previous posts (you can skip this paragraph if you haven't - it's not the main thrust of this post), what I want to compute is, for an elliptic curve $E/\mathbb{Q}$, the analytic rank bounding sum $ \sum_{\gamma} \text{sinc}^2(\Delta \gamma) $ over the zeros of $L_E(s)$ for positive parameter $\Delta$; this requires us to evaluate the sum $ \sum_{n < \exp(2\pi\Delta)} c_n\cdot(2\pi\Delta-\log n)$. Here the $c_n$  are the logarithmic derivative coefficients of the completed $L$-function of $E$. Crucially $c_n = 0$ whenever $n$ isn't a prime power, and we can lump together all the terms coming from the same prime; we can therefore express the sum in the form you see in the first paragraph.]

As with so many things in mathematical programming, there is a simple but inefficient way to do this, and then there are more complicated and ugly ways that will be much faster. And as has been the case with other aspects of my code, I've initially gone with the first option to make sure that my code is mathematically correct, and then gone back later and reworked the relevant methods in an attempt to speed things up.

METHOD 1: SUCCINCT BUT STUPID


Here's a Python function that will evaluate the sum over primes. The function takes two inputs: a function $h_E$ and an integer $M$, and returns a value equal to the sum of $h_E(p)$ for all primes less than $M$. We're assuming here that the primality testing function is_prime() is predefined.
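
Something along the lines of the following minimal sketch (illustrative rather than the exact code timed below; is_prime() is assumed to be in scope, e.g. Sage's):

def prime_sum_v1(h_E, M):
    # Evaluate h_E at every prime below M, the obvious (slow) way
    y = 0
    for n in range(2, M):
        if is_prime(n):
            y += h_E(n)
    return y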


As you can see, we can achieve the desired outcome in a whopping six lines of code. Nothing mysterious going on here: we simply iterate over all integers less than our bound and test each one for primality; if that integer is prime, then we evaluate the function h_E at that integer and add the result to y. The variable y is then returned at the end.

Why is this a bad way to evaluate the sum? Because there are far more composite integers than there are primes. According to the prime number theorem, the proportion of integers up to $M$ that are prime is approximately $\frac{1}{\log M}$. For my code I want to compute with bounds on the order of $M = e^{8\pi} \sim 10^{11}$; the proportion of integers up to this bound that are prime is correspondingly about $\frac{1}{8\pi} \sim 0.04$. That is, 96% of the integers we iterate over aren't prime, and the cycles spent on them are wasted.

Just how inefficient this method is of course depends on how quickly we can evaluate the primality testing function is_prime(). The best known deterministic primality testing algorithm has running time that scales with (at most) the 6th power of $\log n$, where $n$ is the number being tested. This places primality testing in the class of polynomial-time algorithms, which means the runtime of the function scales relatively well with the size of the input. However, what kills us here is the sheer number of times we have to call is_prime() - on all integers up to our bound $M$ - so even if it ran in constant time, the prime_sum() function's running time would still scale with the magnitude of $M$.

METHOD 2: SKIP THOSE $n$ WE KNOW ARE COMPOSITE


We can speed things up considerably by noting that apart from 2, all prime numbers are odd. We are therefore wasting a huge amount of time running primality tests on integers that we know a priori are composite. Assuming is_prime() takes a similar time to execute as our coefficient function h_E(), we could therefore roughly halve the runtime of the prime sum function by skipping the even integers and just checking odd numbers for primality.

We can go further. Apart from 2 and 3, all primes yield a remainder of 1 or 5 when you divide them by 6 (because all primes except for 2 are 1 (modulo 2) and all primes except for 3 are 1 or 2 (modulo 3)). We can therefore skip all integers that are 0, 2, 3 or 4 modulo 6; this means we have to check for primality on only one third of all the integers less than $M$.

Here's a second version of the prime_sum() function that does this:
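
A sketch of what that might look like (again illustrative: the primes 2 and 3 are added up front, and thereafter only integers congruent to 1 or 5 modulo 6 are tested for primality):

def prime_sum_v2(h_E, M):
    # The primes 2 and 3 are dealt with separately
    y = 0
    if M > 2:
        y += h_E(2)
    if M > 3:
        y += h_E(3)
    # Every other prime is 1 or 5 mod 6, so only those residues are tested
    n = 5
    while n < M:
        if is_prime(n):
            y += h_E(n)
        m = n + 2              # the matching 1-mod-6 candidate
        if m < M and is_prime(m):
            y += h_E(m)
        n += 6
    return y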


Of course we could go even further with the technique by looking at remainders modulo $p$ for more primes $p$ and combining the results: for example, all primes outside of 2, 3 and 5 can only have a remainder of 7, 11, 13, 17, 19, 23 or 29 modulo 30. However, the further you go the more special cases you need to consider, and the uglier your code becomes; as you can see, just looking at cases modulo 6 requires us to write a function about three times as long as previously. This method therefore will only be able to take us so far before the code we'd need to write would become too unwieldy for practicality.

METHOD 3: PRIME SIEVING...


This second prime_sum() version is a rudimentary example of a technique called prime sieving. The idea is to use quick computations to eliminate a large percentage of integers from consideration in a way that doesn't involve direct primality testing, since this is computationally expensive. Sieving techniques are an entire field of research in their own right, so I thought I'd just give as an example one of the most famous methods: the Sieve of Eratosthenes (named after the ancient Greek mathematician who is thought to have first come up with the idea). This takes as input a positive bound $M$ and returns a list of all prime numbers less than $M$. The method goes as follows:
  1. Start with a list of boolean flags indexed by the numbers 2 through $M$, and set all of them to True. 
  2. Starting at the beginning of the list, let $i$ be the index of the next True entry not yet considered. Leave the entry at index $i$ as True, but set the entries at all larger multiples of $i$ (that is, at indices $2i, 3i, \ldots$) to False.
  3. Repeat step 2 until the next such index $i$ exceeds $\sqrt{M}$.
  4. Return a list of all integers $i$ such that the entry at index $i$ is True.
This is definitely a case where a (moving) picture is worth a thousand words:
[Animation: the Sieve of Eratosthenes being used to generate all primes less than 121. Courtesy Wikipedia: "Sieve of Eratosthenes animation", licensed under CC BY-SA 3.0 via Wikimedia Commons.]
How and why does this work? By mathematical induction, at each iteration the index of the first True entry will always be prime. However any multiple thereof is by definition composite, so we can walk along the list and flag them as not prime. Wash, rinse, repeat. We can stop at $\sqrt{M}$, since all composite numbers at most $M$ in magnitude must have at least one prime factor at most $\sqrt{M}$ in size.

Here is a third version of our prime_sum() function that utilizes the Sieve of Eratosthenes:
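
A sketch of such a version (a plain, unoptimized sieve rather than the exact code timed below):

def prime_sum_v3(h_E, M):
    # flags[n] ends up True exactly when n is prime
    flags = [True] * M
    flags[0] = flags[1] = False
    i = 2
    while i * i < M:
        if flags[i]:
            # i is prime; cross off all of its larger multiples
            for j in range(i * i, M, i):
                flags[j] = False
        i += 1
    # Sum h_E over the surviving indices, i.e. over the primes below M
    return sum(h_E(n) for n in range(2, M) if flags[n])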

Let's see how the three versions stack up against each other time-wise in the Sage terminal. I've saved the three functions in a file called prime_sum_functions.py, which I then import up front (if you want to do the same yourself, you'll need to import or define appropriate is_prime() and sqrt() functions at the top of the file). I've also defined a sample toy function h_E() and bound M:

sage: from prime_sum_functions import *
sage: def h_E(n): return sin(float(n))/float(n)
sage: M = 10000
sage: prime_sum_v1(h_E,M)
0.19365326958140347
sage: prime_sum_v2(h_E,M)
0.19365326958140347
sage: prime_sum_v3(h_E,M)
0.19365326958140347
sage: %timeit prime_sum_v1(h_E,M)
1 loops, best of 3: 363 ms per loop
sage: %timeit prime_sum_v2(h_E,M)
1 loops, best of 3: 206 ms per loop
sage: %timeit prime_sum_v3(h_E,M)
10 loops, best of 3: 86.8 ms per loop

Good news! All three functions (thankfully) produce the same result. And we see version 2 is about 1.8 times faster than version 1, while version 3 is four times as fast. These ratios remained roughly the same when I timed the functions on larger bounds, which indicates that the three versions have the same or similar asymptotic scaling - this should be expected, since no matter what we do we will always have to check something at each integer up to the bound.

METHOD 4: ...AND BEYOND


It should be noted, however, that the Sieve of Eratosthenes as implemented above would be a terrible choice for my GSoC code. This is because in order to enumerate the primes up to $M$ we need to create a list in memory of size $M$. This isn't an issue when $M$ is small, but for my code I need $M \sim 10^{11}$; an array of booleans that size would take up about 12 gigabytes in memory, and any speedups we get from not having to check for primality would be completely obliterated by read/write slowdowns due to working with an array that size. In other words, while the Sieve of Eratosthenes has great time complexity, it has abysmal space complexity.

Thankfully, more memory-efficient sieving methods exist that drastically cut down the space requirements. The best of these - for example, the Sieve of Atkin - need about $\sqrt{M}$ space. For $M \sim 10^{11}$ this translates to only about 40 kilobytes; much more manageable.

Of course, there's always a downside: bleeding edge prime enumeration methods are finicky and intricate, and there are a plethora of ways to get them wrong when implementing them. At some point squeezing an extra epsilon of speedup from your code is no longer worth it in terms of the time and effort it will take to get there. For now, I've implemented a more optimized version of the second prime_sum() function in my code (where we skip over all integers that are obviously not prime), since that is my happy middle ground. If I have time at the end of the project I will revisit the issue of efficient prime enumeration and try to implement a more optimized sieving method, but that is a tomorrow problem.

by Simon Spicer (noreply@blogger.com) at July 21, 2014 03:00 PM

July 14, 2014

Simon Spicer

Cythonize!

I'm at the stage where my code essentially works: it does everything I initially set out to have it do, including computing the aforementioned zero sums for elliptic curve $L$-functions. However, the code is written in pure Python, and it is therefore not as fast as it could be.

This is often not a problem; Python is designed to be easy to read and maintain, and I'm hoping that the Python code I wrote is both of those. If we were just planning to run it on elliptic curves with small coefficients - for example, the curve represented by the equation $y^2=x^3-16x+16$ - this wouldn't be an issue. Curves with small coefficients have small conductors and thus few low-lying zeros near the central point, which allows us to run the zero sum code on them with small Delta parameter values. A small Delta value means the computation will finish very quickly regardless of how efficiently it's implemented, so it probably wouldn't be worth my while trying to optimize the code in that case.

To illustrate this point, here is the first, most high-level, generic version of the method that computes the sum $\sum_{\gamma} \text{sinc}^2(\Delta \gamma)$ over the zeros of a given elliptic curve $L$-function (minus documentation):

[Of course, there's plenty going on in the background here. I have a separate method, self.cn() which computes the logarithmic derivative coefficients, and I call the SciPy function spence() to compute the part of the sum that comes from the Fourier transform of the digamma function $\frac{\Gamma^{\prime}}{\Gamma}$. Nevertheless, the code is simple and straightforward, and (hopefully) it's easy to follow the logic therein.]
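
Stripped of the class structure, the computation amounts to something like the following sketch. This is not the actual Sage method: cn(n) is a stand-in for self.cn(), N is the curve's conductor, and the dilogarithm is computed directly from its defining series rather than via spence().

from math import pi, log, sqrt, exp

def dilog(x, terms=100):
    # Li_2(x) = sum_{k >= 1} x^k / k^2; converges quickly for 0 <= x < 1
    return sum(x**k / k**2 for k in range(1, terms + 1))

def zerosum_sincsquared_naive(cn, N, Delta=1.0):
    eta = 0.5772156649015329      # Euler-Mascheroni constant
    t = 2 * pi * Delta
    # The two constant-time terms of the formula
    y = (-eta + log(sqrt(N) / (2 * pi))) / (pi * Delta)
    y += (pi**2 / 6 - dilog(exp(-t))) / (2 * pi**2 * Delta**2)
    # The prime power sum: c_n vanishes unless n is a prime power
    for n in range(2, int(exp(t))):
        c = cn(n)
        if c != 0:
            y += c * (1 - log(n) / t) / (pi * Delta)
    return y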

However, the whole point of my GSoC project is to produce code that can be used for mathematical research; ultimately we want to push computations as far as we can and run the code on elliptic curves with large conductor, since curves with small conductor are already well-understood. Ergo, it's time I thought about going back over what I've written and seeing how I can speed it up.

There are two distinct ways to achieve speedups. The first is to rewrite the code more cleverly to eliminate unnecessary loops, coercions, function calls, etc. Here is a second version I have written of the same function (still in Python):
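
In outline, the reworked prime power sum looks something like this structural sketch (not the actual method: c_prime_power(p, k) is a hypothetical helper standing in for the in-lined computation of $c_{p^k}$ from the curve data, prime_range() is Sage's prime enumerator, and the two constant terms of the formula are unchanged from before and omitted here):

from math import pi, log, exp

def prime_power_part_fast(c_prime_power, Delta=1.0):
    t = 2 * pi * Delta
    bound = exp(t)
    y = 0.0
    # Iterate over primes only, rather than over every integer up to the bound
    for p in prime_range(int(bound)):
        # Handle p and all of its powers p^2, p^3, ... below the bound at once
        q, k = p, 1
        while q < bound:
            y += c_prime_power(p, k) * (1 - log(q) / t)
            q *= p
            k += 1
    return y / (pi * Delta)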

The major change I've made between the two versions is improving how the sum involving the logarithmic derivative coefficients is computed - captured in the variable y. In the first version, I simply iterated over all integers $n$ up to the bound $t$, calling the method self.cn() each time. However, the logarithmic derivative coefficient $c_n$ is zero whenever $n$ is not a prime power, and knowing its value for a prime $n=p$ allows us to efficiently compute its value at any power $p^k$ of that prime. It therefore makes sense to do everything "in-house": eliminate the method call to self.cn(), iterate only over primes, and compute the logarithmic derivative coefficients for all powers of a given prime together.

Let's see how the methods match up in terms of speed. Below we run the two versions of the zero sum method on the elliptic curve $E: y^2=x^3-16x+16$, which is a rank 1 curve of conductor 37:

sage: import sage.lfunctions.zero_sums as zero_sums
sage: ZS = zero_sums.LFunctionZeroSum(EllipticCurve([-16,16]))
sage: ZS._zerosum_sincsquared(Delta=1)
1.01038406984
sage: ZS._zerosum_sincsquared_fast(Delta=1)
1.01038406984
sage: %timeit ZS._zerosum_sincsquared(Delta=1)
10 loops, best of 3: 20.5 ms per loop
sage: %timeit ZS._zerosum_sincsquared_fast(Delta=1)
100 loops, best of 3: 3.46 ms per loop

That's about a sixfold speedup we've achieved, just by reworking the section of the code that computes the $c_n$ sum.

The downside, of course, is that the code in method version 2 is more complicated - and thus less readable - than that in version 1. This is often the case in software development: you can write code that is elegant and easy to read but slow, or you can write code that is fast but horribly complicated and difficult to maintain. And when it comes to mathematical programming, unfortunately, sometimes the necessity for speed trumps readability.

The second major way to achieve speedups is to abandon pure Python and switch to a more low-level language. I could theoretically take my code and rewrite it in C, for example; if done relatively intelligently I'm sure the resulting code would blow what I have here out the water in terms of speed. However, I have no experience writing C code, and even if I did getting the code to interface with the rest of the Sage codebase would be a major headache.

Thankfully there is a happy middle ground: Cython. Cython is a programming language - technically a superset of Python - that allows direct interfacing with C and C++ data types and structures. Code written in Cython can be as fast as C code and nearly as readable as pure Python. Crucially, because it's so similar to Python it doesn't require rewriting all my code from scratch. And Sage already knows how to deal with Cython, so there aren't any compatibility issues there.

I am therefore in the process of doing exactly that: rewriting my code in Cython. Mostly this is just a cut-and-paste job and is pleasingly hassle-free; however, in order to achieve the desired speedups, the bottleneck methods - such as our $\text{sinc}^2$ zero sum method above - must be modified to make use of C data types.

Here is the third, most recent version of the _zerosum_sincsquared() method for our zero sum class, this time written in Cython:

Definitely longer and uglier. I now must declare my (C) variables up front; previously Python just created them on the fly, which is nice but slower than allocating memory space for the variables a priori. I've eliminated the use of complex arithmetic, so that everything can be done using C integer and float types. I still iterate over all primes up to the bound $t$; however, I now deal with those primes that divide the conductor of $E$ (for which the associated summand is calculated slightly differently) beforehand, so that in the main loop I don't have to check at each point whether my prime $p$ divides the conductor or not. [This last check is expensive: the conductor $N$ can be very large - too large even to cast as a C long long - so we would have to use slower Python or Sage data types to represent it. Better to get rid of the check altogether.]

Let's see how this version holds up in a speed test. The Cython code has already been built into Sage and the class loaded into the global namespace, so I can just call it without having to attach or load any file:

sage: ZS = LFunctionZeroSum(EllipticCurve([-16,16]))
sage: ZS._zerosum_sincsquared_fast(Delta=1)
1.01038406984
sage: %timeit ZS._zerosum_sincsquared_fast(Delta=1)
1000 loops, best of 3: 1.72 ms per loop

The good news: the Cythonized version of the method produces the same output as the Python versions, and it's definitely faster. The bad news: the speedup is only about a factor of 2, which isn't hugely impressive given how much uglier the code is.

Why is this? Crucially, we still iterate over all integers up to the bound $t$, testing for primality at each step. This is very inefficient: most integers are not prime (in fact, asymptotically 0 percent of all positive integers are prime); we should be using sieving methods to eliminate primality testing at those integers which we know before checking are composite. For example, we should at the very least only ever iterate over the odd numbers beyond 3. That immediately halves the number of primality tests we have to do, and we should therefore get a comparable speedup if primality testing is what is dominating the runtime in this method.

This is therefore what I hope to implement next: rework the zero sum code yet again to incorporate prime sieving. This has applicability beyond just the $\text{sinc}^2$ method: all explicit formula-type sums at some point invoke summing over primes or prime powers, so having access to code that can do this quickly would be a huge boon.

by Simon Spicer (noreply@blogger.com) at July 14, 2014 11:37 AM

July 11, 2014

Nikhil Peter

Sage Android – UI Update

It’s been a busy week so far in the land of UI improvements. Disclaimer: I’m pretty bad at UI, so any and all suggestions are welcome. Firstly, last week’s problems have been resolved, viz. everything is saved nicely on orientation change, including the interacts, which required quite a bit of effort. In short, it involved […]

by hav3n at July 11, 2014 05:59 PM