## December 01, 2013

### Vince Knight

#### How to handle float error for plots near discontinuities in Sage

Last week I read this blog post by +Patrick Honner. In it he plots a graph of a function with a removable discontinuity on Desmos and, when zooming in far enough, he gets some errors.

I was waiting around to start this (ridiculously fun) Hangout on Air with a bunch of mathematicians hosted by +Amy Robinson of +Science on Google+:

While waiting I rushed to write this blog post claiming that if you did the same thing with +Sage Mathematical Software System you did not get any errors. It was quickly pointed out to me on Twitter and in the comments that I just had not zoomed in enough.

I edited the blog post to first of all change the title (it was originally 'When Sage doesn't fail' but now reads 'When Sage also fails') and also to include some code that shows that the exact same error appears.

On G+, +Robert Jacobson (who's the owner of the Mathematics community which you should check out if you haven't already) pointed out that you could surely use Sage's exact number fields to avoid this error.

He put together some code and shared it with me on +The Sagemath Cloud that does exactly this. Here's a slight tweak of the code Robert wrote (hopefully you haven't changed your mind and still don't mind if I blog this Robert!):

f(x) = (x + 2) / (x ^ 2 + 3 * x + 2)  # Define the function
discontinuity = -1  # The above function has two discontinuities; this one I don't want to plot
hole = -2  # The hole described by Patrick Honner

def make_list_for_plot(f, use_floats=False, zoom_level=10^7, points=1001):
    count = 0  # Adding this to count how many tries fail
    z = zoom_level
    xmin = hole - 10/z  # Setting lower bound for plot
    xmax = min(hole + 10/z, discontinuity - 1/10)  # Setting upper bound for plot only up until the second (messy) discontinuity
    x_vals = srange(start=xmin, end=xmax, step=(xmax-xmin)/(points-1), universe=QQ, check=True, include_endpoint=True)
    # If we are using floating point arithmetic, cast all QQ numbers to floating point numbers using the n() function.
    if use_floats:
        x_vals = map(n, x_vals)
    lst = []
    for x in x_vals:
        if x != hole and x != discontinuity:  # Robert originally had a try/except statement here to pick up ANY discontinuities. This is not as good but I thought was a bit fairer...
            y = f(x)
            lst.append((x, y))
    return lst

The code above makes sure we stay away from the discontinuity but also allows us to swap over to floating point arithmetic to see the effect. The following plots the function using exact arithmetic:

exact_arithmetic = make_list_for_plot(f)
p = list_plot(exact_arithmetic, plotjoined=True)  # Plot f
p += point([hole, -1], color='red', size=30)  # Add a point
show(p)

We see the plot here (with no errors):

To produce the plots with floating point arithmetic:

float_arithmetic = make_list_for_plot(f, use_floats=True)
p = list_plot(float_arithmetic, plotjoined=True)  # Plot f
p += point([hole, -1], color='red', size=30)  # Add a point
show(p)

We see that we now get the numerical error:

Just to confirm, here are the same two plots with an even higher zoom:

To change the zoom, try out the code in the Sage cell linked here: simply change the zoom_level, which was set to $10^{12}$ for the last two plots.

(Going any higher than $10^{14}$ seems to bring in another error that does not get picked up by the if statement in my function definition: Robert originally had a try/except approach but I thought that this was in a way a 'fairer' way of doing things. Ultimately though it's very possible and easy to get an error-free plot.)
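Robert's exact-arithmetic point is easy to reproduce outside Sage too. Here is a minimal sketch in plain Python using the standard-library fractions module (the names are mine, not from Robert's code): with exact rationals the value near the hole is exactly what the simplified form $1/(x+1)$ predicts, while the float version suffers catastrophic cancellation.

```python
from fractions import Fraction

def f(x):
    # Patrick Honner's function, with a removable discontinuity (hole) at x = -2
    return (x + 2) / (x**2 + 3*x + 2)

# A point extremely close to the hole
x = Fraction(-2) + Fraction(1, 10**12)

exact = f(x)          # exact rational arithmetic
approx = f(float(x))  # the same computation with floats

# Away from the hole, f(x) simplifies to 1/(x + 1), so the exact value is
# -10^12 / (10^12 - 1), i.e. just below -1; the float value is polluted by
# cancellation in both the numerator and the denominator.
print(exact, approx)
```

The exact value stays on the curve no matter how close to the hole we evaluate, which is exactly why Robert's QQ-based version plots cleanly.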

## November 23, 2013

### Vince Knight

#### When Sage also fails

+Patrick Honner wrote a post titled 'When Desmos Fails', which you should go read. In it he shows a quirk of Desmos (a free online graphing calculator), which seems unable to correctly graph around the removable discontinuity at $(-2,-1)$ of the following function:

$$f(x)=\frac{x+2}{x^2+3x+2}$$

People who understand this better than me say it might have something to do with how JavaScript handles floats...

Anyway, I thought I'd see how +Sage Mathematical Software System could handle this. Here's a Sage cell with an interact that allows you to zoom in on the point (click on evaluate and it should run; it doesn't seem to fit too nicely embedded in my blog so here's a link to a standalone Sage cell: http://goo.gl/WtezZ4):

It looks like Sage doesn't have the same issues as Desmos does. This is probably not a fair comparison, and needed a bit more work than Desmos (which I can't say I've used a lot) to get running but I thought it was worth taking a look at :)

EDIT: If you zoom in more you do get the same behaviour as Desmos! I thought I had zoomed in to the same level as +Patrick Honner did but perhaps I misjudged from his picture :)

Here's the same thing in Sage (when setting $z=10^7$ in the above code):

## November 16, 2013

### Vince Knight

#### Plotting complex numbers in Sage

I had a student email me overnight asking how to plot complex numbers in +Sage Mathematical Software System.

I spent a couple of minutes googling and found various commands that would plot complex functions:

f = sqrt(x) + 1 / x
complex_plot(sqrt, (-5, 5), (-5, 5))

This gives the following plot:

This was however not what my student was asking. They wanted to know how to plot a given set of points in the complex plane (also referred to as the Argand plane). A quick google to check if there was anything in Sage pre-built for this brought me to this published sheet by +Jason Grout.

I tweaked it slightly so that it was in line with the commands my students have learnt so far, added axis labels, and put the following into a function:

def complex_point_plot(pts):
    """
    Return a plot of a list of complex points.
    Arguments: pts (a list of complex numbers)
    Outputs: a list plot of the complex numbers in the Argand plane
    """
    return list_plot([(real(i), imag(i)) for i in pts], axes_labels=['Re($z$)', 'Im($z$)'], size=30)

This function simply returns a plot as required. Here is a small test with the output:

complex_point_plot([3*I, e^(I*pi), e^(I*3*pi/4), 4-4*I])

Here is some code that will plot the unit circle using the $z=e^{i\theta}$ notation for complex numbers (and the Sage srange command):

pts = [e^(I*(theta)) for theta in srange(0, 2*pi, .1)]
complex_point_plot(pts)

Here is the output:

I published all this in a worksheet on our server so it's now immediately available to all our students. I'm really enjoying teaching +Sage Mathematical Software System to our students.

A Sage cell with the above code (that you can run in your browser) can be found here: http://goo.gl/jipzxV
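The same Re/Im bookkeeping can be sanity-checked outside Sage with Python's standard cmath module. This is just a sketch of the coordinate extraction (the helper mirrors the function above but only returns the pairs, without plotting):

```python
import cmath

def complex_point_coords(pts):
    # The (Re, Im) pairs that complex_point_plot feeds to list_plot
    return [(z.real, z.imag) for z in pts]

# Points z = e^(i*theta) at theta = 0, pi/2, pi
pts = [cmath.exp(1j * theta) for theta in (0, cmath.pi / 2, cmath.pi)]
coords = complex_point_coords(pts)
# Up to float rounding these are (1, 0), (0, 1) and (-1, 0)
```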

EDIT: Since posting this +Punarbasu Purkayastha pointed out on G+ that list_plot can handle complex points right out of the box :) So the above can just be obtained by typing:

pts = [e^(I*(theta)) for theta in srange(0, 2*pi, .1)]
list_plot(pts, axes_labels=['Re($z$)','Im($z$)'], size=30)

Learn something new every day...

## November 07, 2013

### Harald Schilly

#### When will Sagemath Cloud hit 100,000 user accounts?

The Sagemath Cloud is an online environment for computational mathematics. Get an account, log in, and your web-browser transforms into almost everything you need to study algebra, calculus, numerics, statistics, number and game theory, (computational aspects of) physics, chemistry and all other quantitative sciences. It's built around all sorts of tools and utilities offered in a proper Linux environment. All files can be shared with collaborators and edited in real-time without stepping on each other's toes. Oh wait, there is also a LaTeX editor: given you know LaTeX, qualitative sciences are also covered ;-)

But beware, this posting will only talk about the very first: accounts. Below, I'll explain how I managed to create this page, the plot below and how I let it update automatically every hour - all done solely by SMC!

 SMC accounts over time

Sagemath Cloud is very public about what is happening on its servers. This stats link gives you the raw data for the current overall load of the machines. What I did over the last weeks was the following: Every hour I've downloaded this stats json file, parsed it, accumulated some interesting numbers, and stored the processed data in a CSV file.

I did all this in SMC, because it allows you to run your own crontab files. Cron periodically goes through all kinds of files to figure out if there is a job to do. Just enter a magical line in your own crontab file, and the given command is run whenever you tell it to do so. You do not have to be logged in!

In my case, it's like this:
First, edit your crontab file:
$ crontab -e

Then enter this line:

0 * * * * python $HOME/get.py

The $HOME is important, because you have to specify the full path where your script sits. (By the way, if you have to start something when the SMC server reboots, use the @reboot descriptor.)

So, what is get.py doing? It uses the wonderful requests Python library to retrieve and parse the stats url and extracts some data. Then it appends a line to a CSV file. Two minutes later (the crontab line starts with a "2"), another script is called which processes this CSV file. It reads the columns via pandas, properly parses the dates into time-series, and allows me to do all sorts of analysis, transformations and plots with it. For example, the plot shown above overlays the raw time series with OLS fits done by statsmodels for selected time ranges (where it looks "flat"). We can see the growth trends clearly! Even more important, so far the growth keeps increasing and hence we are watching the exponential growth phase at the beginning of the usual logistic growth.

The other plots on this statistics page show aggregated time-series data. For example, the plot of the number of concurrent connections is also increasing and reassures us that SMC is scaling well.

Concurrent connections to SMC

On this statistics page, there are also a few "dynamic" fields in the HTML content. This is done by jinja2 in such a way that the template "stats.tmpl" contains the HTML code and "mustache"-style variables. Jinja2 renders this template with some variables and that's it.

import jinja2 as j2
env = j2.Environment(loader=j2.FileSystemLoader("."))
stats = env.get_template("stats.tmpl")
data = {
    'date': "%s UTC" % datetime.utcnow(),
    'recent_data': totals.ix[-24:].to_html()
}
with open("stats.html", "wb") as output:
    output.write(stats.render(**data))

The last step of the script is to actually publish the files to the webserver. That's rather straightforward.
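The post doesn't show get.py itself, but a minimal sketch of the idea looks like this in plain Python. The field names and the sample payload are made up for illustration; the real script fetches the JSON over HTTP with requests rather than parsing a string:

```python
import csv
import json
from datetime import datetime, timezone

# In the real get.py the JSON comes from the SMC stats URL (e.g. via requests);
# here we parse a canned sample with hypothetical field names instead.
sample = '{"accounts": 9500, "concurrent_connections": 120}'

def append_stats(json_text, csv_path):
    # Extract the interesting numbers and append one timestamped row to a CSV file
    stats = json.loads(json_text)
    row = [datetime.now(timezone.utc).isoformat(),
           stats["accounts"], stats["concurrent_connections"]]
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow(row)
    return row

row = append_stats(sample, "stats.csv")
```

Appending one row per cron run like this gives exactly the kind of growing CSV that pandas can later parse into a time-series.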
First and only once, create ssh keys via ssh-keygen without a password and then use ssh-copy-id -i ~/.ssh/id_dsa.pub name@server to copy over your keys. Subsequent ssh connections will be established without any questions asked, because the remote server knows your identity. I'm using scp to copy the files:

scp *.png stats.html name@remote-server:~/target/dir/

Last but not least, when will SMC hit the 100,000 user mark? In less than 6 days the 10,000 mark should be crossed, and I hope the trend continues in the upward direction.

## October 23, 2013

### Vince Knight

#### Pigeon holes, Markov chains and Sagemath

On 16/10/2013 I posted the following picture on G+:

Here's what I wrote on that post:

For a while now there's been a 'game' going on with our pigeon holes where people would put random objects in other people's pigeon holes (like the water bottle you see in the picture). These objects would then follow a random walk around the pigeon holes as each individual would find an object in their pigeon hole and absent-mindedly move it to someone else's pigeon hole. As such each pigeon hole could be thought of as being a transient state in a Markov chain (http://en.wikipedia.org/wiki/Markov_chain). What is really awesome is that one of the PhD students here didn't seem to care when these random objects appeared in her pigeon hole. Her pigeon hole was in fact an absorbing state. This has now resulted in more or less all random objects (including a wedding photo that no one really knows the origin of) ending up in her pigeon hole.

I thought I'd have a go at modelling this as an actual Markov chain. Here's a good video by a research student of mine (+Jason Young) describing the very basics of a Markov chain:

To model the movement of an object as a Markov chain we first of all need to describe the states. In our case this is pretty easy and we simply number our pigeon holes and refer to them as states.
In my example I've decided to model a situation with 12 pigeon holes. What we now need is a set of transition probabilities which model the random behaviour of people finding an object in their pigeon hole and absent-mindedly moving it to another pigeon hole. This will be in the form of a matrix $P$, where $P_{ij}$ denotes the probability of going from state $i$ to state $j$.

I could sit in our photocopier room (that's where our pigeon holes are) and take notes as to where the individual who owns pigeon hole $i$ places the various objects that appear in their pigeon hole... That would take a lot of time and sadly I don't have any time. So instead I'm going to use +Sage Mathematical Software System. The following code gives a random matrix:

N = 12
P = random_matrix(QQ, N, N)

This is just a random matrix over $\mathbb{Q}$ so we need to do a tiny bit of work to make it a stochastic matrix:

P = [[abs(k) for k in row] for row in P]  # This ensures all our numbers are positive
P = matrix([[k / sum(row) for k in row] for row in P])  # This ensures that our rows all sum to 1

The definition of a stochastic matrix is any matrix $P$ such that:

- $P$ is square
- $P_{ij}\geq 0$ (all probabilities are non negative)
- $\sum_{j}P_{ij}=1\;\forall\;i$ (when leaving state $i$ the probabilities of going to all other states must sum to 1)

Recall that our matrix is pretty big (12 by 12) so the easiest way to visualise it is through a heat map:

P.plot(cmap='hsv', colorbar=True)

Here's what a plot of our matrix looks like (I created a bunch of random matrix gifs here):

We can find the steady state probability of a given object being in any given state using a very neat result (which is not actually that hard to prove).
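(As an aside, the two normalisation steps above can be mimicked in plain Python with the standard-library fractions module; this is a sketch standing in for Sage's QQ, and the re-draw helper for avoiding an all-zero row is my own addition.)

```python
from fractions import Fraction
import random

random.seed(0)
N = 4  # smaller than the 12 pigeon holes, for readability

def random_row(n):
    # Random rationals, possibly negative; re-draw in the (unlikely) all-zero case
    while True:
        row = [Fraction(random.randint(-9, 9), random.randint(1, 9)) for _ in range(n)]
        if any(row):
            return row

P = [random_row(N) for _ in range(N)]

# Same two steps as the Sage code above
P = [[abs(k) for k in row] for row in P]        # make all entries non-negative
P = [[k / sum(row) for k in row] for row in P]  # make each row sum to 1

# Exact rationals, so each row sums to exactly 1
print(all(sum(row) == 1 for row in P))
```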
This probability vector $\pi$ (where $\pi_i$ denotes the probability of being in state $i$) will be a solution of the matrix equation:

$$\pi P = \pi$$

To solve this equation it can be shown that we simply need to find the eigenvector of $P$ corresponding to the unit eigenvalue:

eigen = P.eigenvectors_left()  # This finds the eigenvalues and eigenvectors

To normalise our eigenvector we can do this:

pi = [k[1][0] for k in eigen if k[0] == 1][0]  # Find eigenvector corresponding to unit eigenvalue
pi = [k / sum(pi) for k in pi]  # Normalise eigenvector

Here's a bar plot of our probability vector:

bar_chart(pi)

## October 19, 2013

### William Stein

#### Jason Grout's description of the Sagemath Cloud

Jason Grout's description of the Sagemath Cloud:

William Stein, the lead developer of Sage, has been developing a new online interface to Sage, the Sage Cloud at https://cloud.sagemath.com. Currently in beta status, it is already a powerful computation and collaboration tool. Work is organized into projects which can be shared with others. Inside a project, you can create any number of files, folders, Sage worksheets, LaTeX documents, code libraries, and other resources. Real-time collaborative editing allows multiple people to edit and chat about the same document simultaneously over the web. The LaTeX editor features near real-time preview, forward and reverse search, and real-time collaboration. Also, it is easy to have Sage do computations or draw figures and have those automatically embedded into a LaTeX document using the SageTeX package (for example, after including the sagetex package, typing \sageplot{plot(sin(x))} in a TeX document inserts the plot of sin(x)). A complete Linux terminal is also available from the browser to work within the project directory. Snapshots are automatically saved and backed up every minute to ensure work is never lost. William is rapidly adding new features, often within days of a user requesting them.
## October 12, 2013

### William Stein

#### "A Symphony of Cursors" (guest post by Jason Grout)

Today's post is from guest blogger Jason Grout, lead developer of the Sage Cell Server.

The other day some students and I met to do some development on the Sage cell server. We each opened up our shared project on cloud.sagemath.com on our own laptops, and started going through the code. We had a specific objective. The session went something like this:

Jason: Okay, here's the function that we need to modify. We need to change this line to do X, and we need to change this other line to do Y. We also need to write this extra function and put it here, and change this other line to do Z.

James: Can you do X?

David: Can you look up somewhere on the net how to do Y and write that extra function? I'll do Z.

Then in a matter of minutes, cursors scattering out to the different parts of the code, we had the necessary changes written. I restarted the development sage cell server running inside the cloud account and we were each able to test the changes. We realized a few more things needed to be changed, we divided up the work, and in a few more minutes each had made the necessary changes.

It was amazing: watching all of the cursors scatter out into the code, each person playing a part to make the vision come true, and then quickly coming back together to regroup, reassess, and test the final complete whole. Forgive me for waxing poetic, but it was like a symphony of cursors, each playing their own tune in their lines of the code file, weaving together a beautiful harmony. This fluid syncing William wrote takes distributed development to a new level. Thanks!

## October 10, 2013

### Martin Albrecht

#### libFES: Fast Exhaustive Search for Polynomial Systems over F2

Charles Bouillaguet set up a nice shiny website for libFES, the library for exhaustive search on polynomial systems over $\mathbb{F}_2$. The library has a Sage interface, so it’s easy to get started.
It’s also integrated in Charles’ upcoming one-stop boolean system solving patch. He calls his benchmarketing “bragging rights” … and boy has he earned those […]

### Doxdrum

#### Installation of SageManifold

Hello again! If you are looking for a Differential Geometry tool, a Sage package which is under development is SageManifold. Let’s see how to install it. Download the package using the link (currently v.0.2). I’d assume it is saved in your Downloads folder. Assuming you have SAGE installed, and you have created an alias to […]

## October 04, 2013

### William Stein

#### Backing up the Sagemath Cloud

The terms of usage of the Sagemath Cloud say "This free service is not guaranteed to have any uptime or backups." That said, I do actually care a huge amount about backing up the data stored there, and ensuring that you don't lose your work.

## Bup

I spent a lot of time building a snapshot system for user projects on top of bup. Bup is a highly efficient de-duplicating compressed backup system built on top of git; unlike other approaches, you can store arbitrary data, huge files, etc. I looked at many open source options for making efficient de-duplicated distributed snapshots, and I think bup is overall the best, especially because the source code is readable. Right now https://cloud.sagemath.com makes several thousand bup snapshots every day, and it has practically saved people many, many hours in potentially lost work (due to them accidentally deleting or corrupting files). You can access these snapshots by clicking on the camera icon on the right side of the file listing page.

## Some lessons learned when implementing the snapshot system

- Avoid creating a large number of branches/commits -- creating an almost-empty repo, but with say 500 branches, even with very little in them, makes things painfully slow, e.g., due to an enormous number of separate calls to git.
- When users interactively get directory listings, it should take at most about 1 second to get a listing, or they will be annoyed. I made some possibly-hackish optimizations -- mainly caching -- to offset this issue, which are here in case anyone is interested: https://github.com/williamstein/bup (I think they are too hackish to be included in bup, but anybody is welcome to them.)
- Run a regular test of how long it takes to access the file listing in the latest commit, and if it gets above a threshold, create a new bup repo. So in fact the bup backup daemons really manage a sequence of bup repos. There are a bunch of these daemons running on different computers, and it was critical to implement locking, since in my experience bad things happen if you try to back up an account using two different bups at the same time. Right now, typically a bup repo will have about 2000 commits before I switch to another one.
- When starting a commit, I wrote code to save information about the current state, so that everything could be rolled back in case an error occurs, due to files moving, network issues, the snapshot being massive due to a nefarious user, power loss, etc. This was critical to avoid the bup repo getting corrupted, and hence broken.
- In the end, I stopped using branches, due to complexity and inefficiency, and just make all the commits in the same branch. I keep track of what is what in a separate database. Also, when making a snapshot, I record the changed files (as output by the command mentioned above) in the database with the commit, since this information can be really useful, and is impossible to get out of my backups, due to using a single branch, the bup archives being on multiple computers, and also there being multiple bup archives on each computer.

NOTE: I've been recording this information for cloud.sagemath for months; it is not yet exposed in the user interface, but will be soon.
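The locking point generalises beyond bup. Here is a small sketch of one common Unix approach, an exclusive flock on a lock file; this is my illustration rather than the mechanism the SMC daemons actually use, and the lock path is hypothetical:

```python
import fcntl
import os

def try_lock(path):
    """Try to take an exclusive, non-blocking lock; return the fd, or None if held."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except OSError:
        os.close(fd)
        return None

lock_path = "/tmp/project-backup.lock"  # hypothetical per-project lock file
first = try_lock(lock_path)   # this "daemon" gets the lock
second = try_lock(lock_path)  # a second one is refused while the first holds it
```

Because flock is tied to the open file description, a second opener is refused even on the same machine, which is exactly the "never run two backups of one account at once" guarantee described above.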
## Availability

The snapshots are distributed around the Sagemath Cloud cluster, so failure of single machines doesn't mean that backups become unavailable. I also have scripts that automatically rsync all of the snapshot repositories to machines in other locations, and keep offsite copies as well. It is thus unlikely that any file you create in cloud.sagemath could just get lost. For better or worse, it is also impossible to permanently delete anything. Given the target audience of mathematicians and math students, and the terms of usage, I hope this is reasonable.

## September 22, 2013

### Verónica Suaste Morales

#### GSoC 15th - 21st September

Week for documentation. Decoding functions for linear codes were moved to decoder.py in sage.coding. The method 'decode' from linear_code.py was modified so that the new decoding algorithms are now supported. Examples and documentation were added to each function.

https://bitbucket.org/vsuaste/coding-theory/commits/21fdf54936c34a51434ae8c93af9a708987c8820
https://bitbucket.org/vsuaste/coding-theory/commits/b710a5169e67221344f11956c0226d1a1fc97bd7
https://bitbucket.org/vsuaste/coding-theory/commits/827bf6bdb252a5771129c62d96a5668bf4a90d18
https://bitbucket.org/vsuaste/coding-theory/commits/f714bc2d14a18c02d80a3add36742d39a2d6fa30

Also I'm still working on a function that could be added, about minimal support words of the code. Here some explanation: Theory ....

Getting ready the final patch with alphabetically ordered functions.

## September 20, 2013

### Vince Knight

#### Revisiting examples of computer assisted mathematics

I'm in the middle of finishing off some last things for a brand new course we're teaching at +Cardiff University starting in October. I plan on using my first lecture to explain to our new students how important computing/programming/coding is for modern mathematicians. I will certainly be talking about the 4 colour theorem, which states that the regions of any planar map can be coloured with only four colours so that no two adjacent regions share a colour.
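The theorem itself is hard, but a colouring demo is not. Here is a sketch of a greedy colouring in plain Python (not the proof technique, and in general greedy colouring can need more than four colours), roughly the kind of thing a short classroom demo shows:

```python
def greedy_colouring(adjacency):
    # Give each vertex the smallest colour not already used by a coloured neighbour
    colours = {}
    for v in sorted(adjacency):
        used = {colours[u] for u in adjacency[v] if u in colours}
        colours[v] = next(c for c in range(len(adjacency)) if c not in used)
    return colours

# A small planar graph: a 4-cycle 1-2-3-4 plus a hub vertex 0 joined to everything
graph = {0: [1, 2, 3, 4], 1: [0, 2, 4], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 1, 3]}
colours = greedy_colouring(graph)

# A proper colouring: no edge joins two vertices of the same colour
print(all(colours[u] != colours[v] for u in graph for v in graph[u]))
```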
I'll probably demo a bit of what +Sage Mathematical Software System can do. Here's a Sage cell that will demo some of this (click evaluate and you should see the output; feel free to then play around with the code). There I've written some very basic code to show a colouring of a graph on 9 vertices. I expect that students might find that interesting, in particular if I show colourings for non planar graphs. For example here's another cell showing the procedure on the complete graph on 9 vertices:

That is just a couple of 'quirky' things that don't really go near the complexities of the proof of the 4 colour theorem. I took to social media in the hope of asking for more examples of cool things that mathematicians use computers for. Here's a link to the blog post, but without a doubt the most responses I got was on this G+ post. You can see all the responses on that post but I thought I'd try to compile a list and quick description of some of the suggestions that caught my eye:

- This wiki was pointed out by +Kevin Clift; it contains a large number of great animations (like the one below) made by 'Keiff': 'This is the curse of computing: giving up understanding for an easy verification.'
- +Joerg Fliege mentioned chess endgame tablebases, which I think would be a cool thing to point out to students.
- +David Ketcheson, +Dima Pasechnik and others pointed out how computer algebra systems are more or less an everyday tool for mathematicians nowadays. When I'm finding my way through a new project I almost always have a Sage terminal open to try out various algebraic relationships as I go...

There are a couple of other things that I'm not listing above (mainly because I don't know enough about them to be able to comment), but interestingly enough +Timothy Gowers posted the other day a link to a paper that he has co-authored entitled: 'A fully automatic problem solver with human-style output'.
In that paper a new program is described that is able to produce human-style proofs of theorems. A blog post that +Timothy Gowers put up a while back was actually an experiment for this paper.

I'm obviously missing a large amount of other stuff so please do let me know :)

## September 14, 2013

### Vince Knight

#### For no reason whatsoever: animated gifs of random matrices

Here are some animated gifs of random matrices done in Sage:

Here's a smaller 10 by 10 matrix (the above are 500 by 500) which probably needs to come with a health warning:

The code to do this using Sage is pretty easy (it makes use of the plot method on matrices):

import os

size = 500
nbrofmatrices = 100
for i in range(nbrofmatrices):
    print "Plotting matrix: %i of %s" % (i + 1, nbrofmatrices)
    A = random_matrix(RR, size)
    p = A.plot(cmap='hsv')
    p.save('./plots/%.3d.png' % i)
print "Converting plots to gif"
os.system("convert -loop 0 ./plots/*png %sanimatedmatricesofsize%s.gif" % (nbrofmatrices, size))

Each of the three gifs above was made using a different colour map (i.e. changing the cmap option). This creates 100 random 500 by 500 matrices and uses imagemagick (that's a link to a blog post I wrote about creating these) to create an animated gif.

## September 13, 2013

### William Stein

#### IPython Notebooks in the Cloud with Realtime Synchronization and Support for Collaborators

I spent the last two weeks implementing hosted IPython notebooks with sync for https://cloud.sagemath.com. Initially I had just planned to simplify the port forwarding setup, since using multiple forward and reverse port forwards seemed complicated. But then I became concerned about multiple users (or users with multiple browsers) overwriting each other's notebooks; this is a real possibility, since projects are frequently shared between multiple people, and everything else does realtime sync.
I had planned just to add some very minimal merge-on-save functionality to avoid major issues, but somehow got sucked into implementing full realtime sync (even with the other person's cursor showing).

## Here's how to try it out

- Go to https://cloud.sagemath.com and make an account; this is a free service hosted on computers at the University of Washington.
- Create a new project.
- Click +New, then click "IPython"; alternatively, paste in a link to an IPython notebook (e.g., anything here http://nbviewer.ipython.org/ -- you might need to get the actual link to the ipynb file itself!), or upload a file.
- An IPython notebook server will start, the given .ipynb file should load in a same-domain iframe, and then some of the IPython notebook code and iframe contents are monkey patched, in order to support sync and better integration with https://cloud.sagemath.com.
- Open the ipynb file in multiple browsers, and see that changes in one appear in the other, including moving cells around, creating new cells, editing markdown (the rendered version appears elsewhere), etc.

Since this is all very new and (I guess) the first realtime sync implementation on top of IPython, there are probably a lot of issues. Note that if you click the "i" info button to the right, you'll get a link to the standard IPython notebook server dashboard.

## IPython development

Regarding the monkey patching mentioned above, the right thing to do would be to explain exactly what hooks/changes in the IPython html client I need in order to do sync, etc., make sure these make sense to the IPython devs, and send a pull request.
As an example, in order to do sync efficiently, I have to be able to set a given cell from JSON -- it's critical to do this in place when possible, since the overhead of creating a new cell is huge (probably due to the overhead of creating CodeMirror editors); however, the fromJSON method in IPython assumes that the cell is brand new -- it would be nice to add an option to make a cell fromJSON without assuming it is empty. The ultimate outcome of this could be a clean well-defined way of doing sync for IPython notebooks using any third-party sync implementation. IPython might provide their own sync service and there are starting to be others available these days -- e.g., Google has one, and maybe Guido van Rossum helped write one for Dropbox recently?

## How it works

Earlier this year, I implemented Neil Fraser's differential synchronization algorithm, since I needed it for file and Sage worksheet editing in https://cloud.sagemath.com. There are many approaches to realtime synchronization, and Fraser makes a good argument for his. For example, Google Wave involved a different approach (Operational Transforms), whereas Google Drive/Docs uses Fraser's approach (and code -- he works at Google), and you can see which succeeded.

The main idea of his approach is an eventually stable iterative process that involves heuristically making and applying patches on a "best effort" basis; it allows all live versions of the document to be modified simultaneously -- the only locking is during the moment when a patch is applied to the live document. He also explains how to handle packet loss gracefully.

I did a complete implementation from scratch (except for using the beautiful Google diff/match/patch library). There might be a Python implementation of the algorithm as part of mobwrite.

The hardest part of this project was using Fraser's algorithm, which is designed for unstructured text documents, to deal with IPython's notebook format, which is a structured JSON document.
I ended up defining another less structured format for IPython notebooks, which gets used purely for synchronization and nothing else. It's a plain text file whose first line is a JSON object giving metainformation; all other lines correspond, in order, to the JSON for individual cells. When patching, it is in theory possible in edge cases involving conflicts to destroy the JSON structure -- if this happens, the destruction is isolated to a single cell, and that part of the patch just gets rejected.

The IPython notebook is embedded as an iframe in the main https://cloud.sagemath.com page, but with exactly the same domain, so the main page has full access to the DOM and Javascript of the iframe. Here's what happens when a user makes changes to a synchronized IPython notebook (and at least 1 second has elapsed):

- The outer page notices that the notebook's dirty flag is set, which could involve anything from typing a character, to deleting a bunch of cells, to output appearing.
- It computes the JSON representation of the notebook, and from that the document representation (with 1 line per cell) described above. This takes a couple of milliseconds, even for large documents, due to caching.
- The document representation of the notebook gets synchronized with the version stored on the server that the client connected with. (This server is one of many node.js programs that handle many clients at once, and in turn synchronize with another server that is running in the VM where the IPython notebook server is running. The sync architecture itself is complicated and distributed, and I haven't described it publicly yet.)
- In the previous step, we in fact get a patch that we apply -- in a single atomic operation (so the user is blocked for a few milliseconds) -- to our document representation of the notebook in the iframe. If there are any changes, the outer page modifies the iframe's notebook in place to match the document.
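The 'one line per cell' representation is simple to sketch in plain Python; the cell fields below are simplified stand-ins for the real IPython notebook JSON:

```python
import json

notebook = {  # simplified stand-in for a real .ipynb structure
    "metadata": {"name": "demo"},
    "cells": [
        {"cell_type": "code", "source": "1 + 1"},
        {"cell_type": "markdown", "source": "# A heading"},
    ],
}

def to_sync_doc(nb):
    # First line: JSON metainformation; each later line: the JSON of one cell
    lines = [json.dumps(nb["metadata"])]
    lines += [json.dumps(cell) for cell in nb["cells"]]
    return "\n".join(lines)

def from_sync_doc(doc):
    first, *rest = doc.split("\n")
    return {"metadata": json.loads(first),
            "cells": [json.loads(line) for line in rest]}

doc = to_sync_doc(notebook)
print(from_sync_doc(doc) == notebook)  # the format round-trips losslessly
```

Because each cell lives on its own line, a bad patch can at worst corrupt one line's JSON, which is exactly the per-cell isolation described above.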
My first implementation of this update used IPython's notebook.fromJSON, which could easily take 5 seconds (!!) or more on some of the online IPython notebook samples. I spent about two days just optimizing this step. The main ideas are:

1. Map each of the lines of the current document and the new document to a unicode character,
2. Use diff-match-patch to find an efficient sequence of deletions, insertions, and swaps that transforms one document into the other (i.e., swapping cells, moving cells, etc.) -- this is critical to do,
3. Change cells in place when possible.

With these tricks (and more can be done), modifying the notebook in place takes only a few milliseconds in most cases, so you don't notice this as you're typing.

• Send a broadcast message about the position of your cursor, so the other clients can draw it. (Symmetrically, render the cursor on receiving a broadcast message.)

## September 12, 2013

### Eviatar Bach

#### Status of special functions in Sage

Hello,

Sorry for not posting status updates in a while, but much of what I've been working on would not be interesting to a general audience of Sage users. The Digital Library of Mathematical Functions has a Software Index, which lists the software that implements certain mathematical functions. For Sage, that list is extremely out of date. Despite having sent an email to the DLMF with updates (the editor has confirmed that the table will be updated in the next release of the DLMF), I still think it's valuable to give a more detailed outline of the status of special functions in Sage so that gaps can be filled (especially the blue and violet entries, which have patches available!). Sorry about the excessive colour; I wanted to make it easy to discern the categories.
Legend:

- Green: available in Sage
- Blue: a patch implementing it is available
- Yellow: partially available
- Violet: available, and a patch with improvements exists
- Orange: implemented in mpmath but not in Sage
- Pink: not available in either mpmath or Sage

4 Elementary Functions

5 Gamma Function

- §5.24(ii) $\Gamma(x)$, $x\in\mathbb{R}$ ✓
- §5.24(iii) $\psi(x)$, $\psi^{(n)}(x)$, $x\in\mathbb{R}$ ✓
- §5.24(iv) $\Gamma(z)$, $\psi(z)$, $\psi^{(n)}(z)$, $z\in\mathbb{C}$ ✓
- §5.24(v) $\mathrm{B}(a,b)$, $a,b\in\mathbb{R}$ ✓
- §5.24(vi) $\mathrm{B}(a,b)$, $a,b\in\mathbb{C}$ -- http://trac.sagemath.org/ticket/12521 would fix this, but a better solution (both for speed and precision) would be to use mpmath. ✓
- §7.25(ii) $\operatorname{erf} x$, $\operatorname{erfc} x$, $\mathrm{i}^{n}\operatorname{erfc}(x)$, $x\in\mathbb{R}$ ✓
- §7.25(iii) $\operatorname{erf} z$, $\operatorname{erfc} z$, $z\in\mathbb{C}$ -- erfc is not yet implemented for complex numbers.
- §7.25(iv) $C(x)$, $S(x)$, $\mathrm{f}(x)$, $\mathrm{g}(x)$, $x\in\mathbb{R}$
- §7.25(v) $C(z)$, $S(z)$, $z\in\mathbb{C}$
- §7.25(vi) $\mathcal{F}(x)$, $G(x)$, $\mathsf{U}(x,t)$, $\mathsf{V}(x,t)$, $x\in\mathbb{R}$
- §7.25(vii) $\mathcal{F}(z)$, $G(z)$, $z\in\mathbb{C}$
- §8.28(ii) Incomplete Gamma Functions for Real Argument and Parameter ✓
- §8.28(iii) Incomplete Gamma Functions for Complex Argument and Parameter ✓
- §8.28(v) Incomplete Beta Functions for Complex Argument and Parameters
- §8.28(vi) Generalized Exponential Integral for Real Argument and Integer Parameter ✓
- §8.28(vii) Generalized Exponential Integral for Complex Argument and Parameter ✓
- §9.20(ii) $\operatorname{Ai}(x)$, $\operatorname{Ai}'(x)$, $\operatorname{Bi}(x)$, $\operatorname{Bi}'(x)$, $x\in\mathbb{R}$ ✓
- §9.20(iii) $\operatorname{Ai}(z)$, $\operatorname{Ai}'(z)$, $\operatorname{Bi}(z)$, $\operatorname{Bi}'(z)$, $z\in\mathbb{C}$ -- see http://trac.sagemath.org/ticket/12455
- §9.20(iv) Real and Complex Zeros
- §9.20(v) Integrals of $\operatorname{Ai}(x)$, $\operatorname{Bi}(x)$, $x\in\mathbb{R}$ -- see http://trac.sagemath.org/ticket/12455
- §9.20(vi) Scorer Functions
- §10.77(ii) Bessel Functions–Real Argument and Integer or Half-Integer Order (including Spherical Bessel Functions) ✓
- §10.77(iii) Bessel Functions–Real Order and Argument ✓
- §10.77(iv) Bessel Functions–Integer or Half-Integer Order and Complex Arguments,
including Kelvin Functions -- Kelvin functions are not implemented; they are in mpmath, however.
- §10.77(v) Bessel Functions–Real Order and Complex Argument (including Hankel Functions) -- see http://trac.sagemath.org/ticket/15024
- §10.77(viii) Bessel Functions–Complex Order and Argument ✓
- §10.77(ix) Integrals of Bessel Functions
- §10.77(x) Zeros of Bessel Functions
- §11.16(ii) Struve Functions
- §11.16(iii) Integrals of Struve Functions
- §11.16(iv) Lommel Functions
- §11.16(v) Anger and Weber Functions
- §11.16(vi) Integrals of Anger and Weber Functions
- §13.32(ii) Real Argument and Parameters -- see http://trac.sagemath.org/ticket/14896
- §13.32(iii) Complex Argument and/or Parameters -- see http://trac.sagemath.org/ticket/14896
- §14.34(ii) Legendre Functions: Real Argument and Parameters
- §14.34(iii) Legendre Functions: Complex Argument and/or Parameters
- §15.20(ii) Real Parameters and Argument -- see http://trac.sagemath.org/ticket/2516
- §15.20(iii) Complex Parameters and Argument -- see http://trac.sagemath.org/ticket/2516
- §16.27(ii) Real Argument and Parameters -- see http://trac.sagemath.org/ticket/2516
- §16.27(iii) Complex Argument and/or Parameters -- see http://trac.sagemath.org/ticket/2516 ✓
- §19.39(ii) Legendre’s and Bulirsch’s Complete Integrals -- see http://trac.sagemath.org/ticket/15046
- §19.39(iii) Legendre’s and Bulirsch’s Incomplete Integrals -- see http://trac.sagemath.org/ticket/15046
- §19.39(iv) Symmetric Integrals -- see http://trac.sagemath.org/ticket/14996
- Euler polynomials are not implemented.
- §25.21(ii) Zeta Functions for Real Arguments ✓
- §25.21(iii) Zeta Functions for Complex Arguments ✓
- §25.21(iv) Hurwitz Zeta Function -- see http://trac.sagemath.org/ticket/15095
- §25.21(v) Dilogarithms, Polylogarithms ✓
- §25.21(vi) Clausen’s Integral
- §25.21(vii) Fermi–Dirac and Bose–Einstein Integrals
- §25.21(viii) Lerch’s Transcendent
- §25.21(ix) Dirichlet L-series ✓

## September 09, 2013

### Vince Knight

#### Handling data files in a Sage notebook (and some linear regression in Sage)

One of my most viewed videos on +YouTube is the following (briefly demonstrating how to import csv files into +python):

I'm in the middle of preparing various teaching materials for an upcoming class on coding for mathematicians. I'm teaching this class in a flipped classroom (if you don't know what that is, circle +Robert Talbert on G+, who posts a lot of great stuff about it) and as a result I've been screencasting a lot recently. Some of these clips are solely intended for my students (as I don't believe they'd be of interest to anyone else: 'to do this exercise try and think about what your base case would be for the recursion to terminate'). I'm just starting to screencast for the +Sage Mathematical Software System part of the course and needed to put together a little something on how to import data into a Sage notebook. As the above video seemed quite helpful to people I thought I'd put together another one that might be helpful. The data file used can be found here.
Here are the lines of code used in the notebook:

```python
import csv
f = open(DATA + 'reg', 'r')  # Open the data file using the special DATA variable
data = csv.reader(f)
data = [row for row in data]  # Read data using the csv library
f.close()
data = [[row[2], row[1]] for row in data[1:]]  # Only keep the columns of interest; drop the header row
a, b = var('a, b')  # Declare symbolic variables
model(t) = a * t + b  # Define the model
fit = find_fit(data, model, solution_dict=True)  # Fit the model to the data
model.subs(fit)  # View the model
p = plot(model.subs(fit), 5, 11, color='red')  # Plot the fit
p += list_plot(data)  # Plot the data
p
```

#### Examples of computer assisted mathematics needed

This blog post is a very slight expansion of this G+ post where I've asked people for examples of computer-assisted mathematics. I'm posting here in the hope of maximum exposure. I'm in the middle of preparing a new course that will teach +Python and +Sage Mathematical Software System to our first year undergraduates at Cardiff University. As such I'd appreciate as many great examples as I can get, so please do let me know of anything :) I use the term mathematics in a loose sense to include 'things' that are not necessarily proofs:

- identifying conjectures;
- visualisations;
- etc.

I'm familiar with the 'well known' results that can be found on this +Wikipedia page: http://goo.gl/EjyKel I can also recommend the book 'A=B' by Petkovsek, Wilf and Zeilberger, which you can download here: http://www.math.upenn.edu/~wilf/AeqB. The search for prime numbers is also a pretty cool thing that I can talk about. Even examples of things that have an obvious need for computers would be great to hear about. For example, I'm quite familiar with discrete event simulation techniques that I'll be mentioning to the students.
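As a small illustration of the kind of discrete event simulation just mentioned (this is a sketch in plain Python, not the code from any of the videos), a single-server M/M/1 queue can be simulated in a few lines:

```python
import random

def mm1_mean_wait(arrival_rate, service_rate, customers, seed=1):
    """Simulate an M/M/1 queue and return the mean time customers spend
    waiting for service (not counting the service itself)."""
    rng = random.Random(seed)
    arrival = 0.0       # arrival time of the current customer
    service_end = 0.0   # time at which the server next frees up
    total_wait = 0.0
    for _ in range(customers):
        arrival += rng.expovariate(arrival_rate)
        start = max(arrival, service_end)   # wait if the server is busy
        total_wait += start - arrival
        service_end = start + rng.expovariate(service_rate)
    return total_wait / customers
```

For arrival rate 1 and service rate 2, queueing theory gives a mean wait of λ/(μ(μ−λ)) = 0.5, and the simulated value approaches that as the number of customers grows; comparing theory to simulation like this is itself a nice classroom exercise.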
Here's a video I put together a while back showing how to use a computer to simulate a queue:

## September 08, 2013

### Verónica Suaste Morales

#### GSoC 1st - 8th September

This week a new optimization for the FGLM adapted algorithm has been added. As expected, this new algorithm is faster at computing the Gröbner basis of the ideal associated to the linear code in the way we need it for the decoding algorithm. Here is part of the comparison of the Gröbner basis computation:

```
sage: C = RandomLinearCode(9,3,GF(4,'a'))
sage: I = C.create_ideal()
sage: %time gB = I.groebner_basis('libsingular:stdfglm')
CPU times: user 370.84 s, sys: 2.36 s, total: 373.19 s
Wall time: 375.74 s
sage: %time gb = C.groebner_basis_fglm()  # FGLM adapted algorithm
CPU times: user 16.36 s, sys: 0.10 s, total: 16.45 s
Wall time: 16.78 s

sage: C = RandomLinearCode(10,3,GF(5))
sage: %time gb = C.groebner_basis_fglm()  # FGLM adapted algorithm
CPU times: user 581.37 s, sys: 2.21 s, total: 583.58 s
Wall time: 590.37 s
sage: I = C.create_ideal()
sage: %time gB = I.groebner_basis()
CPU times: user 1331.38 s, sys: 1.14 s, total: 1332.53 s
Wall time: 1336.74 s
```

Making this comparison required implementing the function that creates the ideal associated to the code: Code

The FGLM adapted algorithm for linear codes over a finite field in the general case, and all other functions, including the new decoding algorithm, have been documented and tested.
The link to the repository: https://bitbucket.org/vsuaste/coding-theory/commits/e3dc1a07f7d304700053981596493646ca06428f

Here are some examples in which the new decoding algorithm is faster than Sage's syndrome decoding:

```
sage: C = HammingCode(2,GF(7))
sage: C
Linear code of length 8, dimension 6 over Finite Field of size 7
sage: v = random_vector(GF(7),C.length())
sage: %time C.decode(v)  # syndrome algorithm
CPU times: user 2.10 s, sys: 0.03 s, total: 2.13 s
Wall time: 2.19 s
(0, 4, 5, 4, 1, 2, 5, 4)
sage: %time C.decode_fq(v)  # new decoding algorithm
CPU times: user 0.81 s, sys: 0.02 s, total: 0.83 s
Wall time: 0.87 s
(0, 5, 5, 1, 4, 1, 4, 6)

sage: C = HammingCode(2,GF(11))
sage: v = random_vector(GF(11), C.length())
sage: v
(0, 10, 0, 8, 10, 0, 2, 5, 2, 10, 10, 4)
sage: %time C.decode_fq(v)  # new decoding algorithm
CPU times: user 53.32 s, sys: 1.45 s, total: 54.76 s
Wall time: 56.19 s
(0, 1, 0, 10, 8, 0, 0, 2, 3, 8, 8, 5)
sage: %time C.decode(v)  # syndrome algorithm: still running after hours...

sage: C = HammingCode(3,GF(4,'a'))
sage: v = random_vector(GF(4,'a'),C.length())
sage: v
(a + 1, a + 1, 0, a + 1, 0, 1, a + 1, a + 1, a + 1, a + 1, 0, 1, a, 1, a, 0, a + 1, 0, a, a + 1, 0)
sage: %time C.decode_fq(v)  # new decoding algorithm
CPU times: user 81.18 s, sys: 1.36 s, total: 82.55 s
Wall time: 84.30 s
(0, 0, 0, a + 1, 0, 1, 0, 0, 0, 0, 0, 0, a, 1, 1, 0, a + 1, 0, a, a + 1, 1)
sage: %time C.decode(v)  # syndrome algorithm: still running after hours...
```

## September 02, 2013

### William Stein

#### Status report: integrating IPython into https://cloud.sagemath.com -- my approach

I'm still working on the IPython notebook integration into https://cloud.sagemath.com right now. This will be a valuable new feature for users, since there's a large amount of good content out there being developed as IPython notebooks, and the IPython notebook itself is fast and rock solid.
I spent the last few days (it took longer than expected) creating a generic way to *securely* proxy arbitrary http services from cloud projects, which is now done. I haven't updated the page yet, but I implemented code so that

https://cloud.sagemath.com/[project-id]/port/[port number]/...

gets all http requests automatically proxied to the given port in the indicated project. Only logged-in users with write access to that project can access this URL -- with a lot of work, I think I've set things up so that one can safely create password-less non-SSL web services for a group of collaborators, and all the authentication just piggybacks on cloud.sagemath accounts and projects: it's SSL-backed (with a valid cert) security almost for free, which solves what I know to be a big problem users have.

The above approach is also nice, since I can embed IPython notebooks via an iframe in cloud.sagemath pages, and the URL is exactly the same as cloud.sagemath's, which avoids subtle issues with firewalls, same-origin policies, etc. For comparison, here's what the iframe that contains a single ipynb worksheet looks like for wakari.io:

    <iframe class="notebookiframe" id="" src="https://prod-vz-10.wakari.io:9014/auto_login/acd84627972f91a0838e512f32e09c9823782ec0?next=/notebook_relative/Listing 2.ipynb">

and here's what it's going to look like in cloud.sagemath:

    <iframe class="notebookiframe" id="" src="https://cloud.sagemath.com/70a37ef3-4c3f-4bda-a81b-34b894c89701/port/9100/Listing 2.ipynb">

With the wakari.io approach, some users will find that notebooks just don't work, e.g., students at the University of Arizona, at least if their wifi still doesn't allow connecting to nonstandard ports, as it didn't when I tried to set up a Sage notebook server there once for a big conference.
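The URL scheme above amounts to a route parser; here is a rough sketch in Python (the pattern and function names are hypothetical -- the real proxy lives in the node.js infrastructure and, crucially, also enforces project write access before forwarding anything):

```python
import re

# Hypothetical parser for URLs of the form
#   /<project-id>/port/<port number>/<path inside the project>
ROUTE = re.compile(r"^/(?P<project>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})"
                   r"/port/(?P<port>\d+)(?P<path>/.*)?$")

def proxy_target(url_path):
    """Return (project_id, port, path) for a matching URL, else None.
    A real server would additionally check that the logged-in user has
    write access to the project before proxying the request."""
    m = ROUTE.match(url_path)
    if m is None:
        return None
    return m.group("project"), int(m.group("port")), m.group("path") or "/"
```

Keeping the project id and port in the path (rather than a subdomain or a nonstandard port) is what makes the origin identical to the parent page and keeps restrictive firewalls happy.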
By having exactly the same page origin and no nonstandard ports, the way I set things up, the parent page can also directly call javascript functions in the iframe (and vice versa), which is potentially very useful. IPython notebook servers will be the first to use this framework; then I'll use something similar to serve static files directly out of projects. I'll likely also add the Sage cell server and the classic Sage notebook at some point, and maybe wikis, etc.

Having read and learned a lot about the IPython notebook, my main concern now is its approach to multiple browsers opening the same document. If you open a single worksheet with multiple browsers, there is absolutely no synchronization at all, since there is no server-side state. Either browser can and will silently overwrite the work of the other when you (auto-)save. It's worse than the Sage Notebook, where at least there is a sequence number and the browser that is behind gets a forced refresh (and a visible warning message about there being another viewer). For running your own IPython notebook on your own computer, this probably isn't a problem (just like a desktop app), but for a long-running web service, where a single user may use a bunch of different computers (home laptop, tablet, office computer, another laptop, etc.) or there may be multiple people involved, I'm uncomfortable that it is so easy for all your work to just get overwritten, so I feel I must find some way to address this problem before releasing IPython support. With cloud.sagemath, a lot of people will likely quickly start running IPython notebook servers for groups of users, since it would take about 1 minute to set up a project with a few collaborators -- then they all get secure access to a collection of IPython notebooks (and other files). So I'm trying to figure out what to do about this.
I'll probably just implement a mechanism so that the last client to open an IPython notebook gets that notebook, and all older clients get closed or locked. Maybe in a year IPython will implement proper sync, and I can remove the lock. (On the other hand, maybe they won't -- having no sync has its advantages regarding simplicity and *speed*.)

### Verónica Suaste Morales

#### GSoC Aug 26th - 31st

This week's work was to implement an FGLM alternative algorithm for computing the Gröbner basis of the ideal associated to a linear code over a finite field. So far I had worked only with the binary case. It's well known that complete decoding for a linear code is an NP-hard computational problem, so for codes of long length, or over finite fields of large cardinality, the decoding time is large (it grows exponentially). I'm still working on optimizing the code to reduce the decoding time as much as possible. One problem I'm facing is how to generate (in the wisest way) all vectors over the finite field of a given weight and a certain length. Permutations and combinations are in general expensive operations, and on top of that the number of vectors grows exponentially with the length of the code. Here is the code.
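One standard way to enumerate the vectors of a given weight, without generating and filtering all q^n vectors, is to choose the support (the positions of the nonzero entries) first and then the nonzero values. A sketch in plain Python, with small integers standing in for the nonzero finite field elements:

```python
from itertools import combinations, product

def vectors_of_weight(length, weight, nonzero_elements):
    """Yield every vector of the given length with exactly `weight` nonzero
    entries, the nonzero entries drawn from `nonzero_elements`."""
    for support in combinations(range(length), weight):
        for values in product(nonzero_elements, repeat=weight):
            v = [0] * length
            for position, value in zip(support, values):
                v[position] = value
            yield v
```

Over GF(q) one would pass the q−1 nonzero field elements, and the count produced is C(length, weight)·(q−1)^weight -- still exponential in the weight, but far smaller than q^length.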
Documentation and details for the presentation are still missing: Code

Here is an example:

```
sage: C = RandomLinearCode(6,3,GF(4,'a'))
sage: v = random_vector(GF(4,'a'),C.length())
sage: v
(0, a, a, a + 1, a, a + 1)

# time it takes with syndrome algorithm vs fglm algorithm
sage: %timeit C.decode(v)  # syndrome algorithm
1000 loops, best of 3: 709 us per loop
sage: %timeit C.decode_fq(v)  # using my implemented fglm
1 loops, best of 3: 609 ms per loop

# solutions of syndrome and fglm algorithm are different
sage: d = C.decode(v)
sage: d
(0, 1, 0, a + 1, a, a + 1)
sage: d1 = C.decode_fq(v)
sage: d1
(0, 1, a + 1, a + 1, a + 1, a + 1)

# check that both d and d1 belong to the same coset
sage: y1 = v-d
sage: y2 = v-d1
sage: H = C.check_mat()
sage: H*y1
(0, a + 1, a)
sage: H*y2
(0, a + 1, a)
```

## August 28, 2013

### William Stein

#### LaTeX in the Cloud

Motivated by work on a book and by the Stacks project, I just wrote a new web-based LaTeX editor, which I've just released. You can try it now by making a free account at https://cloud.sagemath.com, then creating a project, uploading or creating a .tex file, and opening it.

### Features

• Side-by-side LaTeX editing, with re-build on save (you can set the autosave interval if you want).
• Forward and inverse search.
• Parsing of the log file, with buttons to jump to the corresponding place in the tex file and pdf file.
• Preview uses high-resolution color png's, so it will work in browsers that don't have any support for pdf.
• The command to LaTeX your document is customizable.
• The build process should run LaTeX, bibtex, and sagetex automatically if the log file says they need to be run; otherwise you can click a button to force bibtex or sagetex to run.
• Scales up to large documents -- my test document is a book! -- for me, sitting at home working on my 134-page book, the time from making a change and clicking "save" to when it appears in the preview pane in high resolution is less than 7 seconds.
### Some advantages over https://www.sharelatex.com and https://www.writelatex.com/

Disclaimer: I'm not an expert with either of the editors mentioned above, so I may be completely wrong that the following are advantages:

• This is free (unlimited collaborators and space right now) -- my main motivation is to increase usage of Sage.
• Sagetex is fully supported.
• Forward and inverse search: jump from a point in the .tex file to the corresponding point in the pdf and conversely (it seems the competition doesn't have this, but I bet they will implement it soon after they read this post).
• High quality png preview with customizable resolution.
• Perfect quality embedded pdf view as well, if your browser supports embedded pdf's.
• If you need a full xterm for some reason, you have it: you can run arbitrary programs on that command line. This means you can download data (files, websites, databases, experimental result files, use git), process it in the most general sense of computing, and generate files or parts of files for your LaTeX document.
• It scales up to large documents more efficiently (in my limited tests), since I was pretty careful about using hashing tricks, parallel computation to generate png's, etc.
• A different synchronization implementation for multiple people editing the same file at once; the others lock the editor when the network drops, or reset the document when the connection comes back; in real life, network connections are often dropping...
• I put some effort into trying to make this latex editor work on iPad/Android, though you'll want to use a bluetooth keyboard since there are still major issues with CodeMirror and touch.

And some disadvantages:

• I've put little work into properly dealing with multi-file LaTeX documents.
• The error messages are not displayed embedded in the tex document (not sure I want this though).
• You must have a cloud.sagemath account (free) -- you can't just start editing without signing up.
• Single file download is limited to 12MB right now, so if your PDF is huge, you won't be able to just download it -- you can scp it anywhere, though, using the terminal.

### Behind the Scenes

As a professional mathematician, I've spent 20 years using LaTeX, often enhanced with little Python scripts I write to automate the build process somewhat. Also, I've spent way too much time over the years just configuring and re-configuring forward and inverse search under Linux, OS X, and Windows with various editors and previewers. All the new code I wrote to implement the LaTeX editor is client-side CoffeeScript, HTML, and CSS, which builds on the infrastructure I've developed over the last year (so, e.g., it can run bash scripts on remote linux machines, etc.). Here are some specific problems I confronted; none of the solutions are what I expected two weeks ago or first tried!

#### Problem: How should we display a PDF in the browser?

I investigated three approaches to displaying PDF files in the web browser: (1) show a bunch of images (png or jpg), (2) use a native pdf viewer plugin, and (3) use a javascript pdf renderer (namely pdf.js). Regarding (2), Chrome and Safari have a native plugin that efficiently shows a high-quality display of a complete PDF embedded in a web page, but Chromium has nothing by default. Regarding (3), the Firefox devs wrote pdf.js, which they include with Firefox by default; it looks good on Firefox, but looks like total crap in Chrome. In any case, after playing around with (2)-(3) for too long (and even adding a salvus.pdf command to Sage worksheets in cloud.sagemath), I realized something: the only possible solution is (1), for the following reasons:

• Inverse and forward search: It is impossible to read mouse clicks, page location, or control the location of the pdf viewer plugin in some browsers, e.g., in Chrome. Thus only using a PDF plugin would make inverse and forward search completely impossible. Game over.
• It might be possible to modify pdf.js to support what is needed for inverse and forward search, but this might be really, really hard (for me). Plus the rendering quality of pdf.js on Chrome is terrible. Game over.
• My test document is this book's PDF, which is about 8MB in size. With PDF viewer plugins, every time the PDF file changes, the entire 8MB pdf file has to be transferred to the browser, which just doesn't scale -- especially if you replace 8MB by 60MB (say). I want people to be able to write their books and Ph.D. theses using this editor. When editing a LaTeX document, the PDF file often changes only a little -- usually only a few pages change and everything else remains identical; only the changes should get sent to the browser, so that even a 1000-page document could be efficiently edited. This sort of thing doesn't matter when working locally, but when working over the web it is critical.

So we are stuck with (1) for the main PDF preview for a file we are actively editing using LaTeX. There is a long list of apparent drawbacks:

• One substantial drawback to (1) for general PDF display is that there is no way to do full text search or copy text out of the PDF document. Neither of these drawbacks matters for the LaTeX editor application, though, since you have the source file right there. Also, there's nothing stopping me from also providing the embedded PDF viewer, which has search and copy, and that's what I've done for cloud.sagemath.
• Another potential drawback of (1) is that it takes a long time to generate jpg or png images for a large pdf file -- 5 pages is fine, but what about 150 pages? 1000 pages? I tried using ImageMagick and Ghostscript. ImageMagick is way too slow to be useful for this. Ghostscript is incredibly powerful for this, and has a wide range of parameters, with numerous different rendering devices.
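For concreteness, a Ghostscript invocation for rendering a single page to PNG can be assembled like this. This is a sketch based on documented gs options (-sDEVICE=png16m, -r, -dDownScaleFactor, -dFirstPage/-dLastPage), not the exact command cloud.sagemath runs:

```python
def gs_page_command(pdf_path, page, png_path, dpi=200, oversample=3):
    """Build a Ghostscript command rendering one PDF page to a PNG,
    rasterizing at dpi*oversample and downsampling the output to dpi."""
    return [
        "gs", "-dBATCH", "-dNOPAUSE", "-dSAFER",
        "-sDEVICE=png16m",
        "-r%d" % (dpi * oversample),
        "-dDownScaleFactor=%d" % oversample,
        "-dFirstPage=%d" % page,
        "-dLastPage=%d" % page,
        "-sOutputFile=%s" % png_path,
        pdf_path,
    ]
```

Restricting to one page per invocation is what makes it easy to render many pages in parallel and at different quality levels.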
The solution I chose here is to: (1) generate a high quality PNG image just for the currently visible pages (and +/-1), then (2) generate medium quality pages in some neighborhood of the visible pages, then (3) generate low quality PNG's for all the other pages. All this is done in parallel, since the host VM's have many cores. Also, we compute the sha1 hashes of the previews, and if the browser already has them, we don't bother to update those images. Finally, it turns out to be important to replace high quality images by lower quality ones as the user scrolls through the document, since otherwise the browser can end up using too much memory. A useful trick for the high quality pages is using ghostscript's downsampling feature, so the PDF is rendered at 600dpi (say) in memory, but output at 200dpi to the PNG. So the Preview tab in the LaTeX editor shows a png-based preview whose quality automatically enhances as you scroll through the document. This png preview will work on any browser (for which cloud.sagemath works), regardless of PDF plugins.

Summary: It is critical to realize exactly what problem we're trying to solve, which is viewing a PDF that is often changing locally. This is completely different from the general problem of viewing a static PDF, editing a PDF itself, or even annotating one.

#### Problem: how to implement forward and inverse search in the browser

Forward and inverse search let you easily jump back and forth between a point in the source tex file and the corresponding point in the rendered PDF preview. You need this because editing LaTeX documents is not WYSIWYG (unless you are using something like Lyx or Texmacs), and without this feature you might find yourself constantly being lost, doing fulltext searches through the source or pdf file, and generally wasting a lot of effort on something that should be automatic.
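The solution that eventually worked (described next) boils down to shelling out to the synctex command-line tool. A sketch of building the two invocations, with flags following the synctex utility's documented `view`/`edit` interface and illustrative paths:

```python
def synctex_forward(line, column, tex_path, pdf_path):
    """Forward search: a position in the tex source -> a place in the pdf."""
    return ["synctex", "view",
            "-i", "%d:%d:%s" % (line, column, tex_path),
            "-o", pdf_path]

def synctex_inverse(page, x, y, pdf_path):
    """Inverse search: a click at point (x, y) on a pdf page -> a tex line."""
    return ["synctex", "edit",
            "-o", "%d:%.2f:%.2f:%s" % (page, x, y, pdf_path)]
```

In each direction one runs the command (e.g. with subprocess) and parses the `Line:`/`Page:`-style fields out of its plain-text output; the click coordinates on the PNG preview have to be scaled to PDF points first.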
The first time I used inverse and forward search was around 2004 with the WinEdt and TeXShop editors, which I think (at the time) used various heuristics to implement them, since they often didn't quite work right. I 100% assumed that I would have to use heuristics for cloud.sagemath, and started working on a heuristic approach based on pdftotext, page percentages, etc. Then one morning I searched and learned about synctex, which was "recently" added to the core of pdflatex. The first thing I did was run it, look at the output file, and try to parse it with my eyes -- that didn't work. I then searched everywhere and could not find any documentation about the format of the synctex files; however, I found a paper by the author of synctex and read straight through it. In that paper, the author mentions providing a C library and a C program to parse synctex files, and explicitly doesn't document the format, not wanting anybody to write programs to parse it, so as to reserve the right to significantly change it. No problem -- so I just call out to the shell and run the synctex program itself with appropriate options. With a little research into scaling factors, etc., I'm able to map mouse clicks on the png to the data synctex needs to get the corresponding location in the source file. This is all actually pretty easy and provides forward and inverse search with absolutely no hacks or heuristics. Also, forward search works well since using PNG's to display the preview means one can precisely set the preview location.

#### Problem: making sense of the LaTeX log file

When you build a LaTeX document, tex spits out a log file full of filenames, parentheses, warnings, errors, etc., sometimes stopping to ask you questions, sometimes refusing to exit. This file is NOT (by default, at least) easy for a human to read, at least not for me!
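As a toy illustration of the parsing problem (the actual parsing in cloud.sagemath is done by an existing Javascript library), LaTeX errors start with `! ` and are followed, possibly a few lines later, by an `l.<n>` line giving the source line number:

```python
import re

def parse_latex_errors(log_text):
    """Collect (message, source line) pairs from a LaTeX log: an error starts
    with '! ', and a following 'l.<n>' line gives the source line number."""
    errors = []
    pending = None
    for line in log_text.splitlines():
        if line.startswith("! "):
            pending = line[2:]
        elif pending is not None:
            m = re.match(r"l\.(\d+)", line)
            if m:
                errors.append((pending, int(m.group(1))))
                pending = None
    return errors
```

A real parser also has to track the parenthesized file-open/file-close stack to know which file each `l.<n>` refers to, which is exactly the balancing act complained about above.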
You can see an error message that refers to a specific location in a file, but which file that is is often listed hundreds of lines earlier, and you must manually balance parentheses to figure it out. I read some documents about its format, and fortunately found this Javascript library, which parses LaTeX logs. Cloud runs pdflatex using the option -interact=nonstopmode, so that the whole file gets processed, then parses the log file and displays first errors, then typesetting issues (overfull hboxes, etc.), and finally warnings. Each message has two buttons -- one to jump to the corresponding location in the tex file, and one to jump to the location in the pdf preview. This is all easy to use, and I've found myself for the first time ever actually going through tex files and cleaning up the overfull hboxes. The log file also says when to run sagetex and bibtex, and whether or not to run pdflatex again to update cross references, and cloud parses that and automatically runs those tools. For some reason, sagetex doesn't say "run me again" even though it should when you update existing blocks, and then you have to do it manually by clicking a button.

Summary: I hope this LaTeX editor in the Sagemath Cloud is useful to people who just want to edit tex documents, play around with sagetex, and not have to worry about configuring anything. Implementing it was fun and interesting. If you have any questions about the technical details, please ask! Enjoy.

## August 25, 2013

### Verónica Suaste Morales

#### GSoC Aug 18th - 25th

This week I have implemented a new decoding algorithm for binary linear codes using the Gröbner basis of the ideal associated to the code. In the binary case I have decided to use the FGLM algorithm from Singular to compute the Gröbner basis; this algorithm is very fast. So first I construct the ideal associated to the code. Then I create a test-set for the code, and use it to apply the descent decoding algorithm.
More details about this algorithm: Theory

The process of computing the Gröbner basis and the test-set is only executed once, and after that the decoding process is very fast (a decoding algorithm with pre-computed information). I have done some tests to analyze the time it takes with the different decoding algorithms and different types of binary linear codes. Results of experiments: Testing Algorithm

Also, I have documented the functions with examples and more. The code of this algorithm and its functions is in the commit: https://bitbucket.org/vsuaste/coding-theory/commits/4d5c8255eaec112b191eb095873eaf09e5708ed4

With this algorithm I finished the algorithms for binary codes. So from now on, I have started trying to generalize the algorithms to the case of codes over any finite field. In this case we expect to gain some speed with a new version of the FGLM algorithm implemented specifically for the general case of codes.

## August 21, 2013

### Rasmi Elasmar

#### Sage for Android Testing APK Now Available!

After much revision and cleaning-up, the Sage Android application is now at a point where most basic features are functional, and bug reporting, feature requests, and general feedback are needed as work on the application progresses. If you’d like to try the latest APK, you can download it here. Features are always being added (an updated APK with History and Favorites will be available soon!), and you can track the latest updates at the GitHub repository. As always, feedback and suggestions are much appreciated. Thank you!

## August 19, 2013

### Eviatar Bach

#### Improvements to special functions in Sage

## August 04, 2013

### Ondrej Certik

#### How to support both Python 2 and 3

I'll start with the conclusion: making a backwards incompatible version of a language is a terrible idea, and it was a bad mistake.
This mistake was somewhat corrected over the years by eventually adding features to both Python 2.7 and 3.3 that actually allow running a single code base on both Python versions --- which, as I show below, was discouraged by both Guido and the official Python documents (though the latest docs mention it)... Nevertheless, a single code base fixes pretty much all the problems and it actually is fun to use Python again. The rest of this post explains my conclusion in great detail. My hope is that it will be useful to other Python projects by providing tips and examples of how to support both Python 2 and 3, as well as to future language designers, to keep languages backwards compatible. When Python 3.x got released, it was pretty much a new language, backwards incompatible with Python 2.x, as it was not possible to run the same source code in both versions. I was extremely unhappy about this situation, because I simply didn't have time to port all my Python code to a new language. I read the official documentation about how the transition should be done, quoting:

You should have excellent unit tests with close to full coverage.

1. Port your project to Python 2.6.
2. Turn on the Py3k warnings mode.
3. Test and edit until no warnings remain.
4. Use the 2to3 tool to convert this source code to 3.0 syntax. Do not manually edit the output!
5. Test the converted source code under 3.0.
6. If problems are found, make corrections to the 2.6 version of the source code and go back to step 3.
7. When it's time to release, release separate 2.6 and 3.0 tarballs (or whatever archive form you use for releases).

I've also read Guido's blog post, which repeats the above list and adds an encouraging comment: "Python 3.0 will break backwards compatibility. Totally. We're not even aiming for a specific common subset." In other words, one has to maintain a Python 2.x code base, then run the 2to3 tool to get it converted.
If you want to develop using Python 3.x, you can't, because all code must be developed using 2.x. As to the actual porting, Guido says in the above post: "If the conversion tool and the forward compatibility features in Python 2.6 work out as expected, steps (2) through (6) should not take much more effort than the typical transition from Python 2.x to 2.(x+1)." So sometime in 2010 or 2011 I started porting SymPy, which is now a pretty large code base (sloccount says over 230,000 lines of code, and in January 2010 it said almost 170,000 lines). I remember spending a few full days on it, and I just gave up, because it wasn't just a matter of changing a few things, but of changing pretty fundamental things inside the code base, and one cannot do it half-way; one has to get all the way through and then polish it up. We ended up using one full Google Summer of Code project for it; you can read the final report. I should mention that we use metaclasses and other things that make such porting harder. Conclusion: this was definitely not "the typical transition from Python 2.x to 2.(x+1)". Ok, after months of hard work by a lot of people, we finally had a Python 2.x code base that can be translated using the 2to3 tool, and it works and the tests pass in Python 3.x. The next problem is that Python 3.x is pretty much like a ghetto -- you can use it as a user, but you can't develop in it. The 2to3 translation takes over 5 minutes on my laptop, so any interactivity is gone. It is true that the tool can cache results, so the next pass is somewhat faster, but in practice this still turns out to be much, much worse than any compilation of C or Fortran programs (done for example with cmake), both in terms of time and in terms of robustness. And I am not even talking about pip issues or setup.py issues regarding calling 2to3. What a big mess... Programming should be fun, but this is not fun. I'll be honest: this situation killed a lot of my enthusiasm for Python as a platform.
I learned modern Fortran in the meantime, and with admiration I noticed that it still compiles old F77 programs without modification; I even managed to compile a 40-year-old pre-F77 code with just minimal modifications (I had to port the code to F77). Yet modern Fortran is pretty much a completely different language, with all the fancy features that one would want. Together with my colleagues I created the fortran90.org website, where you can compare Python/NumPy side by side with modern Fortran; it's pretty much a 1:1 translation with a similar syntax (for numerical code), except that you need to add types of course. Yet Fortran is fully backwards compatible. What a pleasure to work with! Fast forward to last week. A heroic effort by Sean Vig, who ported SymPy to a single code base (#2318), was merged. Earlier this year similar pull requests by other people converted the NumPy (#3178, #3191, #3201, #3202, #3203, #3205, #3208, #3216, #3223, #3226, #3227, #3231, #3232, #3235, #3236, #3237, #3238, #3241, #3242, #3244, #3245, #3248, #3249, #3257, #3266, #3281, #3191, ...) and SciPy (#397) code bases as well. Now all these projects have just one code base and it works in all Python versions (2.x and 3.x) without the need to call the 2to3 tool. Having a single code base, programming in Python is fun again. You can choose any Python version, be it 2.x or 3.x, and simply submit a patch. The patch is then tested using Travis-CI, so that it works in all Python versions. Installation has been simplified (no need to call any 2to3 tools and no more hacks to get setup.py working). In other words, this is how it should be: you write your code once, and you can use any supported language version to run it, compile it, or develop in. But for some reason, this obvious solution has been discouraged by Guido and other Python documents, as seen above. I just looked up the latest official Python docs, and that one is not upfront negative about a single code base.
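In practice, supporting both versions from one code base comes down to a handful of small compatibility shims rather than any translation step. A minimal sketch of the technique (my own illustration; real projects used helper modules such as six or their own compat files):

```python
import sys

PY3 = sys.version_info[0] >= 3

if PY3:
    string_types = (str,)          # Python 3: text is just `str`
    integer_types = (int,)
else:
    string_types = (basestring,)   # noqa: F821 -- Python 2 only
    integer_types = (int, long)    # noqa: F821 -- Python 2 only

def iteritems(d):
    """Iterate over a dict's items on both Python versions."""
    return iter(d.items()) if PY3 else d.iteritems()

# The same lines run unchanged under either interpreter:
assert isinstance("hello", string_types)
assert dict(iteritems({"a": 1})) == {"a": 1}
```

Branches like these are typically collected in one compat module and imported everywhere else, so the rest of the code base stays clean.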
But it still does not recommend this approach as the one. So let me fix that: I do recommend a single code base as the solution. The newest Python documentation from the last paragraph also mentions: "Regardless of which approach you choose, porting is not as hard or time-consuming as you might initially think." Well, I encourage you to browse through the pull requests that I linked to above for SymPy, NumPy or SciPy. I think it is very time-consuming, and that's just converting from 2to3 to a single code base, which is the easy part. The hard part was to actually get SymPy to work with Python 3 (as I discussed above, that took a couple of months of hard work), and I am pretty sure it was pretty hard to port NumPy and SciPy as well. The docs also say: "It [a single code base] does lead to code that is not entirely idiomatic Python." That is true, but our experience has been that with every Python version that we drop, we also delete lots of ugly hacks from our code base. This has been true for dropping support for 2.3, 2.4 and 2.5, and I expect it will also be true for dropping 2.6 and especially 2.7, when we can simply use the Python 3.x syntax. So not a big deal overall. To sum this blog post up, as far as I am concerned, pretty much all the problems with supporting Python 2.x and 3.x are fixed by having a single code base. You can read the pull requests above to see how to implement things (like metaclasses, and other fancy stuff...). Python is still quite the same language: you write your code, you use a Python version of your choice, and things will just work. Not a big deal overall. The official documentation should be fixed to recommend this approach, and deprecate the other approaches. I think that Python is great and I hope it will be used more in the future. Written with StackEdit.

## July 31, 2013

### Verónica Suaste Morales

#### GSoC: Results of GDDA, a new decoding algorithm

As part of the evaluation for the GSoC project I present the results obtained so far.
I have implemented a new decoding algorithm, GDDA (gradient descent decoding algorithm), based on the Grobner representation of a code. The idea of this algorithm is that once the Grobner representation is computed, the decoding step is pretty straightforward and takes little time. That's why I save the Grobner representation as an attribute of the code, so you only have to compute it once. I also modified the format of the output of the grobner_representation method: it now returns a dictionary representing the Matphi function instead of a list. GDDA and Grobner representation code. Here I leave you the comparison I've made of GDDA with decoding algorithms already implemented in Sage, such as "syndrome", "nearest neighbor" and "guava". The results are very interesting. I've tried Hamming codes, the extended Golay code and random linear codes. With a Hamming code, GDDA turned out to be faster than the "syndrome", "nearest neighbor" and "guava" algorithms, even considering the time it takes to compute the Grobner representation. With a random linear code, GDDA turned out to be faster than "syndrome" and "nearest neighbor", again even considering the time GDDA takes to compute the Grobner representation. With the extended Golay code, "syndrome" and "nearest neighbor" turned out to be faster than GDDA. GDDA Comparison

#### GSoC Sixth Week (July 20-27)

This week I documented and tested the functions I have so far, with the final purpose of opening the ticket, which I have done. Here I leave you the link: http://trac.sagemath.org/ticket/14973 About the function "covering_rad()": I merged it with the existing "covering_radius()". With the parameter "algorithm" you can indicate which one you want to use: algorithm="guava" uses the pre-existing method (which requires the GAP optional package guava), and algorithm=None uses my implementation, which doesn't require optional packages.
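For intuition, the covering radius is the largest weight of any coset leader, or equivalently the largest distance from any word to the code. A brute-force pure-Python sketch for tiny binary codes (my own illustration, not the ticket's implementation):

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length binary tuples."""
    return sum(a != b for a, b in zip(u, v))

def covering_radius(code):
    """Max over all length-n words of the distance to the nearest codeword.

    Exhaustive over the whole ambient space, so exponential in the
    length -- only sensible for toy codes.
    """
    n = len(next(iter(code)))
    return max(min(hamming(w, c) for c in code)
               for w in product((0, 1), repeat=n))

# Toy example: the [3, 1] repetition code has covering radius 1.
rep_code = {(0, 0, 0), (1, 1, 1)}
print(covering_radius(rep_code))  # -> 1
```

The Grobner-representation approach avoids this exhaustive search; a brute force like the above is only useful as a correctness reference on small examples.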
Code. I also changed the subroutine "insert_next": I was sorting the list every time I inserted a new element, which wasn't necessary. Now, every time I insert a new element, it simply looks for the right place with respect to the specified order. Code

## July 24, 2013

### Verónica Suaste Morales

#### GSoC Preparing Ticket

Right now I'm preparing the ticket for Sage. Here I leave you the documentation I have so far. It is supposed to be part of the LinearCode class in linear_code.py. Please let me know any comments about the way I did it. I tried to follow what the developers' guide says, but I could be missing something. Possible Ticket documented. I also leave you the .py file, because in the preview (above) some things get changed (for example, you can't see the different color of commented lines...): documentation.py. Note: all the functions I have receive a degree ordering (an instance of the TermOrder class). I'm thinking of changing that: instead, they could receive a string with the name of the degree ordering, and I would create the instance of TermOrder inside the function. I think it would be much more intuitive.

## July 22, 2013

### Verónica Suaste Morales

#### GSoC Report (Third Week)

This time I implemented a function that, given a linear code and a degree ordering, returns the set of coset leaders. The important thing about this function is that once we have it, we can calculate parameters of the code almost directly, such as the Newton radius, the covering radius and the weight distribution of the cosets. I implemented these as well. The algorithm and explanation go here: Algorithm. The code of the function goes here: Code. And I have already tested it with some examples in Sage: Examples. Notes: the algorithm for computing coset leaders is almost the same as the one for computing the Grobner representation (Second Week). So the idea is that if the Grobner representation is computed, we save the set of coset leaders as an attribute of the code.
In this way we don't have to compute the set of coset leaders every time we want one of the parameters I mentioned before. We only have to do it once.

## July 21, 2013

### Verónica Suaste Morales

#### GSoC Fifth Week (14-19 July)

This week the main goal was to implement the function that, given a binary linear code, returns a reduced Grobner basis. We want the Grobner basis in order to later compute the test-set (see the definition in the pdf). This structure is the last thing we'll need before starting to program the new decoding algorithm, which is the main objective of this project. Here I attach the explanation of the algorithm and also the way I use the implemented function, with some examples tested in Sage: Algorithm. Here is the implementation of the function: Code. So far, I've tried to use what is already implemented in Sage. But one problem I ran into with this function is that I need all permutations of a vector with a given Hamming weight. I did this using "IntegerVectorsModPermutationGroup"; nevertheless, this part is very time-consuming, and it is not going to work for codes of large length. So, for next week I'm planning to find another way of implementing this, at least for the binary case, which I think should be possible with bitwise operations.

## July 20, 2013

### PolyBoRi Blog

#### GSOC 2013 project progress for weeks 0-4

I am Ioana Tamas, and this is a summary of the weekly updates of my GSOC 2013 project for PolyBoRi. Hopefully, the next updates will be posted in separate blog posts. My repository is: http://sourceforge.net/p/libzdd/code. Weeks 0 & 1: I have been allocating a lot of time to learning how to work with big code bases etc., so the progress may not be very outstanding.
Also, I figured that I will not be able to follow the proposed timeline, one example being the fact that updating the reference and memory allocation methods can't be done without updating the main classes and structures at the same time (which appear as later milestones). Now, a summary of the code I have written until now:

1) I began the ZDD class, which will contain all the members and methods of the Cudd ZDD class, minus the ones inherited from Cudd classes that are of no use for PolyBoRi (they were implemented in Cudd for handling more types of decision diagrams).
2) I started writing the structures needed for the completeness of the class definition, though I have not decided yet whether I should have classes instead of structures. I also removed the structure members that are not useful for our purpose.
3) I have set up testing files, but I haven't included meaningful unit tests yet.
4) I added the Cudd code base to my directory, and I will gradually delete the files that are not useful until there is no Cudd left.
5) I made a namespace "libzdd" to avoid name clashes. However, the way I did this may cause problems when I need Cudd files.

To finalize and summarize, the things I still have issues with are:

- constructing the library using SCons, so that it accepts the Boost unit test framework and takes the Cudd files into consideration
- correctly using the namespace
- choosing meaningful unit tests

Week 2: During the past week, I started transforming Cudd's structs into classes and kept working on my main source and header file. After revising the timeline a little bit, I decided that I should take a break from what I started (so I temporarily commented out a great part of my code) in order to work more on actually providing something that can be tested, using some temporary definitions. This part is still in progress.
Week 3: The past week I temporarily kept only my Zero-Suppressed Decision Diagram class definition, using typedefs for the other relevant classes for the moment. This way, I was able to set up some Boost tests and (more difficult than I expected) pass them. They test the things implemented so far, which are the constructors and some operators. I also changed the structure of the repository, making nice and clean header and source files for everything that I currently use. I also deleted a few things that I had introduced too early, but they will come back at the right time (they are saved somewhere on my computer for now). Week 4: This week, I made some significant progress in making the zdd_ZDD class (my project's main class) avoid using raw pointers, by wrapping the access to the Diagram Manager for all the constructors and operators. In order to achieve this, I made use of Boost shared pointers, and I introduced a new class. I also introduced a unique table, with its own class (that will soon totally replace Cudd's), using Boost tuples and std maps.

## July 16, 2013

### Eviatar Bach

#### Confluent hypergeometric functions in Sage

The confluent hypergeometric functions are solutions to Kummer's differential equation, $$z\frac{d^2w}{dz^2} + (b-z)\frac{dw}{dz} - aw = 0.$$ Two linearly independent solutions are the functions denoted $M$ and $U$, which I implemented symbolically in Sage ($U$ was already implemented, but only numerically).
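Numerically, $M(a,b,z)$ (also written ${}_1F_1(a;b;z)$) can be checked directly from its defining series $\sum_{n\ge 0} \frac{(a)_n}{(b)_n}\frac{z^n}{n!}$. A small pure-Python sketch of that series, my own illustration and independent of the Sage implementation:

```python
def kummer_M(a, b, z, terms=60):
    """Truncated series for Kummer's M(a, b, z) = sum (a)_n/(b)_n z^n/n!.

    Each step multiplies the previous term by (a+n)/(b+n) * z/(n+1),
    which is exactly the ratio of consecutive series terms.
    """
    total = 0.0
    term = 1.0  # the n = 0 term
    for n in range(terms):
        total += term
        term *= (a + n) / (b + n) * z / (n + 1)
    return total

# M(1, 1, z) collapses to exp(z), so M(1, 1, 1) should be e.
print(abs(kummer_M(1, 1, 1) - 2.718281828459045) < 1e-12)  # -> True
```

Sage's symbolic hypergeometric_M should agree with this truncated series for moderate arguments, up to truncation error.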
Here are some things it can do:

sage: (hypergeometric_M(1, 1, 1) +
....:  hypergeometric_U(1, 2, 1)).simplify_hypergeometric()
e + 1
sage: hypergeometric_U(1, 3, x).simplify_hypergeometric()
(x + 1)/x^2
sage: hypergeometric_M(1, 3/2, 1).simplify_hypergeometric()
1/2*sqrt(pi)*e*erf(1)
sage: hypergeometric_U(2, 2, x).series(x == 3, 100).subs(x=1).n()
0.403652637676806
sage: hypergeometric_U(2, 2, 1).n()
0.403652637676806

As far as I can tell, no open-source computer algebra system has this level of simplification of confluent hypergeometric functions; Maxima is used here, but it is not wrapped like this in Maxima. I hope this will prove useful for those working with hypergeometric functions or differential equations in Sage. It's fairly trivial to implement the Whittaker functions in a similar way; the reason I haven't yet is that I didn't want to make the patch larger and stall review, and because the Maxima conversions are a bit trickier. The newly implemented dynamic attributes for symbolic expressions have proved enormously useful for this ticket, as well as for the generalized hypergeometric functions and for Volker Braun's new implementation of piecewise functions. I may write a new section for the Developer's Guide which explains symbolic functions in detail, including the dynamic attributes. I uploaded a patch at #14896; however, since it makes use of the generalized hypergeometric framework of #2516, that patch has to be merged first. Any help with review would be greatly appreciated!

## July 12, 2013

### Eviatar Bach

#### Google Summer of Code status update

My work was delayed for a few days due to having to deal with some registration issues at my university. I now have all the main features of hypergeometric functions working, with an initial patch at #2516, and I also fixed a bug in a dependency, #14858. Now I am just expanding the documentation and adding tests.
I also wrote a patch for unholding expressions with held operations, #10034, which I initially thought I would use for implementing simplification of hypergeometric functions, but I ended up doing it differently. It is useful for other purposes, however.

## July 08, 2013

### Verónica Suaste Morales

#### GSoC Second Week (June 24-29)

The work for this week was to implement a function which, given a binary linear code, returns its Groebner representation. This Groebner representation will later help us compute the Groebner basis of the ideal associated to the linear code. You can check the details about this Groebner representation and the algorithm here: Algorithm. The process for implementing this function was as follows:

- understand what is already implemented regarding monomial orderings
- implement sub-functions such as insert_next, next_term and member (see the pdf above)
- implement the function groebner_representation

Code of these functions: Code. Also, this algorithm has been tested with examples. You can see this work on Sage Cloud. My project "Grobner_Project" is public, but only my mentors and I can modify it. After completing this algorithm for computing the Grobner basis I'll make the patch.

## July 07, 2013

### Verónica Suaste Morales

#### GSoC First Week (June 14-21)

My project submitted to GSoC has been accepted! Great start!! In order to get involved with Sage development, during the first week of work I had the opportunity to attend Sage Days 48. This workshop was very inspiring for me. I met other Sage developers and learnt about their work, about the different ways of contributing to Sage, and the easiest way to do it. What I liked most was the introduction to Sage @ cloud. Here you can find a description of this project and what you can do with it: Sage Cloud. This tool has become essential in my project. Advantages I have with Sage Cloud:

- multiple open windows (with synchronized files!
:) ), so you can test what you just changed in the source code without opening and closing windows and files
- everything you can do with a terminal
- sharing the project with my mentors
- being able to work anywhere without needing my laptop

In conclusion, so far it has been very convenient, and I'm still discovering and exploring how to take advantage of this awesome tool. I leave you the link to the presentation by William Stein in case you want (you should!) to know more about it: http://www.youtube.com/watch?v=MZ4gF33cLfQ&feature=youtu.be

## July 02, 2013

### Ondrej Certik

#### My impressions from the SciPy 2013 conference

I attended the SciPy 2013 conference in Austin, Texas. Here are my impressions. Number one is the fact that the IPython notebook was used by pretty much everyone. I use it a lot myself, but I didn't realize how ubiquitous it has become. It is quickly becoming the standard now. The IPython notebook uses Markdown, and in fact it is better than ReST. The way to remember the "[]()" syntax for links is that in regular text you put links into () parentheses, so you do the same in Markdown, and prepend [] for the text of the link. The other way to remember it is that [] feels more serious and thus is used for the text of the link. I stressed several times to +Fernando Perez and +Brian Granger how awesome it would be to have interactive widgets in the notebook. Fortunately that was pretty much preaching to the choir, as that's one of the first things they plan to implement good foundations for, and I just can't wait to use that. It is now clear that the IPython notebook is the way to store computations that I want to share with other people, or to use as a "lab notebook" for myself, so that I can remember exactly what I did to obtain the results (for example, how exactly I obtained some figures from raw data).
In other words --- instead of having sets of scripts and manual bash commands that have to be executed in a particular order to do what I want, just use the IPython notebook and put everything in there. Number two is how big the conference has become since the last time I attended (a couple of years ago), yet it still has the friendly feeling. Unfortunately, I had to miss a lot of talks due to scheduling conflicts (there were three parallel sessions), so I look forward to seeing them on video. +Aaron Meurer and I gave the SymPy tutorial (see the link for videos and other tutorial materials). It was nice to finally meet +Matthew Rocklin (a very active SymPy contributor) in person. He also had an interesting presentation about symbolic matrices + Lapack code generation. +Jason Moore presented PyDy. It was a great pleasure for us to invite +David Li (still a high school student) to attend the conference and give a presentation about his work on sympygamma.com and live.sympy.org. It was nice to meet the Julia guys, +Jeff Bezanson and +Stefan Karpinski. I contributed the Fortran benchmarks on Julia's website some time ago, but I had the feeling that a lot of them are quite artificial and not very meaningful; I think Jeff and Stefan confirmed my feeling. Julia seems to have quite an interesting type system and multiple dispatch that SymPy should learn from. I met the VTK guys +Matthew McCormick and +Pat Marion. One of the keynotes was given by +Will Schroeder from Kitware, about publishing. I remember him stressing to manage dependencies well and to use a BSD-like license (as opposed to viral licenses like the GPL or LGPL), and that open source has pretty much won (i.e., it is now clear that that is the way to go).
I had great discussions with +Francesc Alted, +Andy Terrel, +Brett Murphy, +Jonathan Rocher, +Eric Jones, +Travis Oliphant, +Mark Wiebe, +Ilan Schnell, +Stéfan van der Walt, +David Cournapeau, +Anthony Scopatz, +Paul Ivanov, +Michael Droettboom, +Wes McKinney, +Jake Vanderplas, +Kurt Smith, +Aron Ahmadia, +Kyle Mandli, +Benjamin Root and others. It was also nice to have a chat with +Jason Vertrees and other guys from Schrödinger. One other thing that I realized last week at the conference is that pretty much everyone agreed that NumPy should act as the default way to represent memory (no matter whether the array was created in Fortran or other code) and allow manipulations on it. Faster libraries like Blaze or ODIN should then hook themselves into NumPy using multiple dispatch. SymPy would then also hook itself up so that it can be used with array operations natively. Currently SymPy does work with NumPy (see our tests for some examples of what works), but the solution is a bit fragile (it is not possible to override NumPy behavior, but because NumPy supports general objects, we simply give it SymPy objects and things mostly work). Similarly, I would like to create multiple dispatch in the SymPy core itself, so that other (faster) libraries for symbolic manipulation can hook themselves up, and their own (faster) multiplication, expansion or series expansion would get called instead of the SymPy default one implemented in pure Python. Other blog posts from the conference:

## June 29, 2013

### Eviatar Bach

#### Google Summer of Code: Summary of the first two weeks

It has been two weeks since I started my Google Summer of Code project for Sage. Before the start date, I began on the benchmark framework and on adding SymPy conversions to functions that lacked them. The latter will make conversions from Sage to SymPy expressions better, although the reverse is still a problem.
This is because Sage doesn't parse SymPy expressions as it would for Maxima, for example; it calls a _sage_ method on the SymPy objects, which means that Sage has to rely on SymPy developers to add the conversions, which hasn't been reliable so far. Maybe this should be changed in the future. For the first week I was fortunate to be at Sage Days 48, ostensibly organized for working on the Sage Notebook, although some, including me, worked on different parts of Sage. I really enjoyed it and learned a lot. Burcin Erocal wrote some patches adding dynamic attributes to symbolic expressions (#9556) and allowing symbolic functions to take tuple arguments (#14780), which lay the foundation for implementing hypergeometric functions smoothly. I worked on getting some special function-related patches merged, including making Bessel functions symbolic (#4102), making Airy functions symbolic (#12455), and fixing numerical evaluation of log_gamma (#12521, a palindrome ticket!). I also started working on hypergeometric functions. This past week I have been mostly working on the hypergeometric functions (see here for an introduction), building on Fredrik Johansson's code in #2516. It needs some modification, since it was implemented in a way that works around the aforementioned limitations of symbolic expressions, which have now been removed with Burcin's patches. I've been making progress and should have a patch next week. I'm going to make sure I'm diligent with uploading and following through with patches, since I've heard this is sometimes a problem with GSoC projects.

## June 28, 2013

### Rasmi Elasmar

#### First Handshake: Meeting the Sage Cell Server

The first and most immediately important task of updating the Sage Android application as part of Google’s Summer of Code is to update the way the app interacts with the Sage Cell Server, which performs the calculations in the cloud so that the Android device doesn’t have to do them locally.
Currently, the application communicates with the server through a series of HTTP requests — the client (our app) initializes the connection, then sends and receives query data back and forth from the server. This all sounds fine and efficient, but by relying solely on HTTP requests, the client ends up having to constantly poll the server to check the status of the calculation (to see if there are any updates), which is inefficient and not ideal for a light application on a light device. It’s a decent way of doing things, but the year is 2013, and the Sage Cell Server now supports WebSocket, so my first task is to update the app’s client-server interactions so that users can once again send calculations and receive results. First, the client must make initial contact with the server in a sort of “handshake” between the two. The app will send an HTTP request, and the server will reply with connection details, at which point the WebSocket connection may be established. This seems easy to implement, but as someone who hasn’t spent much time with networking in Java, I had some difficulty getting hands to shake. The requests I was sending were seemingly correct, but the response I received was similar to that of visiting the site directly from a web browser — 405: Method Not Allowed. How could I let the server know that the app was special, and that it wanted to have meaningful interactions together? With some help from Volker, I was able to see that the issue was in the headers of my request. First, it seemed as if I should use GET instead of POST, which turned out not to be the case. [Screenshot: the POST I was looking for (via the Python client), as revealed by Wireshark.] I was advised to make use of Wireshark to inspect each request I was making (using the Python sample client as my example of proper HTTP etiquette). A POST request was being made, but it was being made to the wrong URI — I ended up using Java’s built-in URI functionality to generate one properly.
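The server's handshake reply is a small JSON object carrying a kernel_id and a ws_url (those field names are from this post); the parsing step is the same in any language. A hedged Python sketch of just that step, with a made-up sample reply (the values and URL are illustrative, and the HTTP request itself is omitted):

```python
import json

def parse_handshake(response_text):
    """Extract the kernel id and WebSocket URL from the server's
    JSON handshake reply (field names as described in this post)."""
    reply = json.loads(response_text)
    return reply["kernel_id"], reply["ws_url"]

# A made-up reply of the documented shape:
sample = '{"kernel_id": "abc123", "ws_url": "ws://example.org/kernel/"}'
kernel_id, ws_url = parse_handshake(sample)
print(kernel_id, ws_url)  # -> abc123 ws://example.org/kernel/
```

On Android the same reply would come back from the HTTP client and feed straight into the WebSocket library's connect call.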
At this point, all that was holding my request back from server-side acceptance was its headers. By adding Accept-Encoding: identity, I finally had initialized a proper POST request, as indicated by the server’s response. Success! At this point, the server responds with a simple JSON object that contains a “kernel_id” and the “ws_url”, indicating that it is ready to begin our session with the information provided. Using the WebSocket URL and the kernel ID, it is simple at this point to establish a connection through one of the many fine WebSocket libraries that exist for Android. Two connections are made: an IOPub socket and a Shell socket. Once the connection is established, the rest is simply a matter of sending and receiving calculations and results in JSON form on both channels. The IOPub channel details the status of the calculation, while the Shell channel deals with the calculation and results themselves. Data is only sent and received when something has actually happened (at which point either the client or server can react accordingly), and once everything is done, the connection is closed. Especially when considering more advanced features such as Interacts, the improvement in networking and overall efficiency and simplicity from switching over to these sockets is clear. Handling results and sending/receiving calculation data in general should be simpler now that the client/server networking process is simplified. Now, it is a matter of getting everything to work together in order for us to run calculations from the app itself. Although at this moment it has no functional networking capabilities (and therefore cannot perform calculations), you can download the older version of the Sage Math Android app from the Play Store and view (and even contribute to!) the source on GitHub. You can also read more about the Sage Cell Server and its interactions on the GitHub Wiki.

## June 16, 2013

### Vince Knight

#### Why and how: open education resources.
This is the fourth post in a series of posts reflecting on the teaching and learning in a recent course I've taught on SAS and R. This post will be quite different in nature to the previous posts, which looked at students' choices between SAS and/or R in their assessment: here I would like to talk about how I deliver teaching materials (notes, exercises, videos etc.) to my students. All of the teaching materials for the course can be found here: drvinceknight.github.io/MAT013/. How they got there and why I think it's a great place for them to be will be what I hope to discuss...

A Virtual Learning Environment that was not as good as alternatives.

At +Cardiff University we have a VLE that is automatically available to all our students and that all lecturers are encouraged to use. So when I started teaching I diligently started using the service, but it had various aspects that did not fit well with my workflow (having to upload files on every change, a clunky interface that actually seemed optimised for IE, and various other things). It was also awkward (at the time; I believe this has been addressed now) for students to use the environment on smart phones etc... As an alternative, I set up a very simple website using Google Sites and would use +Dropbox's public links to link pdfs and other resources for my students. An example of such a delivery is these basic Game Theoretical materials. This gave me great control: I no longer had to mess around with uploading versions of files, every change I made was immediately online, and as the site was pretty simple (links and pdfs) it was easily accessible to students on all platforms (I could also include some YouTube videos). An immediate consequence of this approach is that my materials are all publicly available online. To anyone, our students or not. The first thing I did was check with +Paul Harper, the director of the MSc course (the only course I was teaching on at the time), that this was ok.
We chatted about it a bit and were both happy to carry on. My main train of thought was that there are far better resources already available online so mine might as well be. (I've subsequently checked with our School's director of learning and teaching and there are no internal regulations against it, which is nice to know about +Cardiff University.)

There is a huge amount of talk about open access in research (I won't go into that here) but less so, to some extent, in teaching. I did find this interesting newspaper article that ponders "Why don't more academics use open educational resources?". This offers a good general discussion about open education resources. I would feel very very humbled if anyone chose to actually use my resources. I'm at the early part of my career and am still learning so I don't think that will happen anytime soon, but there is another more important benefit to having my teaching stuff online for everyone. I always post about any new courses I'm working on, on G+, and am grateful to get a fair bit of feedback from other academics around the world. This in itself gives me a certain level of confidence in front of my students, who know that what I'm teaching them is verifiable by anyone in the world. I've often changed a couple of things based on feedback by other academics and I think that's brilliant. To some extent my teaching resources are not just reviewed by a couple of peers in my university but also by anyone online who might be interested in them. (It would be great if research worked this way too.)

Through G+ (I've posted about how awesome a tool G+ is as an academic personal development tool) I learnt about git and github. If you don't know about git, this video by +Zoë Blade is very helpful: After a while I jumped in and started using it. A little while later I found out that you can use github to host a website: Using this it is really easy to put together a very basic website that has all the teaching materials.
The added benefit is that the materials are now all in a github repo, which opens them up even more (using Dropbox, only the pdf files were in general in view) whereas now everything is (md, tex source files etc...) and theoretically if anyone wanted to they could pull etc... I'm certainly not the first person to put teaching stuff up on github (watching people like +Dana Ernst, +Theron Hitchman and various others do it is what made me jump in). The github repo for my R and SAS course can be found here and here are some other teaching things I have up on github (with the corresponding webpage if I've gotten around to setting it up):

To finish off, here are the various reasons I have for putting my teaching stuff up on github:

• Openness:
  • my students know that this is viewable by everyone, which hopefully gives the resources a level of confidence;
  • people on G+ and elsewhere are able to point out improvements and fixes (if and when they have time);
• Access: the sites are really simple (basic html with links) so they can be viewed on more or less anything;
• Ease of use: I don't have to struggle to use whatever system is being used. If it's an option I kind of refuse to use stuff that makes me less efficient (an example of this is our email system: I use gmail). At the moment the system I like is github + git.

I wrote a blog post (which is, I think, the most read thing I've ever written, online or offline) showing how to combine various things like tikz, makefiles, +Sage Mathematical Software System etc to automate the process of creating a course site, so I'll put a link to that here.

## June 04, 2013

### PolyBoRi Blog

#### GSOC 2013 project for Polybori

I am Ioana-Maria Tamas and for this year's Google Summer of Code, I will be working on lmonade's project: "Binary decision diagrams for Boolean polynomial rings". The main details of the project are the ones below, but small modifications may occur.
## Abstract

Zero-suppressed binary decision diagrams are used by PolyBoRi for efficiently representing Boolean polynomials. At the moment, they are manipulated via CUDD, which is not specialized for this type of diagram and only uses C in the implementation. The goal of the project is implementing an independent library in C++ that is specialized for zero-suppressed binary decision diagrams.

## Objective

There are no major problems in the current method used to deal with decision diagrams, but implementing the new library will definitely increase PolyBoRi's usability and efficiency. The things that will be reimplemented are the reference counting, the caching management, the diagram manager and then the operators and methods from the decision diagram class.

## Deliverables

The final product will be a well-documented, well-tested and easy-to-build library that will help PolyBoRi manipulate Boolean polynomials without using CUDD in the background, and in a more efficient and specialized way.
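As a toy illustration of the central ideas the project will implement in C++ (this is not PolyBoRi's code, just a minimal Python sketch of the zero-suppression rule and the unique table that gives node sharing):

```python
# Terminals: ZERO = the empty family of sets, ONE = the family {∅}.
ZERO, ONE = "0", "1"
_unique = {}  # unique table: (var, lo, hi) -> canonical node

def node(var, lo, hi):
    """Create (or reuse) a ZDD node, applying the zero-suppression rule."""
    if hi == ZERO:      # a node whose hi-branch is the empty family is
        return lo       # redundant: collapse it to its lo-branch
    key = (var, lo, hi)
    return _unique.setdefault(key, key)  # nodes are canonical tuples here

def count(n):
    """Number of sets in the family represented by node n."""
    if n == ZERO:
        return 0
    if n == ONE:
        return 1
    _, lo, hi = n
    return count(lo) + count(hi)

# The family {{1}, {1, 2}} over variables 1 < 2:
f = node(1, ZERO, node(2, ONE, ONE))
print(count(f))  # → 2
```

The zero-suppression rule is what makes these diagrams compact for sparse set systems (and hence for Boolean polynomials, which are families of monomials); the unique table is where the reference counting and cache management mentioned above would live in a real implementation.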
## Timeline

| Time Frame | Milestone |
| --- | --- |
| 17 June - 24 June | Prepare unit tests for a wrapper class that works with CUDD in the background |
| 25 June - 5 July | Implement a C++ reference counting method |
| 6 July - 11 July | Add the new method to the wrapper and test |
| 12 July - 26 July | Implement a C++ cache management method |
| 27 July - 2 August | Add the new method to the wrapper and test; prepare for midterm evaluation |
| 3 August - 16 August | Re-implement the DdManager, DdNode and DdChild and test them |
| 17 August - 31 August | Implement a C++ ZDD class, with all the operations and methods available in CUDD |
| 1 September - 9 September | Write unit tests that are independent from CUDD |
| 10 September - 15 September | Optional: make an independent C++ library using the new implementations (with a proper build system); test everything without CUDD |
| 16 September - 23 September | Finalize documentation; prepare for final evaluation |

## June 02, 2013

### Verónica Suaste Morales

#### Submitting first patch to SAGE

I already got my trac account, and I just sent the following patch to Sage for ticket #12532: trac_12532_transformation_symbolic_vector.patch

#### Project description for GSoC 2013

Here is my proposal for the "Implementation of new decoding algorithm" project: Project Description

## May 29, 2013

### Eviatar Bach

#### Google Summer of Code introduction

I'm excited to have been chosen for Google Summer of Code 2013! I'm going to be working on Sage under the mentorship of Flavia Stan and Burcin Erocal. You can see my proposal on Google Docs. I'm going to be posting updates on the project on this blog.

## May 28, 2013

### Harald Schilly

#### Sage announces 3 GSoC projects

Sage is pleased to announce three Google Summer of Code projects for 2013. They focus on speed improvements of symbolic functions, simplifying the distribution and installation procedure on Debian/Linux, and ubiquitous accessibility of Sage on the Android platform.
# Mathematical Functions Library

Eviatar Bach – University of British Columbia in Vancouver, Canada (Mentor: Flavia Stan, Backup: Burcin Erocal)

Sage interfaces with multiple third-party libraries, such as MPFR, GSL, GP/PARI, mpmath, and Maxima, for numerical evaluation of special functions. There are significant discrepancies between these backends in the performance for numerical approximations of the same expression. An initial benchmark reveals, for example, that calculating spherical_bessel_J(1, 5.2) with SciPy is over 100 times faster than with Maxima. The project has the following goals:

1. develop a benchmark framework to determine which backend should be used by default to evaluate a special function over a specific domain,
2. create symbolic wrappers for all the special functions that can be evaluated numerically by a package included in Sage,
3. create a data structure for generalized hypergeometric functions and extend the symbolic wrappers to obtain representations in terms of generalized hypergeometric functions when possible,
4. implement closure properties for holonomic functions as a next step to improve the symbolic processing of special functions in Sage.

# Overall improvement of the Sage Android application

Rasmi Elasmar (Mentor: Volker Braun, Backup: Harald Schilly)

Although there are already some existing efforts, Sage is still not easily accessible from the Android platform. The Sage Cell client/server infrastructure is an already existing step towards running Sage on a server and communicating back the results. The aim of this proposal is to fix, improve and update the Sage Android application to include new features and functionality, as well as an improved interface for simpler and improved usability. Android's new "Holo" style, sharing of calculations and results, and much more waits to be realized on Android for Sage.
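A benchmark framework like the one in the first project's goals could start from something as simple as timing the same special function across backends. A sketch (the two "backends" here are stand-in pure-Python implementations of the spherical Bessel function j1, not Sage's actual wrappers):

```python
import math
import timeit

def j1_closed(x):
    """Spherical Bessel j1 via its closed form sin(x)/x^2 - cos(x)/x."""
    return math.sin(x) / x**2 - math.cos(x) / x

def j1_series(x, terms=30):
    """Spherical Bessel j1 via its power series (slower, illustrative)."""
    return 2 * sum((-1)**k * math.factorial(k + 1)
                   / (math.factorial(k) * math.factorial(2*k + 3))
                   * x**(2*k + 1) for k in range(terms))

# A tiny benchmark harness: time each backend on the same input.
backends = {"closed-form": j1_closed, "series": j1_series}
for name, f in backends.items():
    t = timeit.timeit(lambda: f(5.2), number=10000)
    print("%-12s j1(5.2) = %.6f  (%.4f s for 10k calls)" % (name, f(5.2), t))
```

A real framework would sweep inputs over a domain and record which backend is both accurate and fastest per region, which is exactly the "default backend per domain" decision described above.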
# Get Sage ready for Linux distributions

Felix Salfelder – Goethe Universität, Frankfurt, Germany (Mentor: Tobias Hansen, Julien Puydt, Jeroen Demeyer & John Palmieri)

The aim of this project is to detach the build process of Sage ("the software") from Sage ("the distribution"). The goal is a build system that works within the context of Sage as well as for any GNU/Linux distribution that ships the dependencies for Sage. Distributions that already ship Sage packages or plan to do so are Fedora and Debian. This project is an important step towards making Sage packages in GNU/Linux distributions feasible.

Sage warmly welcomes all three new students and wishes them all the best to learn something new and make an impact in Sage's future developments!

## May 21, 2013

### Vince Knight

#### Probability of saying 'yes' to academic responsibilities

I've just read a great post by +Adriana Salerno: Learning to say no. In that post Adriana discusses how in mathematics (and I'm sure a bunch of other/most fields) one needs a long period of uninterrupted time to work on research. She links to this Big Bang Theory clip: She also however talks about how, as an early career researcher, it's important to take opportunities for responsibilities as and when they come. This is something that rings very true to me. Growing up I played a lot of rugby and basically had a "Say yes to coach" attitude ("Vince, you're slow, run sprints" - "Yes coach", "Vince, you're going to sit on the bench this week" - "Yes coach" etc... - although I actually said "Oui Monsieur" as all my rugby was played in France, but I digress...).
I've kind of taken that attitude into the early days of my career (I'm still a 'young pup' academia-wise) but I am also very grateful for every opportunity that gets sent my way (I'm very lucky to be sitting on various committees and the editorial boards for a couple of journals, and am in the middle of preparing not 1 but 2 brand new courses, which is a great opportunity as opposed to being given other people's courses!). Having said that, as Adriana points out in her blog, it's important to find a balance so that I can also do some research. The point of this post is not to say that I've figured out how to do that but to post this xkcd style graph that I made using this package on github: XKCDify. I'm about at the point where the solid line meets the dashed line (ie the "unknown" for me). I suspect that I'm still being quite optimistic as to how low the probability of saying yes will go for me as I still generally do as I'm told and appreciate the opportunities greatly :) In Adriana's post she talks about a "research day"; I might try to be strict on that...

PS Here's another similar kind of graph that +Paul Harper (my head of research group) put together when he was actually looking back a bit on his 10 years in Academia. (If anyone's interested, here's the repo with the code I used to get that plot. I actually used +Sage Mathematical Software System's find_fit command to fit a quintic to the few points I wanted to have on there... There might be a better way to do that though...)

## May 05, 2013

### Vince Knight

#### Counting semi magic squares using generating functions and Sage.

+John Cook has written a couple of posts recently that caught my eye and they kind of start with this one: Rolling dice for normal sample: Python version. Go check it out, it's pretty cool, but basically John Cook shows how to use the Sympy package to obtain coefficients from generating functions in python.
I thought I'd do something similar but with +Sage Mathematical Software System (which is built on python and would be using Sympy on this occasion) but also on a slightly different topic: semi magic squares. A semi magic square is defined as a square matrix whose row and column sums are all equal to some constant $r$. So for example semi magic squares of size 3 will be of the form:

$\begin{pmatrix} a&b&r-a-b\\ c&d&r-c-d\\ r-a-c&r-b-d&a+b+c+d-r \end{pmatrix}$

(with $0\leq a, b, c, d, a+c, b+d, a+b, c+d \leq r \leq a+b+c+d$ and $a,b,c,d \in \mathbb{Z}$)

Using this it can be shown that $|\text{SMS}(3,r)|=\binom{r+4}{4}+\binom{r+3}{4}+\binom{r+2}{4}$ (where $\text{SMS}(n,r)$ is the set of semi magic squares of size $n$ and line sum $r$). A really nice and accessible proof of this is in this paper by Bona: A New Proof of the Formula for the Number of 3 by 3 Magic Squares. That is however not the point of this blog post :) In fact I'm just going to show the Sage code needed to enumerate the set $\text{SMS}(n,2)$. There's a nice result obtained in A combinatorial distribution problem by Anand, Dumir and Gupta in 1966 that expresses the number of elements of $\text{SMS}(n,2)$ via a generating function:

$\frac{e^{\frac{x}{2}}}{\sqrt{1-x}}=\sum_{n=0}^{\infty}|\text{SMS}(n,2)|\frac{x^n}{n!^2}$

In the rest of this post I'll now show how to use +Sage Mathematical Software System to get the number of semi magic squares of line sum 2 and size $n$. First of all we need to define the above generating function. In Sage this is easy, and as we're going to use the variable $x$ we don't need to declare it as a symbolic variable (this is the default for $x$ in Sage).
Here's the natural way of doing this:

```
sage: f(x) = e ^ (x / 2) / (sqrt(1 - x))
```

We can get a plot of $f(x)$ easily in Sage:

```
sage: plot(f(x), x, -10, 1)
```

which gives:

To obtain the Taylor series expansion (up to $x^5$) of this function we simply call:

```
sage: taylor(f(x), x, 0, 5)
69/160*x^5 + 47/96*x^4 + 7/12*x^3 + 3/4*x^2 + x + 1
```

Recalling the generating function formula above we see that $|\text{SMS}(5,2)|$ is $(5!)^2\times \frac{69}{160}=6210$. We can use usual python syntax to define a function that will return $|\text{SMS}(n,2)|$ for any $n$:

```
sage: def SMS(n):
....:     return taylor(f(x), x, 0, n).coeff(x^n) * factorial(n)^2
```

This makes use of the coeff method on the polynomials in Sage. To get $|\text{SMS}(n,2)|$ for $0\leq n \leq 9$ we can use:

```
sage: print [SMS(n) for n in range(10)]
[0, 1, 3, 21, 282, 6210, 202410, 9135630, 545007960, 41514583320]
```

If we throw that sequence into the OEIS (my best friend during my PhD) we indeed get the correct sequence: A000681 (if you don't know about the OEIS it's awesome, be sure to click one of those two links).

In my previous post today I just used pure Python to carry out simulations to estimate $\pi$ but there are certain things that I find Sage to be better suited for (like generating functions :) ). In my PhD I looked at a set of objects similar to semi magic squares called Alternating Sign Matrices. You can find some of the arguments I briefly talk about here in my thesis (the thought of someone actually reading it is terrifying). I used Mathematica throughout my PhD but (without wanting to sound like a sales pitch) if I had known about Sage (not a sponsor) at the time I would have probably used Sage.

(All the math on this page is rendered using http://www.mathjax.org/ which I'm not entirely sure I have configured correctly for +Blogger; if some of the math hasn't rendered try reloading the page...)
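Outside Sage, the same coefficients can be checked in pure Python: the generating function is the product of $e^{x/2}$ (coefficients $(1/2)^k/k!$) and $(1-x)^{-1/2}$ (coefficients $\binom{2k}{k}/4^k$), so a Cauchy product of the two series recovers the coefficient of $x^n$, and scaling by $(n!)^2$ gives $|\text{SMS}(n,2)|$. This is a sketch, not code from the post:

```python
from fractions import Fraction
from math import comb, factorial

def sms(n):
    """|SMS(n, 2)| via the Anand-Dumir-Gupta generating function
    e^(x/2) / sqrt(1 - x) = sum_n |SMS(n, 2)| x^n / (n!)^2."""
    # Series coefficients of e^(x/2) and (1 - x)^(-1/2) respectively.
    a = lambda k: Fraction(1, 2**k * factorial(k))
    b = lambda k: Fraction(comb(2 * k, k), 4**k)
    # Cauchy product gives the coefficient of x^n; scale by (n!)^2.
    coeff = sum(a(i) * b(n - i) for i in range(n + 1))
    return int(coeff * factorial(n)**2)

print([sms(n) for n in range(1, 10)])
# → [1, 3, 21, 282, 6210, 202410, 9135630, 545007960, 41514583320]
```

Exact rational arithmetic with Fraction avoids any floating point noise, so the values agree term by term with the Taylor coefficients Sage produces.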
## April 10, 2013

### PolyBoRi Blog

#### Release of PolyBoRi 0.8.3

Already some time ago we released PolyBoRi 0.8.3. Among minor fixes it improves the support of multiple Python installations, comes with consistent library version naming, and easier updating of libm4ri. Binary packages for recent OpenSuSE, Fedora and Ubuntu distributions are available at the usual place; just point your package manager there. Updated ebuild scripts for lmona.de and Gentoo-prefix will be available soon, and the updated spkg packages for Sage will be included in the next release. My best, Alexander

## April 09, 2013

### Harald Schilly

#### Sage part of Google's Summer of Code 2013

Good news: like last year, Sage is once again part of Google's Summer of Code. This means that until April 22 at 19:00 UTC students can submit their applications, and mentors will review them and do the matching. Please share this with prospective students or think about being a mentor this year! links:

## April 08, 2013

### Vince Knight

#### Makefiles for tikz sagemath and teaching notes written in markdown

So I've posted before on multiple occasions about PCUTL, which is a higher education certification process I'm currently undergoing. One of the things I paid particular attention to in my latest portfolio (a copy of which can be found at the previous link) is accessibility of teaching notes. Generally this implies making sure students have notes in a format that they can easily manipulate (for example change font size and color). This is often taken to mean that notes should be written in .doc format. As a mathematician this isn't really an option as all our teaching material really needs to be written in LaTeX.
I have spent some time being amazed by pandoc, which is a really smart piece of software that allows you to take documents in various languages and output to a variety of other languages. Here's a video I made showing how I like to use pandoc to get notes in html, pdf and .docx format (I wrote a blog post here about it): My entire SAS/R programming course I'm currently teaching was written this way (you can find the notes etc here).

I've recently started writing a new course: a game theory course. This course needs a bunch of diagrams and plots (as opposed to the previous one that basically had a bunch of screenshots). I wanted to use the same overall system but planned on using tikz for the diagrams (tikz lets you draw nice diagrams using code) and perhaps thought about using sagetex (I use +Sage Mathematical Software System a lot but have never used sagetex, which lets you have sage code inside of a LaTeX document) for the plots. After searching around for a bit I didn't see how I was going to be able to use the awesome pandoc to create teaching notes in multiple formats using the above approach. There was no way for pandoc to take the tikz code and output a separate image file for the html (and whatever it needs for .docx). This blog post is a summary of my approach, which basically uses a couple of things:

• Some tikz code that creates a standalone pdf image;
• A script that converts a pdf to a png (so that it can be used in html);
• A makefile so that I can recompile only the tikz code that needs to be recompiled.

That is how I get my tikz diagrams. I also have a similar system to create plots from sage, and I'll also show the makefile I use to get pandoc to sort out the files I want.

# Creating standalone pngs with tikz and makefiles

To create a standalone tikz image you need to use the standalone document class.
Once you've done that you can just use normal tikz code:

```latex
\documentclass[tikz]{standalone}
\usepackage{tikz,amsmath}
\tikzstyle{end} = [circle, minimum width=3pt, fill, inner sep=0pt, right]
\begin{document}
\begin{tikzpicture}
    \draw (0,0) -- (4,0) node [below] {$u_1$};
    \draw (0,0) -- (0,4) node [left] {$u_2$};
    \draw (3,0) node[end] (A) {} node [above=.3cm,right] {$(3,0)$};
    \draw (0,3) node[end] (B) {} node [above=.3cm,right] {$(0,3)$};
    \draw (1,1) node[end] (C) {} node [below,left] {$(1,1)$};
    \draw (2,2) node[end] (D) {} node [above=.3cm,right] {$(2,2)$};
    \draw (A) -- (D);
    \draw (D) -- (B);
    \draw (B) -- (C);
    \draw (C) -- (A);
    \draw [dashed,thick] (C) -- ++(0,1.5);
    \draw [dashed,thick] (C) -- ++(1.5,0);
    \draw [->] (2,3.5) node[above] {\tiny{Feasible average payoffs}} -- (.75,2);
    \draw [->] (4,1.5) node[above] {\tiny{Individually rational payoffs}} -- (1.5,1.25);
\end{tikzpicture}
\end{document}
```

The above for example creates the following standalone image (note that if you pdflatex it you obviously get a pdf, which would be great if you were just pdflatex'ing a bigger document, but I'll get to creating a png next):

I've then written a short python script (I'm sure this could be done as easily in bash but I'm just more comfortable in python) that can act on any given tex file to convert it to png:

```python
#!/usr/bin/env python
from sys import argv
from os import system

system("pdflatex %s" % (argv[1]))
system("convert -density 300 %s.pdf %s.png" % (argv[1][:-4], argv[1][:-4]))
```

This makes use of convert, which is a program that can convert pdf to png (more info here, but it's natively on Mac OS and *nix I believe).
I call that python file convert_tex_to_png.py and so I can convert any tex file by doing:

```
python convert_tex_to_png.py file.tex
```

I'm going to put all these image files (tex and png) in a folder called images, and there's (now that I'm done) about 70 image files in there, so every time I change or write a new tex file I don't want to recompile all of the files, nor do I want to waste time typing out the compilation commands for the particular file I've changed. So I've used makefiles. Now I've tried to use makefiles before but in all honesty they confuse the heck out of me. I spent a while going through a bunch of tutorials but most of them look at c code. For those who don't know, a makefile is basically a short program that gives instructions to compile certain files. When written well these are great as by simply typing make you can recompile any file that has been modified (and not necessarily all of them!). So here's the makefile I've converged to using (after heuristically trying a bunch of stuff from stackoverflow):

```makefile
tex = $(wildcard *.tex)
pngs = $(tex:%.tex=%.png)

all: $(pngs)

%.png: %.tex
	./convert_tex_to_png.py $<;

clean:
	rm *.aux
	rm *.log
	rm *.pdf
```

I save the above file calling it makefile so that it is run by default when calling make. This will take all the tex files (that have been modified) and output the corresponding png files as required. Note I can also type make clean to remove all auxiliary files (aux, log and pdf). I find this really useful as I can modify a bunch of tex files and once I'm ready simply run make to get all the pngs as required. I'll briefly show the (basically the same) code I use to get plots from sage as standalone pictures.

# Creating standalone pngs with sage and makefiles

I put all the plots created via sage in a plots directory.
Here's an example graph created with sage: The sage code to do that is here:

```python
f(a,b) = a^3/3 + 3/4*b^2 + (1-a-b)^2/2
p = contour_plot(f, (0,.5), (0,.5), colorbar=True,
                 axes_labels=["$\\alpha$","$\\beta$"],
                 contours=100, cmap='coolwarm', fill=True)
p.axes_labels(['$\\alpha$','$\\beta$'])
p.save("L18-plot01.png")
```

The makefile (which will keep all png files up to date based on the sage files in the directory):

```makefile
sage = $(wildcard *.sage)
pngs = $(sage:%.sage=%.png)

all: $(pngs)

%.png: %.sage
	sage $<;

clean:
	rm *.py
```

(I can similarly use make clean to remove the .py files that get created when running sage in cli). The final thing I'll show is how to write a makefile to get pandoc to output the required file formats.

# Using a makefile with pandoc

So I write all of my notes in markdown. For various reasons which include:

• Pandoc likes markdown as input (it can also handle tex and html though);
• I like markdown;
• I can incorporate latex in markdown when needed;
• If a student ever wanted to modify the markdown but didn't know LaTeX they'd have an honest chance of knowing what was going on (if you don't know markdown it really is worth taking a look at this short video).

Thus my makefile must be able to compile all modified markdown files. I use a similar approach to the one I had for tikz diagrams, which is to write a short python script:

```python
#!/usr/bin/env python
from sys import argv
from os import system

e = argv[1][:-3]
print e
system("pandoc -s " + e + ".md -N -o " + e + ".html --mathjax")
system("pandoc " + e + ".md -o " + e + ".docx")
system("pandoc " + e + ".md -N -o " + e + ".pdf --latex-engine=xelatex")
```

(pandocipy_given_file.py is the name of the above python script). There are basically three pandoc commands going on there. The first creates the html with the LaTeX math being rendered by mathjax, the second creates the .docx format and the last one creates the pdf (using the xelatex engine).
The final piece of the puzzle is to use the following makefile:

```makefile
md = $(wildcard *.md)
htmls = $(md:%.md=%.html)

all: $(htmls)

%.html: %.md
	./pandocipy_given_file.py $<;
```

The way that makefiles work (I think) is to indicate what files you want as an output (and what those files are dependent on). So for example my previous makefiles wanted to output pngs with tex and sage files as dependencies. Here the required output is the html files and the dependencies are the md files. Using the above I can change any md file and simply type make to get all the pdf, html and docx files updated.

# Summary and a couple of things I don't want to forget

I have 3 directories here. The first, Course_notes, contains the md files and the corresponding python script and makefile. Course_notes also contains 2 other directories, images and plots, which contain the images and plots as I've described above. So in the md files I refer to an image as so:

```
![A pic for C1](images/C01-img03.png)
```

I'm using pdf as the prescribed format of the notes for the course (students are welcome to use html and docx if they wish but I'm recommending pdf) and in general xelatex will sometimes move images around to better format the text. To make sure things don't get confusing I want to be able to \ref and \label images.
The easiest way I've found to do this without messing around with the html (who knows what happens with docx, life is too short to worry about everything) is to use \text{}:

```
In the excellently drawn image shown\text{ in Figure \ref{C01-img03}} we see that...

![A pic for C1\label{C01-img03}](images/C01-img03.png)
```

Pandoc is clever enough to simply ignore what is in the \text when converting to html and docx, so you get documents in html and docx that won't have a reference to the figure (but in general they won't reorder the images) and the pdf will have the figures referenced, as you can see in these screenshots (showing pdf and html):

I've mainly posted this to make sure I remember how to do it but it might be helpful for others. I'm sure it could be improved by getting rid of my python scripts and having everything in the makefile. I tried that but had a hard time getting wildcards to work as I did, so I didn't try anything more complicated. Finally, there are ways to get makefiles to run makefiles in subdirectories, so it would be nice I guess to also do that so that a single make in the highest directory would sort out everything. I'll be putting all the teaching notes up on github so will link to that repository when I've done it.

EDIT: Here's the github repo and here's a website with all the content (which will eventually become the class website).

ANOTHER EDIT: Here's a follow up post which uses sed to keep the links in the right format.
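The recursive make wished for at the end could look something like a top-level makefile in Course_notes along these lines (a sketch only: the images and plots directory names follow the post, while the pandoc.mk name for the notes makefile is an assumption):

```makefile
SUBDIRS = images plots

# Rebuild the images and plots first (each directory has its own
# makefile, as described above), then rebuild the notes themselves
# via the pandoc makefile, assumed here to be named pandoc.mk.
all:
	for d in $(SUBDIRS); do $(MAKE) -C $$d; done
	$(MAKE) -f pandoc.mk

.PHONY: all
```

`$(MAKE) -C dir` is the standard idiom for recursive make; with this in place a single make at the top level keeps the pngs and the pdf/html/docx outputs all up to date.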
## March 24, 2013

### Vince Knight

#### Open Source and/or Free Game Theory Software

Last week I posted some python code I've been putting together:

• The github repo is here
• A website with a user guide and some videos is here

The code is currently a collection of 4 tools that can solve 4 types of game theoretical problems:

• Shap.py, which calculates the Shapley value in a cooperative game;
• Gale_Shapley.py, an implementation of the extended Gale-Shapley algorithm for matching games;
• lrs_nash.py, a (clumsy) python wrapper for the lrs library which can be used to solve normal form games (if anyone wants to contribute to how I speak to the c library please do! I sadly know pretty much no c yet...);
• Abm.py, which can be used to carry out agent based models of normal form games (to see emergent behaviour).

Here's a screencast demonstrating that last program:

The point of this blog post is to list some of the other pieces of software available that can be used in Game Theory.

# List of game theory tools

1. The lrs library

What it does: This is a self-contained ANSI C implementation, as a callable library, of the reverse search algorithm for vertex enumeration/convex hull problems and comes with a choice of three arithmetic packages. Basically it's a really smart package that does a lot of things. One of those things is solving normal form games.

Ease of use: Pretty straightforward on a *nix machine. Download a tar file and make the contents of the unpacked directory. To use the package you simply pass it a couple of text files as arguments.

2. The banach.lse.ac.uk website

What it does: This is actually just an online interface making use of the above mentioned lrs library to solve normal form games.

Ease of use: Very very easy. Just point and click and enter your normal form game.

As far as I can tell that website was put together by +Rahul Savani, who is actually connected to the next item on this list.

3. The Gambit library

What it does: As a disclaimer, I haven't actually used Gambit yet. Here's the blurb from the website: Gambit is a library of game theory software and tools for the construction and analysis of finite extensive and strategic games. Gambit is designed to be portable across platforms, and runs on Linux, Mac OS X, and Windows. Reading through the website it looks like it can be used to solve normal and extensive form games. Development seems fairly active with various suggestions (including an implementation of the lrs library) for contributions. It basically looks like a C++ library with a nice GUI also available.

Ease of use: Can't say :) When I get a moment I'm sure I'll try it out...

4. My Sage interact for 2 by 2 games

What it does: If you're not familiar with +Sage Mathematical Software System I can't recommend it enough. It's an open source mathematics package (like Maple, Mathematica etc). It's a very powerful package with a very active community of developers and users. One of the neat tools is the interact website which lets you put up small bits of code online for anyone to use. This is one of those small bits of code, which can be used to solve any 2 by 2 game ("any" is a bit of a lie as someone has pointed out that it crashes for some sets of inputs). It's very basic and solves the game algebraically. I actually put this together as a teaching tool (interacts are great for that) and here's the screencast I shared with my students:

Ease of use: Pretty easy, it's again a simple point and click interface available in a browser, but it is very limited: it just solves 2 by 2 games.

5. William Spaniel's calculator

What it does: I've only played with this for a couple of minutes so I won't comment too much, but it looks like a solver for 2 by 2 games implemented in VBA for Excel. It also looks at the repeated game aspect a bit, looking at whether or not a game is a Prisoner's Dilemma and what discount factor would be needed to induce cooperation in an infinitely repeated game context.

Ease of use: On a personal level I don't really like opening up an Excel file unless I have to (as it usually starts up the various updates and what not) and this also won't run on linux (unless using wine etc...). Having said that, it does do what it says on the tin. The VBA seems to be nicely written so that all the results are just shown immediately without the need to run any macros.

6. My Sage interact for the Shapley value

What it does: This is another code snippet on the +Sage Mathematical Software System interact site which I wrote to accompany a screencast I did for my students showing how to calculate the Shapley value for a cooperative game (note that there's a small error at 3:51):

Ease of use: This is again pretty easy to use. It's a point and click interface in a web browser where one simply gives the number of players in the game as well as the characteristic function.

7. The Gametheory.net list

A list of (mainly Java) programs. Mostly normal form games (some of which are constrained to zero sum games). I haven't tried many of these but I'll mention this one, which is written in javascript and lets you play (and find equilibria for) zero sum games of size up to 5 by 5.

8. Some github repos

A quick search seems to identify two repos on github that are connected to game theory (I restricted my search to python). Both these repos seem concerned with normal form games but I have not used them.

# Conclusions

There is a fair bit of stuff out there for solving normal form games (and the screenshots of Gambit's GUI seem to show a nice ability to handle extensive form games). For larger scale games it seems that something that makes use of the lrs library is the best bet.
There does not seem to be much in the form of agent based models and/or other types of games (such as the Gale-Shapley algorithm, or code to handle the calculation of the Shapley value in cooperative games, apart from my humble contributions). In the gametheory.net list there is a broken link to something that supposedly would calculate voting indices.

Everything that is out there is pretty easy to use though, and there's a lot of open source stuff that people could contribute to :)

On a personal level I'm hoping to continue to develop my packages, and I would really like to implement them in +Sage Mathematical Software System at some point, if only to have them available as a great and easy to access teaching tool.

I'm sure I must have missed some so I apologise for that (and would love to know about them).

(Here's another game theory related list: A reading list)

## March 19, 2013

### Harald Schilly

#### Sage 5.8

Sage 5.8 has been released, download it here. Also, the documentation page has been updated to reflect the new thematic tutorials. I highly recommend browsing through all those topics to learn more about Sage's ever increasing scope.

## March 06, 2013

### David Joyner

#### Paley graphs in Sage

Let $$q$$ be a prime power such that $$q \equiv 1 \pmod{4}$$. Note that this implies that the unique finite field of order $$q$$, $$GF(q)$$, has a square root of $$-1$$. Now let $$V = GF(q)$$ and $$E = \{(a,b) : a - b \in (GF(q)^\times)^2\}$$. By hypothesis, $$a - b \in (GF(q)^\times)^2$$ if and only if $$b - a \in (GF(q)^\times)^2$$. By definition, $$(V, E)$$ is the Paley graph of order $$q$$. Paley was a brilliant mathematician who died tragically at […]

### Marshall Hampton

#### The Singular Value Decomposition and Congressional Voting

I am teaching a class about the SVD (Singular Value Decomposition) of a matrix this week. I was inspired by a nice article of Carla Martin and Mason Porter, "The extraordinary SVD", to compute the SVD of the voting record of the 112th Congress (House of Representatives) to show to my class. If you're interested in how this is done, here is the Sage code I used as a Sage worksheet.
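For readers who want the gist of the computation without the worksheet, here is a minimal sketch using NumPy. The vote matrix below is invented purely for illustration (the real post uses the 112th Congress roll-call data):

```python
import numpy as np

# Toy vote matrix: rows are legislators, columns are roll calls;
# +1 = yea, -1 = nay, 0 = absent.  These entries are made up.
votes = np.array([
    [ 1,  1, -1, -1,  1],
    [ 1,  1, -1, -1, -1],
    [-1, -1,  1,  1, -1],
    [-1, -1,  1,  1,  1],
], dtype=float)

# Thin SVD: votes = U @ diag(s) @ Vt, with singular values s
# returned in decreasing order.
U, s, Vt = np.linalg.svd(votes, full_matrices=False)

# Project each legislator onto the first two singular vectors;
# plotting these 2D coordinates typically separates voting blocs.
coords = U[:, :2] * s[:2]
```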
Here is the projection onto the first two singular vectors:

## February 10, 2013

### Sébastien Labbé

#### Using sage in the new ipython notebook

Ticket #12719 (Upgrade to IPython 0.13) was merged into sage-5.7.beta1. This took a lot of energy (see the number of comments in the ticket and especially the number of patches and dependencies). Big thanks to Volker Braun, Mike Hansen, Jason Grout and Jeroen Demeyer, who have worked on this for almost a year. Note that in December 2012 the IPython project received a $1.15M grant from the Alfred P. Sloan Foundation, which will support IPython development for the next two years. I really like the IPython sage command line interface, so this is really good news!

The IPython notebook

Since version 0.12 (December 21, 2011), IPython has been released with its own notebook. The differences with the Sage Notebook are explained by Fernando Perez, leader of the IPython project, in the blog post The IPython notebook: a historical retrospective, which he wrote in January 2012. One of the differences is that the IPython Notebook runs in its own directory, whereas each cell of the Sage Notebook lives in its own directory. As William Stein says in the presentation Browser-based notebook interfaces for mathematical software - past, present and future, which he gave last December at ICERM, there are plenty of projects and directions these days for those interfaces.

In May 2012, I tested the same ticket, which at that time was an upgrade to IPython 0.12. Today, I was curious to test it again.

First, I installed sage-5.7.beta4:

./sage -version
Sage Version 5.7.beta4, Release Date: 2013-02-09


./sage -sh


Install zeromq and pyzmq:

./sage -i zeromq-2.2.0.p0
./sage -i pyzmq-2.1.11.p1


Start the ipython notebook:

./sage -ipython notebook
[NotebookApp] Using existing profile dir: u'/Users/slabbe/.sage/ipython-0.12/profile_default'
[NotebookApp] Serving notebooks from /Users/slabbe/Applications/sage-5.7.beta4
[NotebookApp] The IPython Notebook is running at: http://127.0.0.1:8888/
[NotebookApp] Use Control-C to stop this server and shut down all kernels.


Create a new notebook. One may use sage commands by adding the line from sage.all import * in the first cell.

The next things I want to look at are:

• Test the conversion of files from .py to .ipynb.
• Test the conversion of files from .rst to .ipynb.
• Test the qtconsole.
• Test the parallel computing capacities of IPython.

## January 28, 2013

### Sébastien Labbé

#### Understanding Python class inheritance and Sage coding convention with fruits

Since Sage Days 20 at Marseille held in January 2010, I have been doing the same example over and over again each time I showed someone else how object oriented coding works in Python: using fruits, strawberry, oranges and bananas.

Here is my file: fruit.py. I usually build it from scratch, adding one line at a time and using the attach command to see what has changed, starting with the Banana class, then the Strawberry class, then the Fruit class which gathers all the common methods.

This time, I wrote the complete documentation (all tests pass, coverage is 100%) and followed the Sage coding conventions as far as I know them. Thus, I hope this file can be useful as an example for explaining those coding conventions to newcomers.
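The fruit.py file itself is not reproduced here, but the pattern it illustrates, a base class gathering the common methods with subclasses adding their own specifics, can be sketched in plain Python. The class and method names below are my guesses for illustration, not necessarily those of the real file:

```python
class Fruit:
    """Base class gathering the methods common to all fruits."""

    def __init__(self, weight):
        self._weight = weight

    def weight(self):
        """Return the weight of the fruit, in grams."""
        return self._weight

    def __repr__(self):
        # type(self).__name__ makes the message correct in every subclass.
        return "A {} weighing {} grams".format(
            type(self).__name__.lower(), self._weight)


class Banana(Fruit):
    """A banana; inherits weight() and __repr__ from Fruit."""

    def is_curved(self):
        return True


class Strawberry(Fruit):
    """A strawberry; gives its own answer to the common question."""

    def is_curved(self):
        return False
```

For instance, `repr(Banana(120))` gives `'A banana weighing 120 grams'` without Banana defining `__repr__` itself: that method is inherited from Fruit.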

One may check that all tests pass using:

$ sage -t fruit.py
sage -t  "fruit.py"                          [3.7 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 3.8 seconds


One may check that documentation and doctest coverage is 100%:

$ sage -coverage fruit.py
----------------------------------------------------------------------
fruit.py
SCORE fruit.py: 100% (10 of 10)
----------------------------------------------------------------------


## January 21, 2013

### Sébastien Labbé

#### Python function vs Symbolic function in Sage

This message is about differences between a Python function and a symbolic function. This is also explained in the Some Common Issues with Functions page in the Sage Tutorial.

In Sage, one may define a symbolic function like:

sage: f(x) = x^2-1


And draw it using one of the following ways (both work):

sage: plot(f, (x,-10,10))
sage: plot(f(x), (x,-10,10))


Here both f and f(x) are symbolic expressions:

sage: type(f)
<type 'sage.symbolic.expression.Expression'>
sage: type(f(x))
<type 'sage.symbolic.expression.Expression'>


although they are different:

sage: f
x |--> x^2 - 1
sage: f(x)
x^2 - 1


Now if f is a Python function defined with a def statement:

sage: def f(x):
....:     if x>0:
....:         return x
....:     else:
....:         return 0


It is really a Python function:

sage: f
<function f at 0xb933470>
sage: type(f)
<type 'function'>


As above, one can draw the function f:

sage: plot(f, (x,-10,10))


But be careful: drawing f(x) will not work as expected:

sage: plot(f(x), (x,-10,10))


Why? Because the Python function f gets evaluated on the symbolic variable x, and this may either raise an exception (depending on the definition of f) or return a result which might not be a symbolic expression. Here f(x) always evaluates to zero because, in the definition of f, bool(x > 0) returns False:

sage: x
x
sage: bool(x > 0)
False
sage: f(x)
0


Hence the following constant function is drawn:

sage: plot(0, (x,-10,10))


which is not what we want.
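The pitfall is purely about when the function is evaluated, so it can be reproduced without any symbolic machinery. Here is a pure-Python sketch, where `sample` is a hypothetical stand-in for the sampling that plot performs, and calling `f(-1)` plays the role of the too-early evaluation of `f(x)`:

```python
def f(x):
    if x > 0:
        return x
    else:
        return 0

def sample(g, xs):
    """Stand-in for what plot does: call g on each sample point."""
    return [g(t) for t in xs]

xs = [-2, -1, 0, 1, 2]

# Passing the function itself: f is called on each point, as intended.
piecewise = sample(f, xs)          # [0, 0, 0, 1, 2]

# Passing the result of a call evaluates f once, too early, and the
# "plot" only ever sees the resulting constant.
const = f(-1)                      # 0
flat = sample(lambda t: const, xs) # [0, 0, 0, 0, 0]
```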

## January 13, 2013

### Vince Knight

#### Some analysis of the game shut the box

This Christmas, +Zoe Prytherch and I got my mother a small board game called: "Shut the Box" (also known as "Tric-Trac", "Canoga", "Klackers", "Zoltan Box", "Batten Down the Hatches", or "High Rollers" according to the wiki page). It's a neat little game with all sorts of underlying mathematics.
Here's a picture of us playing:

Here's a close up of the box:

A good description of the game can be found on the wiki page but basically the game can be described as follows:
• It is a Solitaire game that can be played in turns and scores compared (but no strategies arise due to interactions of players).
• The aim is to "shut" as many tiles (each being one of the digits from 1 to 9) as possible.
• On any given go, a player rolls two dice, or may choose to roll a single die if the tiles 7, 8 and 9 are "shut".
• The sum total of the dice roll indicates the sum total of tiles that must be "shut". If I roll 11, I could put down [2,9],[3,8],[1,2,8] etc...
• The game ends when a player can't "shut" tiles corresponding to the dice roll (including the case when all tiles are "shut").
Here's a plot of a game we played with a few of my mother's friends:

You can see that Jean and I did fairly poorly while Zoë made the difference just at the end to win :)

The game could be modelled as a Markov decision process but over the past few days I decided to code it in Sage and investigate whether or not I could find out anything cool with regards to strategies. If you're not aware of Sage I thoroughly recommend you taking a look, it's an awesome open source mathematics package. In this instance I used the Partitions function to rapidly obtain various combinations of dice rolls and/or tile options to get the results I wanted.

The code is all in a github repo and I feel I've written an ok README file explaining how it works so if you're interested in playing with it please do! The code basically has two parts, the first allows you to play using the program instead of a game board (with prompts telling you what your available plays are).

Here's a quick screencast of me demonstrating some of the code:

The second part of the code allows for strategies to be written that play the game automatically ("autoplay"). I've considered 4 strategies:
• Random (Just randomly pick any potential tile combination)
• Shortest (Choose the tile combination that has shortest length: i.e. [3] instead of [1,2] ~ in case of a tie pick randomly)
• Longest (Choose the tile combination that has longest length: i.e. [1,2] instead of [3] ~ in case of a tie pick randomly)
• Greedy (Choose the tile combination that gives the best possible chance of the game not ending on the next go ~ calculated by summing the probabilities of the dice totals that could still be played after the move)
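The greedy calculation can be sketched as follows. This is an illustrative reimplementation rather than the code from the repo, and for simplicity it ignores the single-die option when 7, 8 and 9 are shut:

```python
from fractions import Fraction
from itertools import combinations

def shutting_options(open_tiles, total):
    """All subsets of the open tiles summing to the dice total."""
    return [c for r in range(1, len(open_tiles) + 1)
            for c in combinations(open_tiles, r) if sum(c) == total]

def survival_probability(open_tiles):
    """Probability that the next two-dice roll can still be played."""
    good = sum(1 for d1 in range(1, 7) for d2 in range(1, 7)
               if shutting_options(open_tiles, d1 + d2))
    return Fraction(good, 36)

def greedy_move(open_tiles, roll):
    """Shut the combination leaving the best chance of surviving."""
    options = shutting_options(open_tiles, roll)
    if not options:
        return None  # no playable combination: the game is over
    return max(options, key=lambda c: survival_probability(
        tuple(t for t in open_tiles if t not in c)))

tiles = tuple(range(1, 10))
greedy_move(tiles, 11)  # picks the 11-summing combination with the best survival chance
```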
I've been leaving my computer to run the code in the background for a while now, so here are some early results. I've got over 90000 instances for Greedy and Random, with over 40000 instances for the other two, which I thought of later (EDIT: I now have over 400000 instances...). I've collected more data since writing this (as and when one of my home boxes was idle) and the file is available for anyone to play with here. If I've made any mistakes with my analysis, I'd love to hear about it!

First of all, let's look at the average scores:

Greedy_Score   Random_Score     Longest_Score   Shortest_Score
Min.   : 0.00  Min.   : 0.00    Min.   : 0.00   Min.   : 0.00
1st Qu.: 6.00  1st Qu.:14.00    1st Qu.:18.00   1st Qu.: 7.00
Median :11.00  Median :21.00    Median :24.00   Median :11.00
Mean   :11.42  Mean   :20.45    Mean   :24.16   Mean   :12.06
3rd Qu.:16.00  3rd Qu.:27.00    3rd Qu.:30.00   3rd Qu.:17.00
Max.   :43.00  Max.   :43.00    Max.   :43.00   Max.   :43.00

Shortest and Greedy seem to do a fair bit better than the other two. If we take a look at the distribution of the scores, that is confirmed:

If we also take a look at the length of the game (i.e. how many total dice rolls there were):

Greedy_Length  Random_Length  Longest_Length  Shortest_Length
Min.   :1.000  Min.   :1.000  Min.   :1.00    Min.   :1.00
1st Qu.:4.000  1st Qu.:3.000  1st Qu.:2.00    1st Qu.:4.00
Median :5.000  Median :3.000  Median :3.00    Median :5.00
Mean   :4.897  Mean   :3.414  Mean   :2.86    Mean   :4.82
3rd Qu.:6.000  3rd Qu.:4.000  3rd Qu.:4.00    3rd Qu.:6.00
Max.   :9.000  Max.   :9.000  Max.   :8.00    Max.   :9.00

We see that games last longer when using Greedy and Shortest. If we look at the distribution we see (I've updated this graph since first publishing this post):

It looks like Greedy might allow for slightly longer games than Shortest. Whether or not playing Shortest or Greedy is actually any different seems interesting. A simple analysis of variance (ANOVA) seems to indicate that there is a significant effect:
                  Df  Sum Sq Mean Sq F value  Pr(>F)
stbaov$Method      1   11337   11337   196.3  <2e-16
Residuals     127169 7345725      58

This is all experimental of course, and analysing whether or not my suggested greedy strategy is indeed the optimal strategy would require some Markov decision process modelling. I might look into this with an undergraduate project student next year.

The tricky part about using the greedy method whilst actually playing the game with friends is that it requires a fair bit of calculation. Nothing that Sage can't handle in a split second, but something that could keep people waiting a fair bit if you were to work it out with pen and paper. Despite the statistically significant difference between the methods, this analysis seems to indicate that choosing the shortest set of tiles is a pretty good strategy, so if in doubt I recommend choosing that...

If anyone cares enough about all this to fork the code or suggest a different strategy I'd be delighted :) In case you missed it above, here's the github repo: https://github.com/drvinceknight/Shut_The_Box

(On a technical note, some of the pre-analysis is done using Python and the actual graphs seen here are done using R.)

## December 23, 2012

### Lee Worden

#### WorkingWiki + sage + adaptive dynamics demo

Publish Date: Sat, 12/22/2012 - 23:21

I didn't mean it as a demo - I did it to get some work done. But now I have a wiki page that demonstrates using Sage to do symbolic mathematics in a WorkingWiki page. It also demonstrates how awesome Sage is - in a remarkably short few bursts of code I've managed to create differential equation objects that can evaluate and plot themselves, have Sage do the timescale separation to construct the MacArthur-Levins competition model from a resource-uptake model, and use Sage's differentiation chops to construct the adaptive dynamics of the Mac-Lev model from the model itself. That's right - it generates the adaptive dynamics from the population dynamics, not me!
It then confirms that evolution reduces competition (in the one case I tried), rather than increasing it. Here's the link, by the way: http://lalashan.mcmaster.ca/theobio/worden/index.php/Selection_Gradients...

## December 20, 2012

### Sébastien Labbé

#### Percolation and self-avoiding walks

Today, I am presenting Chapter 3 of the book Probability on Graphs by Geoffrey Grimmett during a monthly reading seminar at LIAFA. The title of the chapter is Percolation and self-avoiding walks. I did some computations to improve my intuition on the question. My code is in the following file: bond_percolation.sage. This post is about some of my computations. You might want to test them yourself online using the Sage Cell Server.

# Basic Definitions

Let $$\mathbb{L}^d=(\mathbb{Z}^d,\mathbb{E}^d)$$ be the hypercubic lattice. Let $$p\in[0,1]$$. Each edge $$e\in \mathbb{E}^d$$ is designated either open with probability $$p$$, or closed otherwise, different edges receiving independent states. For $$x,y\in \mathbb{Z}^d$$, we write $$x \leftrightarrow y$$ if there exists an open path joining $$x$$ and $$y$$. For $$x\in \mathbb{Z}^d$$, we consider the open cluster $$C_x$$ containing $$x$$:

$C_x = \{y \in \mathbb{Z}^d : x \leftrightarrow y \}.$

The percolation probability $$\Theta(p)$$ is given by

$\Theta(p) = P_p(\vert C_0\vert=\infty).$

Finally, the critical probability is defined as

$p_c = \sup\{p : \Theta(p) = 0 \}.$

The question is to compute $$p_c$$. Results in the chapter give lower and upper bounds for $$p_c$$. Many problems are still open, like the one claiming that $$\Theta(p_c) = 0$$ for all $$d\geq 2$$: it is known only for $$d=2$$ and $$d\geq 19$$ according to a remark in the chapter.
# Some samples when p=0.5

A bond percolation sample inside the box $$\Lambda(m)=[-m,m]^d$$ when $$p=0.5$$ and $$d=2$$:

sage: S = BondPercolationSample(p=0.5, d=2)
sage: S.plot(m=40, pointsize=10, thickness=1)
Graphics object consisting of 7993 graphics primitives
sage: _.show()

Running it another time gives something different:

sage: S = BondPercolationSample(p=0.5, d=2)
sage: S.plot(m=40, pointsize=10, thickness=1)
Graphics object consisting of 10176 graphics primitives
sage: _.show()

# Some samples for ranges of values of p

From p=0.1 to p=0.9:

sage: percolation_graphics_array(srange(0.1,1,0.1), d=2, m=5)

From p=0.41 to p=0.49:

sage: percolation_graphics_array(srange(0.41,0.50,0.01), d=2, m=5)

From p=0.51 to p=0.59:

sage: percolation_graphics_array(srange(0.51,0.60,0.01), d=2, m=5)

# Upper bound and lower bound for the percolation probability $$\Theta(p)$$

In every case, we have the following upper bound for the percolation probability:

$\Theta(p) = \mathbb{P}_p(\vert C_0\vert=\infty) \leq \mathbb{P}_p(\vert C_0\vert > 1) = 1 - \mathbb{P}_p(\vert C_0\vert = 1) = 1 - (1-p)^{2d}.$

In particular, if $$p\neq 1$$, then $$\Theta(p)<1$$. In Sage, define the upper bound:

sage: p,n = var('p,n')
sage: d = var('d')
sage: upper_bound = 1 - (1-p)^(2*d)

Also, from Equation (3.8), we have the following lower bound:

$\Theta(p) \geq 1 - \sum_{n=4}^{\infty} n (4(1-p))^n.$

In Sage, define the lower bound:

sage: p,n = var('p,n')
sage: lower_bound = 1 - sum(n*(4*(1-p))^n,n,4,oo)
sage: lower_bound.factor()
-(3072*p^5 - 14336*p^4 + 26624*p^3 - 24592*p^2 + 11288*p - 2057)/(4*p - 3)^2

This is not defined when $$p=3/4$$, but we are interested in the values in the interval $$]3/4,1]$$.
In particular, for which value of $$p$$ is this lower bound strictly larger than zero?

sage: root = lower_bound.find_root(0.76, 0.99); root
0.8639366490304586

Let's now draw a graph of the lower and upper bounds:

sage: U = plot(upper_bound(d=2),(0,1),color='red', thickness=3)
sage: L = plot(lower_bound,(0.86,1),color='green', thickness=3)
sage: G = U + L
sage: G += point((root, 0), color='red', size=20)
sage: lower = r"$1-\sum_{n=4}^{\infty} n4^n(1-p)^n$"
sage: upper = r"$1 -(1-p)^{4}$"
sage: title = r"$1-\sum_{n=4}^{\infty} n4^n(1-p)^n\leq\Theta(p)\leq 1 -(1-p)^{2d}$"
sage: G += text(title, (.5, 1.05), color='black', fontsize=15)
sage: G += text(upper, (.3, 0.5), color='red', fontsize=20)
sage: G += text(lower, (.7, 0.5), color='green', fontsize=20)
sage: G += text("%.5f"%root,(0.88, .03), color='green', horizontal_alignment='left')
sage: G.show()


Thus we conclude that $$\Theta(p) >0$$ for $$p>0.8639$$ and thus $$p_c \leq 0.8639$$.

# Percolation probability - dimension 2

The code allows one to define the percolation probability function for a given dimension d. It generates n samples and considers a cluster to be infinite if its cardinality is larger than the given stop value.

Here we use Sage's adaptive recursion algorithm for drawing the plot of the percolation probability, which finds the particularly important intervals in which to ask for more values of the function. See the help section of the plot function for details. Because T might take a long time to compute, we start with only 4 points.
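The bond_percolation.sage file is not reproduced here, but the idea behind PercolationProbability can be sketched in plain Python: grow the open cluster of the origin by breadth-first search, sampling edge states lazily, and declare the cluster infinite once it reaches stop sites. The function names below are mine, not the ones from the file:

```python
import random
from collections import deque

def cluster_size(p, d=2, stop=100, seed=None):
    """Size of the open cluster of the origin in Z^d bond percolation,
    capped at `stop`: a cluster reaching `stop` sites is treated as
    infinite."""
    rng = random.Random(seed)
    edges = {}  # edge states, sampled lazily as the cluster grows

    def is_open(x, y):
        e = (min(x, y), max(x, y))
        if e not in edges:
            edges[e] = rng.random() < p
        return edges[e]

    origin = (0,) * d
    cluster = {origin}
    queue = deque([origin])
    while queue and len(cluster) < stop:
        x = queue.popleft()
        for i in range(d):
            for step in (1, -1):
                y = x[:i] + (x[i] + step,) + x[i + 1:]
                if y not in cluster and is_open(x, y):
                    cluster.add(y)
                    queue.append(y)
    return len(cluster)

def theta_estimate(p, d=2, n=100, stop=100, seed=0):
    """Monte Carlo estimate of the percolation probability Theta(p):
    the fraction of n samples whose origin cluster reaches `stop`."""
    hits = sum(cluster_size(p, d, stop, seed + k) >= stop
               for k in range(n))
    return hits / n
```

For small n and stop the estimate is noisy, but it already shows Theta jumping from near 0 to near 1 around the critical probability.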

When stop=100:

sage: T = PercolationProbability(d=2, n=10, stop=100)


When stop=1000:

sage: T = PercolationProbability(d=2, n=10, stop=1000)


When stop=2000:

sage: T = PercolationProbability(d=2, n=10, stop=2000)


# Percolation probability - dimension 3

When stop=100:

sage: T = PercolationProbability(d=3, n=10, stop=100)


When stop=1000:

sage: T = PercolationProbability(d=3, n=10, stop=1000)


# Percolation probability - dimension 4

When stop=100:

sage: T = PercolationProbability(d=4, n=10, stop=100)


# Percolation probability - dimension 13

When stop=100:

sage: T = PercolationProbability(d=13, n=10, stop=100)


# Theorem 3.2

Theorem 3.2 states that $$0 < p_c < 1$$, but its proof in fact does much more. Following the computation we just did for Equation (3.8), we get for $$d=2$$ $0.3333 < \frac{1}{2d-1} \leq p_c \leq 0.8639$ and for $$d=3$$: $0.2000 < \frac{1}{2d-1} \leq p_c \leq 0.8639$ This allows one to grasp the improvement brought later by Theorem 3.12.

# Connective constant

Using the two following sequences from the On-Line Encyclopedia of Integer Sequences, one can evaluate the connective constant $$\kappa(d)$$:

• A001411: Number of n-step self-avoiding walks on square lattice.
• A001412: Number of n-step self-avoiding walks on cubic lattice.

By taking the k-th root of the k-th term of A001411, we may give an approximation of $$\kappa(2)$$:

sage: L = [1, 4, 12, 36, 100, 284, 780, 2172, 5916, 16268, 44100, 120292,
324932, 881500, 2374444, 6416596, 17245332, 46466676, 124658732, 335116620,
897697164, 2408806028, 6444560484, 17266613812, 46146397316, 123481354908,
329712786220, 881317491628]
sage: for k in range(1, len(L)): print numerical_approx(L[k]^(1/k))
4.00000000000000
3.46410161513775
3.30192724889463
3.16227766016838
3.09502148400370
3.03400133198980
2.99705187539871
2.96144397263395
2.93714926770637
2.91369345857619
2.89627439045790
2.87949308754677
2.86632078916860
2.85362749495679
2.84328447096562
2.83329615650289
2.82493415671599
2.81684125361654
2.80992368218258
2.80321554383456
2.79738645741910
2.79172363211806
2.78673687369245
2.78188437392354
2.77756387722633
2.77335345579129
2.76956977331575


By taking the k-th root of the k-th term of A001412, we may give an approximation of $$\kappa(3)$$:

sage: L = [1, 6, 30, 150, 726, 3534, 16926, 81390, 387966, 1853886,
8809878, 41934150, 198842742, 943974510, 4468911678, 21175146054,
100121875974, 473730252102, 2237723684094, 10576033219614, 49917327838734,
235710090502158, 1111781983442406, 5245988215191414, 24730180885580790,
116618841700433358, 549493796867100942,2589874864863200574,
12198184788179866902, 57466913094951837030, 270569905525454674614]
sage: for k in range(1, len(L)): print numerical_approx(L[k]^(1/k))
6.00000000000000
5.47722557505166
5.31329284591305
5.19079831727404
5.12452137580198
5.06709510955294
5.02933019629493
4.99573287588832
4.97111339009676
4.94876680377358
4.93129192790635
4.91521453865211
4.90209314463520
4.88990167518413
4.87964724632057
4.87004597517131
4.86178722582108
4.85400655861169
4.84719703702142
4.84074902256992
4.83502763526502
4.82958688248615
4.82470487210973
4.82004549244633
4.81582557693112
4.81178552451599
4.80809774735294
4.80455755518719
4.80130435575213
4.79817388859565


Then, $$\kappa(2)$$ would be something less than 2.769 and $$\kappa(3)$$ would be something less than 4.798.
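The OEIS terms used above can be checked for small n by brute-force enumeration. A sketch (the search is exponential in n, so it is only sensible for very short walks):

```python
def saw_count(n, d=2):
    """Number of n-step self-avoiding walks on Z^d starting at the
    origin, counted by depth-first search (exponential in n)."""
    steps = [tuple(sign if i == j else 0 for j in range(d))
             for i in range(d) for sign in (1, -1)]

    def extend(last, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx in steps:
            nxt = tuple(a + b for a, b in zip(last, dx))
            if nxt not in visited:
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    origin = (0,) * d
    return extend(origin, {origin}, n)

# First terms of A001411 (square lattice) and A001412 (cubic lattice):
[saw_count(n) for n in range(1, 6)]        # [4, 12, 36, 100, 284]
[saw_count(n, d=3) for n in range(1, 4)]   # [6, 30, 150]
```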

# Theorem 3.12

Thus, we may evaluate the lower bound and upper bound given at Theorem 3.12. For dimension $$d=2$$:

sage: k < 2.76956977331575
k < 2.76956977331575
sage: _ / (2.76956977331575  * k)
0.361066909970928 < (1/k)
sage: 1 - 0.361066909970928
0.638933090029072


The critical probability of bond percolation on $$\mathbb{L}^d$$ with $$d=2$$ satisfies $0.3610 < \frac{1}{\kappa(2)} \leq p_c \leq 1 - \frac{1}{\kappa(2)} < 0.6389.$ If we look at the graph of the percolation probability $$\Theta(p)$$ we drew above for $$d=2$$, it seems that the lower bound is not far from $$p_c$$. The lower bound 0.3610 is a small improvement on the simple one obtained from Theorem 3.2 (0.3333).

Similarly, for dimension $$d=3$$:

sage: k < 4.79817388859565
k < 4.79817388859565
sage: _ / (4.79817388859565 * k)
0.208412621805310 < (1/k)


The critical probability of bond percolation on $$\mathbb{L}^d$$ with $$d=3$$ satisfies $0.2084 < \frac{1}{\kappa(3)} \leq p_c \leq 1 - \frac{1}{\kappa(2)} < 0.6389.$ Again, if we look at the graph of $$\Theta(p)$$ we did above for when $$d=3$$, it seems that the lower bound 0.2084 is not far from $$p_c$$. In this case, the lower bound 0.2084 is a rather small improvement to the lower bound from Theorem 3.2 (0.2000). It might be caused by a poor approximation of $$\kappa(3)$$ from the above sequences of only 30 terms from the OEIS.