A good graphic representation of the Sieve of Eratosthenes being used to generate all primes less than 121. Courtesy Wikipedia: "Sieve of Eratosthenes animation". Licensed under CC BY-SA 3.0 via Wikimedia Commons.
by Simon Spicer (noreply@blogger.com) at July 21, 2014 04:00 PM
I’m very conscious about wanting to share as much of what I do as a researcher/instructor in as easy a way as possible. One example of this is the slides/decks that I use for talks I give. 3 or 4 years ago I was doing this with a Dropbox link. More recently this has led to me putting everything on github, but this wasn’t ideal as I’ve started to accumulate a large number of repositories for the various talks I give.
One of the great things about using github though is that for html presentations (reveal.js is the one I’ve used a couple of times), you can use the github deployment branch gh-pages to serve the files directly.
A while back I put a video together showing how to do this:
So there are positives and negatives. After moving to a jekyll framework for my blog I started thinking of a way of getting all the positives without any negatives…
My immediate thought was to just use jekyll, but as far as I can tell I’d have to hack it a bit to get blog posts to be actual .pdf files (for my beamer slides) (please tell me if I’m wrong!). I could of course write a short blog post with a link to the file, but this was one extra step beyond just writing my talk that I didn’t want to have to do. Instead of hacking jekyll I decided to write a very simple Python script. You can see it here but I’ll use this blog post to just walk through how it works.
I have a Talks repository:
~$ cd Talks
Talks$ ls
2014-07-07-Using-a-flipped-classroom-in-a-large-programming-course-for-mathematicians
2014-07-09-Embedding-entrepreneurial-learning-through-the-teaching-of-programming-in-a-large-flipped-classroom
2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions
README.md
favicon.ico
footer.html
head.html
header.html
index.html
main.css
reveal.js
serve.py
What you can see in there are 3 directories (each starting with a date) and various other files. In each of the talk directories I just have normal files for those talks:
Talks$ cd 2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions/
...$ ls
2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions.nav 2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions.snm
2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions.pdf 2014-07-25-Measuring-the-Price-of-Anarchy-in-Critical-Care-Unit-Interactions.tex
I can work on those slides just as I normally would.
Once I’m ready I go back to the root of my Talks directory and run the serve.py script:
Talks$ python serve.py
This script automatically goes through my sub-directories, reading the date from the directory names and identifying .html or .pdf files as talks.
It then creates the index.html file, which is an index of all my talks (sorted by date) with a link to the right file.
To get the site online you simply need to push it to the gh-pages branch of your github repository.
You can see the site here: drvinceknight.github.io/Talks. Clicking on relevant links brings up the live version of my talk, or at least as live as my latest push to the github gh-pages branch.
The python script itself is just:
1 #!/usr/bin/env python
2 """
3 Script to index the talks in this repository and create an index.html file.
4 """
5 import os
6 import glob
7 import re
8
9 root = "./"
10 directories = sorted([name for name in os.listdir(root) if os.path.isdir(os.path.join(root, name))], reverse=True)
11 talk_formats = ['.pdf', '.html']
12
13
14 index = open('index.html', 'w')
15 index.write(open('head.html', 'r').read())
16 index.write(open('header.html', 'r').read())
17
18 index.write("""
19 <body>
20 <div class="page-content">
21 <div class="wrap">
22 <div class="home">
23 <ul class='posts'>""")
24
25 for dir in directories:
26     if dir not in ['.git', 'reveal.js']:
27         talk = [f for f in glob.glob(root + dir + "/" + dir[:10] + '*') if os.path.splitext(f)[1] in talk_formats][0]
28         date = talk[len(root + dir) + 1: len(root + dir) + 11]
29         title, extension = os.path.splitext(talk[len(root + dir) + 11:].replace("-", " "))
30         index.write("""
31         <li>
32             <span class="post-date">%s [%s]</span>
33             <a class="post-link" href="%s">%s</a>
34             <p>
35                 <a href="%s">(Source files)</a>
36             </p>
37         </li>
38         """ % (date, extension[1:], talk, title, 'https://github.com/drvinceknight/Talks/tree/gh-pages/' + dir))
39 index.write("""
40 </ul>
41 </div>
42 </div>
43 </div>
44 """)
45 index.write(open('footer.html', 'r').read())
46 index.write("</body>")
47 index.close()
The main lines that do anything are lines 25-38; everything else just reads in the relevant header and footer files.
So now getting my talks written and online is hardly an effort at all:
# Write awesome talk
Talks$ git commit -am 'Finished talk on proof of finite number of primes' # This I would do anyway
Talks$ python serve.py # This is the 1 extra thing I need to do
Talks$ git push origin # This I would do anyway
There are various things that could be done to improve this (including pushing via serve.py as an option) and I’m not completely convinced I can’t just use jekyll for this, but it was quicker to write that script than to figure it out (or at least that was my conclusion after googling twice).
If anyone has any fixes/improvements (including: “You idiot, just run jekyll with the build-academic-conference-talk-site flag”) that’d be super appreciated, and if you want to see the Talks repository (with python script, css files, header.html etc…) it’s here: github.com/drvinceknight/Talks.
Now to finish writing my talk for ORAHS2014…
Last week I took part in two conferences on the subject of higher education and so here’s a blog post with some thoughts and reflections.
Monday and Tuesday: ‘Workshop on Innovations in University Mathematics Teaching’
The first conference was organised by +Paul Harper, Rob Wilson and myself. The conference website can be found here. The main subject of this was active learning pedagogic techniques, in particular:
The plan for these two days included an almost full day of talks on the Monday and an interactive IBL session on the Tuesday.
Here’s a couple of snippets from each session:
Robert Talbert gave the opening talk describing the flipped learning environment. You can see his slides here.
I was quite nervous about Robert’s talk as he’s an expert in flipping classrooms and I was scheduled to talk after him. He gave a great talk (in fairness every single talk on the day was awesome) and here are a couple of things I noted down:
A flipped classroom does not imply a flipped learning environment!
A traditional classroom encourages the dependency of a student on the instructor.
Flipped learning is not just videos out of class and homework in class.
My favourite:
The most important part of the flipped classroom is not what happens outside of the classroom (videos etc…) but what happens inside the classroom.
I spoke next and you can find my slides here.
I mainly spoke about the programming course that I teach to our first years using a flipped class. I didn’t want to go into any details about what a flipped learning environment is, as I would most certainly not have been able to do it justice after Robert’s talk, so I just gave an exemplar of it in practice. I might blog about the particular approach I used another time.
Toby Bailey then gave an excellent talk about the flipped classroom / peer instruction that he has in place for a large class.
One of the highlights was certainly a video of his class in which we saw students respond to a question via in-class clickers, then break into groups of two to discuss the particular problem, and finally answer the question one more time. Responses were then put up for all to see and it was great to see that students were indeed improving (you could see the distributions of clicker answers improve after the peer instruction).
Here are a couple of other things I noted down during the talk:
It’s not all about the lecturer.
The importance of getting out of the way.
Tell the class why you are doing it.
Stephen Rutherford then spoke about his flipped classroom in biosciences.
This was a great talk and it was very neat to have a non-mathematical point of view. My first highlight from Steve’s talk can be seen in the photo above. I think that that fundamental question (‘why am I better than a book’) could in fact help improve the instruction of many.
A flipped classroom allows some control to be put in the hands of the students.
The reason students are at university is to get an education and not necessarily a degree.
We then moved on to a relaxed panel discussion about the flipped classroom; a big highlight of that for me was the importance of involving students in the pedagogic reasoning behind whatever approach is used in a class.
The final ‘talk’ of the day was by Chris Sangwin who talked about his Moore Method class.
This was a fascinating talk as Chris clearly described the success he has had with implementing a Moore Method class.
In particular he highlighted the importance of the role of the instructor in this framework, where students are given a set of problems to work through and present to their peers (there is no lecturing in a Moore method class).
Some highlights:
In 2007, after his class finished students found the book from which his problems originated and continued to work through them on their own.
In 2008, students set up a reading group and started to read complex mathematical topics.
The rest of the conference was a natural continuation from Chris’s talk as Dana Ernst and TJ Hitchman spoke about Inquiry Based Learning (a pedagogic term that encompasses the Moore method - I’m sure someone can correct me if I got that subtlety wrong).
This was a really great interactive session that ran over to Tuesday. There is far too much that happened in this session and it was hard to take notes as we were very much involved, but here are some of the things that stuck with me.
Another important point was the criteria that defines an IBL approach:
Students are in charge of not only generating the content but also critiquing the content.
You can find all of Dana and TJ’s content on their github repository.
After this session I enjoyed a good chat with TJ who helped me figure out how to make my R/SAS course better. After that my project student who will be working with me to evaluate my flipped classroom had a great talk with Robert, TJ and Dana who gave some really helpful advice. One of the highlights that came out of this was Robert putting very simply what I believe defines an effective pedagogic approach. Hopefully Robert will either correct me or forgive me for paraphrasing (EDIT: He has corrected me in the comments):
Whatever the approach: flipped classroom, IBL, interpretive dance, as long as the system allows you to empower your students and monitor how they are learning it is worth doing.
I’m probably forgetting quite a few details about the workshop (including the 6+ course conference dinner which was pretty awesome). Now to describe the next conference, which was the Cardiff University Annual Learning and Teaching Conference.
Wednesday: Cardiff University Annual Learning and Teaching Conference
This was my first time attending this conference and I was lucky enough to have my abstract accepted so I was able to give a talk.
You can find my slides here.
In all honesty I was kind of tired, so I didn’t take notes as detailed as I would have liked and/or as many photos, but here are some highlights:
I enjoyed Stephen Rutherford discussing the plans of the Biosciences school to bring in peer assessment:
Assessment for learning and not of learning
I liked Anne Cunningham reminding everyone that students obtain content from a variety of sources when talking about their use of scoop.it:
The curators are not just the staff. The prime curators are the students.
Rob Wilson and Nathan Roberts gave an overview of the tutoring system that we as a university will be heading towards.
A great three days
This brings my attendance at education conferences to a total of 3 and I must say that I can’t wait to go to the next one (incidentally my abstract for the CETL-MSOR conference got accepted today). I really enjoy the vibe at education conferences as there is a slight sense of urgency in case anyone says anything that someone might be able to use/adapt/steal so as to improve their teaching and have a real impact on our students’ lives.
Two final links:
If anyone who attended the conferences has anything to add it would be great to hear from you and if anyone couldn’t make it but would like to know more: please get in touch :)
by Simon Spicer (noreply@blogger.com) at July 14, 2014 12:37 PM
The plots of the real solutions to four elliptic curves, and their associated real periods.
by Simon Spicer (noreply@blogger.com) at July 08, 2014 06:04 PM
So the other day I got the following notification on my phone:
This both terrified me and made me smile. It’s nice to think that a bunch of people have (hopefully) been helped by what I put together, but it’s also slightly worrying as I think I would be able to put together a much better video if I did it now.
Here it is:
The basic code I use in the video is:
import csv
out = open('data.csv', 'r') # Open a file in read mode
data = csv.reader(out) # Initiate a csv reader object which will parse the data
data = [[row[0], eval(row[1]), eval(row[2])] for row in data] # Read in the data and convert certain entries of each row
out.close() # Close the file
new_data = [[row[0], row[1] + row[2]] for row in data] # Create some new data
out = open('new_data.csv', 'w') # Open a file in write mode
output = csv.writer(out) # Initiate a csv writer object which will write the data in the correct format (csv)
for row in new_data: # Loop through the data
    output.writerow(row) # Write the row to file
out.close() # Close the file
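As an aside, `eval` will happily execute whatever happens to be in the file. A safer variant of the same read step (my own tweak, not from the video) uses `ast.literal_eval`, which only accepts Python literals such as numbers and lists:

```python
import ast
import csv

# Make a small file so the example is self-contained (the video reads an
# existing data.csv).
with open('data.csv', 'w') as f:
    f.write("a,1,2\nb,3,4\n")

# Same idea as the snippet above, but ast.literal_eval only evaluates
# Python literals rather than arbitrary code, and the with-statement
# closes the file automatically.
with open('data.csv', 'r') as f:
    data = [[row[0], ast.literal_eval(row[1]), ast.literal_eval(row[2])]
            for row in csv.reader(f)]

print(data)  # -> [['a', 1, 2], ['b', 3, 4]]
```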
There are a couple of other videos that are getting near that landmark, this is possibly my favourite of them:
The above video shows how to simulate a basic queue in Python. My Summer student James Campbell and I are working on something related at the moment.
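The video builds the simulation up step by step; as a rough sketch of the underlying idea (this is my own minimal version, not the code from the video), the waiting times of an M/M/1 queue can be simulated with the Lindley recursion:

```python
import random

def simulate_mm1(lmbda, mu, n, seed=0):
    """Mean waiting time in an M/M/1 queue via the Lindley recursion:
    wait_next = max(0, wait + service - interarrival)."""
    random.seed(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        service = random.expovariate(mu)          # service time of this customer
        interarrival = random.expovariate(lmbda)  # time until the next arrival
        wait = max(0.0, wait + service - interarrival)
        total += wait
    return total / n

# Theory for lambda=1, mu=2: mean wait = lambda / (mu * (mu - lambda)) = 0.5
print(simulate_mm1(1.0, 2.0, 100000))
```

With 100,000 simulated customers the estimate lands close to the theoretical value of 0.5.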
The main tools I use on a day to day basis are Python/Sage and LaTeX. Since learning to love git and even more so github my usual workflow when starting a new project is something like this:
$ git init
$ vim proof_that_there_are_a_finite_number_of_primes.py
...
$ vim application_for_fields_medal.tex
$ latexmk --xelatex application_for_fields_medal.tex
$ git status
Once I run the git status, that’s when I realise that I’ve now got a bunch of files I don’t want to commit: .aux, .pyc, etc…
I then normally throw whatever those files are into a .gitignore file.
Despite teaching my students to never do anything twice and to always script whatever can be scripted, I have never actually put a .gitignore file together that has everything in it.
So here’s my first attempt.
This is basically a modification of the Python github .gitignore and a slight tweak of this gist (there doesn’t seem to be a bespoke LaTeX .gitignore as an option when you create a repository).
# LaTeX Files
*.aux
*.glo
*.idx
*.log
*.toc
*.ist
*.acn
*.acr
*.alg
*.bbl
*.blg
*.dvi
*.glg
*.gls
*.ilg
*.ind
*.lof
*.lot
*.maf
*.mtc
*.mtc1
*.out
*.synctex.gz
*.fdb_latexmk
*.fls
*.nav
*.snm
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
bin/
build/
develop-eggs/
dist/
eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Rope
.ropeproject
# Django stuff:
*.log
*.pot
# Sphinx documentation
docs/_build/
You can find this at this gist.
If anyone can suggest anything I’m missing it’d be great to hear it :)
by Vincent Knight (noreply@blogger.com) at June 27, 2014 12:10 PM
Here is my first post using jekyll, I’ll see how this goes and potentially migrate my blog completely to here.
This past year I’ve had a very minor role helping to organise the EURO Summer Institute for Healthcare. This took part from June 11 to 20 and it was really cool with 20 (ish) young academics (from first year PhD student to early years of Post Doc) coming together to present a paper they’re working on and get bespoke feedback from peers and an expert in the field.
The academic activities are really valuable but arguably of more value is the fact that everyone comes together and has a good time. One such ‘good time’ was a treasure hunt that, because of some people dropping out, I was lucky enough to take part in. This was really great fun and in this post I’ll describe the problems as well as the Sage code my team used to almost win!
Here’s a photo that Paul Harper took of Pieter, Omar, Jenny and I huddled around Sage:
The treasure hunt involved us running around the small village to find an envelope with a particular mathematical problem. Once we’d solved that problem we would be given an indication as to where our next problem was.
Here are the various challenges (again: thanks to Paul for taking the photos!):
A clinical pathway problem (we did this one by hand):
A queueing problem:
To solve the above we wrote down the Markov chain (that’s actually the picture that Pieter is working on in the first photo of this blog post). Here’s the transition matrix corresponding to the Markov Chain:
\[\begin{pmatrix}
-\frac{4}{45} & \frac{4}{45} & 0 & 0 & 0 \\
\frac{1}{20} & -\frac{5}{36} & \frac{1}{45} & \frac{1}{15} & 0 \\
0 & \frac{1}{10} & -\frac{1}{6} & 0 & \frac{1}{15} \\
0 & \frac{1}{20} & 0 & -\frac{13}{180} & \frac{1}{45} \\
0 & 0 & 0 & \frac{1}{10} & -\frac{1}{10}
\end{pmatrix}\]
At the very beginning of the treasure hunt the organisers encouraged everyone to go and get their laptops. We originally thought that we’d manage without but at this point I ran to my room to go get my laptop :)
Here’s how we obtained the steady state probability \(\pi\):
Q = matrix([[-4/45,4/45,0,0,0,1],
[1/20,-(1/20+3/45+1/45),1/45,3/45,0,1],
[0,2/20,-(2/20+3/45),0,3/45,1],
[0,1/20,0,-1/20-1/45,1/45,1],
[0,0,0,2/20,-2/20,1]])
pi = Q.solve_left(vector([0,0,0,0,0,1]))
The above solves the matrix equation \(\pi Q=0\) with an extra column in \(Q\) to ensure that the elements of \(\pi\) sum to 1.
Finally, the particular question asked for the following weighted sum (corresponding to the mean utilisation):
(2*pi[4]+2*pi[2]+1*pi[3]+1*pi[1])/2
This gave a value of about \(0.499\).
I’m obviously missing out a couple of details (the actual description of the state space) but I’ll leave that as an exercise.
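As an after-the-fact sanity check (numpy rather than Sage, and the state weights are my reading of the utilisation calculation, so treat this as a sketch), the same steady state can be recovered by appending the column of ones and solving the resulting least-squares system:

```python
import numpy as np

# The transition rate matrix written down above (rates as in the Sage code)
Q = np.array([
    [-4/45,              4/45,            0,            0,     0],
    [ 1/20, -(1/20+3/45+1/45),         1/45,         3/45,     0],
    [    0,              2/20, -(2/20+3/45),            0,  3/45],
    [    0,              1/20,            0, -(1/20+1/45),  1/45],
    [    0,                 0,            0,         2/20, -2/20],
])

# Append a column of ones so that pi Q = 0 is solved together with sum(pi) = 1
A = np.hstack([Q, np.ones((5, 1))])
b = np.array([0, 0, 0, 0, 0, 1])
pi, *_ = np.linalg.lstsq(A.T, b, rcond=None)

# States with both beds busy counted twice, one bed busy counted once
utilisation = (2 * pi[4] + 2 * pi[2] + pi[3] + pi[1]) / 2
print(round(utilisation, 3))  # -> 0.499
```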
A Chinese postman problem:
I’m not entirely sure what we did here as Pieter kind of ran with this one but at some point I was asked to calculate a matching on our graph. Here’s the Sage code:
sage: K = 500 # The inbuilt matching algorithm aims to maximise; we wanted to minimise so I threw a large constant in...
sage: G = Graph( { 1: {2: -23+K, 5:-20+K, 8:-19+K},
....: 2: {1: -23+K, 3:-8+K, 5:-18+K},
....: 3: {2: -8+K, 4:-5+K, 5:-16+K},
....: 4: {3: -5+K, 5:-14+K, 7:-7+K, 10:-21+K},
....: 5: {1: -20+K, 2:-18+K, 3:-16+K, 4:-14+K, 6:-1+K},
....: 6: {5: -1+K, 7:-8+K, 8:-20+K, 9:-12+K, 10:-17+K},
....: 7: {4: -7+K, 6:-8+K},
....: 8: {1: -19+K, 6:-20+K, 9:-15+K, 11:-20+K},
....: 9: {6:-12+K, 8:-15+K, 10:-14+K, 11:-12+K},
....: 10: {4:-21+K, 6:-17+K, 9:-14+K, 11:-18+K},
....: 11: {8:-20+K, 9:-12+K, 10:-18+K},
....: }, sparse = True)
sage: G.matching()
[(1, 8, 481), (2, 3, 492), (4, 7, 493), (5, 6, 499), (9, 11, 488)]
I’m not entirely sure that’s the right Graph I was supposed to use but it’s what I have left over in my Sage notebook…
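Since the graph is small (11 nodes, 20 edges), the minimum-weight maximum matching can also be brute-forced, which at least confirms the K-trick answer. This is my own check, working with the raw (positive) distances rather than the shifted weights:

```python
from itertools import combinations

# Distances between the nodes, read off the Sage graph above
# (the Sage code stores K - distance; here we keep the raw distances)
edges = {(1, 2): 23, (1, 5): 20, (1, 8): 19, (2, 3): 8, (2, 5): 18,
         (3, 4): 5, (3, 5): 16, (4, 5): 14, (4, 7): 7, (4, 10): 21,
         (5, 6): 1, (6, 7): 8, (6, 8): 20, (6, 9): 12, (6, 10): 17,
         (8, 9): 15, (8, 11): 20, (9, 10): 14, (9, 11): 12, (10, 11): 18}

best = None
for subset in combinations(edges, 5):  # 11 nodes -> a maximum matching has 5 edges
    vertices = [v for e in subset for v in e]
    if len(set(vertices)) == 10:  # edges are vertex-disjoint -> a valid matching
        weight = sum(edges[e] for e in subset)
        if best is None or weight < best[0]:
            best = (weight, subset)

print(best)  # -> (47, ((1, 8), (2, 3), (4, 7), (5, 6), (9, 11)))
```

The edges agree with the Sage output: each weight there is 500 minus the distance (e.g. (1, 8, 481) is the distance-19 edge).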
A packing problem:
I had remembered seeing that packing problems were implemented in Sage so as we were running back from collecting the clue I yelled: this will take 3 minutes!
Sadly, this wasn’t the case as the only implementation involves bins of the same size (which isn’t the case here). As I read around the docs the rest of my team was able to solve this heuristically which left us with the last challenge and at this point nicely in the lead!
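For what it’s worth, a first-fit-decreasing style heuristic copes fine with bins of different capacities; the item sizes and bin capacities below are made up for illustration, since the actual challenge data isn’t in this post:

```python
def pack(items, capacities):
    """Assign items (largest first) to the first bin with room left.

    A first-fit-decreasing heuristic: not guaranteed optimal, but it
    handles bins of different sizes.
    """
    remaining = list(capacities)
    assignment = [[] for _ in capacities]
    for size in sorted(items, reverse=True):
        for i, space in enumerate(remaining):
            if size <= space:
                remaining[i] -= size
                assignment[i].append(size)
                break
        else:
            raise ValueError("no bin can fit an item of size %s" % size)
    return assignment

print(pack([7, 5, 4, 3, 2, 2], [10, 8, 6]))  # -> [[7, 3], [5, 2], [4, 2]]
```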
The problem of doom that made everyone cry:
After having pulled in the data here is about as far as Sage got us:
p = list_plot(healthy, color='red', size=30)
p += list_plot(pathological, color='green',size=30)
p
This was a tricky problem.
We had no wifi in our hotel so downloading a relevant R package was out of the question.
We eventually formulated a non-linear integer program but sadly our solver didn’t seem able to handle it in time. With two minutes to go (the deadline was dinner) one of the teams who had been doing things very steadily ran over claiming that they might have a solution. Everyone went quiet and walked over as their solution was checked and verified. It was a really nice moment as everyone clapped and cheered. Here’s a picture of a bunch of us crowded around the solution, understanding how they took a stab at fixing some of the variables so that the solver could get the solution.
This was a phenomenal piece of fun. You can find another description with a bunch of funny photos of us all struggling here.
If anyone has some suggestions to how we could have solved any problems any faster I’d be delighted to hear them :)
by Simon Spicer (noreply@blogger.com) at June 26, 2014 04:23 PM
The cloud.sagemath.com homepage.
The project root view, before any files have been created.
~$ git clone git://github.com/sagemath/sage.git
~$ cd sage
~/sage$ make
GitHub repo creation page.
~/sage$ git remote add trac git@trac.sagemath.org:sage.git -t master
~/sage$ git remote -v
origin git://github.com/sagemath/sage.git (fetch)
origin git://github.com/sagemath/sage.git (push)
trac git@trac.sagemath.org:sage.git (fetch)
trac git@trac.sagemath.org:sage.git (push)
The -b parameter creates the branch, while checkout switches to that branch. Typing git branch then shows you what branches currently exist locally, as well as the one you're currently on.
~/sage$ git checkout -b demo
Switched to a new branch 'demo'
~/sage$ git branch
* demo
master
And finally, we can push to our repo to sync the local copy therewith. For this you will need your GitHub password - so not just any old Harry can push to your repo.
~/sage$ git remote add demo https://github.com/haikona/demo.git
~/sage$ git remote -v
demo https://github.com/haikona/demo.git (fetch)
demo https://github.com/haikona/demo.git (push)
origin git://github.com/sagemath/sage.git (fetch)
origin git://github.com/sagemath/sage.git (push)
trac git@trac.sagemath.org:sage.git (fetch)
trac git@trac.sagemath.org:sage.git (push)
by Simon Spicer (noreply@blogger.com) at June 10, 2014 03:21 PM
The second week of GSoC involved a lot of staring at JSON replies, discarded code and unnecessary classes, but I am reasonably pleased with the final product.
The main agenda during the week was to redo the JSON parsing code (which currently uses the default Android implementation) with a serializer/deserializer, in this case Gson. Hence, the main requirements were:
Why use Gson?
Consider a JSON reply from the server, in this case the status reply from the server.
{
  "content": {"execution_state": "busy"},
  "msg_id": "42c63936-be03-47a3-9892-b48af8246a33",
  "parent_header": {
    "msg_id": "92d67417-ded1-4460-a852-9903d392c50d",
    "session": "ae7cc4cb-8139-455a-b041-b4ffa44a8882",
    "username": "",
    "msg_type": "execute_request"
  },
  "header": {
    "msg_id": "42c63936-be03-47a3-9892-b48af8246a33",
    "username": "kernel",
    "date": "2014-05-31T08:05:17.951510",
    "session": "b86fbd1b-2a1c-456b-b745-504ad2f79f2f",
    "msg_type": "status"
  },
  "metadata": {},
  "msg_type": "status"
}
Now normal Java code for obtaining, for example, the msg_id in the “header” field would look like:
try {
    JSONObject response = new JSONObject(str); // str contains the response from the server in string format
    String msg_id = response.getJSONObject("header").getString("msg_id");
} catch (JSONException e) {
    e.printStackTrace();
}
Which works passably well in most cases, but there are a few caveats.
Now, using GSON, we could do the following:
StatusReply.java
class StatusReply {
    private String msg_id;
    private String msg_type;
    private Header header;
    private Header parent_header;
    private MetaData metadata;
    private StatusContent content;
    // +Getters and Setters

    public static class Header {
        private String msg_id;
        private String session;
        private String username;
        private String msg_type;
        // +Getters and Setters
    }

    public static class StatusContent {
        private String execution_state;
        // +Getters and Setters
    }

    public static class MetaData {
        // Empty for now
    }
}
And then we simply call:
StatusReply reply = new Gson().fromJson(str, StatusReply.class);
String msg_id = reply.getHeader().getMessageID();
And that’s basically it.
That one line converts the entire JSON string into a POJO (Plain Old Java Object). As we can see it is far more concise, useful and pleasing to the eye. No more raw Strings floating around, and no more unnecessary exception handling.
Now the Sage server requests & replies are fairly consistent in some fields but wildly different in others. This presented a design challenge wherein I had to find common elements amongst the replies to group into a base class, and extend the other, individual replies from this base class. Due to the inherent differences in the various replies, I feel I was only partially successful in doing so, but with some tweaking, I managed to bring down the number of model classes from an initial 30ish(!) to a much more manageable 13.
The only major trouble I had was with the reply to an @interact input:
{
  "content": {
    "data": {
      "application/sage-interact": {
        "new_interact_id": "17bc0888-c7ed-4aad-877d-b230aca49943",
        "controls": {
          "n": {
            "update": true,
            "raw": true,
            "control_type": "slider",
            "display_value": true,
            "values": ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"],
            "default": 0,
            "range": [0, 9],
            "subtype": "discrete",
            "label": "n",
            "step": 1
          }
        },
        "readonly": false,
        "locations": null,
        "layout": [[["n", 1]], [["_output", 1]]]
      },
      "text/plain": "Sage Interact"
    },
    "source": "sagecell"
  },
  "msg_id": "ad34a0c6-8109-4314-85e7-698ba6704bbf",
  "parent_header": {
    "msg_id": "92d67417-ded1-4460-a852-9903d392c50d",
    "session": "ae7cc4cb-8139-455a-b041-b4ffa44a8882",
    "username": "",
    "msg_type": "execute_request"
  },
  "header": {
    "msg_id": "ad34a0c6-8109-4314-85e7-698ba6704bbf",
    "username": "kernel",
    "date": "2014-05-31T08:05:17.955924",
    "session": "b86fbd1b-2a1c-456b-b745-504ad2f79f2f",
    "msg_type": "display_data"
  },
  "metadata": {},
  "msg_type": "display_data"
}
We can see that the msg_type, msg_id, header, parent_header and content fields all remain unchanged, but the remaining ones are vastly different. However, deserializing these with Gson is still relatively simple, except for one part, viz. the “n” object. Now, this key is dynamic, since it depends on the variables the user has used in their input, and so it’s not really possible to make a class for this. However, Gson provides a solution to this problem in the form of a custom deserializer. A few (~150) lines of code later, and this was done.
What seemed easy in writing was not so in actual implementation. There are a number of ways I could have gone about this (and I tried most of them) and this was the only way which seemed natural and did not rely on hacks to obtain the reply. This took most of 4 days to do, and I finalized the classes & methods in the last two days.
I’ve never needed to write a custom deserializer before, so that was an exciting and informative challenge.
Advantages
Why did I just add a bunch of classes which currently serve little purpose other than to act as templates for the JSON replies?
Future Agenda
Next week’s goals include:
I hope I can get this done in a week, but since I am reasonably ahead at the moment w.r.t. my proposal, I do have some extra time available.
I’ll have a chat with Volker about the library and hopefully together we can come to a decision on what to use.
That’s all from me, thanks for reading!
It’s the first week of GSoC, and so I decided that maybe I should stop procrastinating and update this blog.
Here we go then!
Agendas during this week include:
This week I have continued working on the spectrum of the Ricci operator for a monochromatic 4-valent node dual to an equilateral tetrahedron. This blog post reports on work in progress.
In my last post I indicated how far I had got in porting the code for the monochromatic 4-valent node from Mathematica to sagemath. This is essentially complete now and I have just got to output a graph of eigenvalues of curvature versus spin.
Sagemath code for the spectrum of the Ricci Operator for a monochromatic 4-valent node dual to an equilateral tetrahedron
The curvature is written as a combination of the length of a hinge and the deficit angle around it.
As yet the code is unoptimised. Below is a sample of the output so far:
So now that the eigenvalues have been found I can start to make use of them in calculations of the Hamiltonian Constraint Operator.
Update:
# Set up parameters:
K = 200 # Set the upper bound
c = [80,160,60,2000,2500,3500,3000,6000,25000] # Set the cost vector
h = [1,1,1,5,2,5,4,14,20] # Set the 'housing space' constraints vector
n = len(c) # Get the number of warriors
C = lambda x: sum([x[i]*c[i] for i in range(n)]) # Define the cost function
H = lambda x: sum([x[i]*h[i] for i in range(n)]) # Define the housing capacity function
# Create and solve the optimisation problem
p = MixedIntegerLinearProgram(maximization=False) # Create an optimisation problem with minimization as the goal
x = p.new_variable(integer=True) # Create a variable with the requirement that it must be integer
p.add_constraint(H(x)==K) # Set the housing constraint
p.set_objective(C(x)) # Set the objective function
p.show() # Show the optimisation problem
p.solve() # Solve the optimisation problem
# Create and solve the optimisation problem with extra constraints
p = MixedIntegerLinearProgram(maximization=False) # Create an optimisation problem with minimization as the goal
x = p.new_variable(integer=True) # Create a variable with the requirement that it must be integer
p.add_constraint(H(x)==K)
p.add_constraint(x[2]<=10) # Don't want more than 10 goblins
p.add_constraint(x[7]>=1) # Want 1 healer at least
p.set_objective(C(x))
p.show()
p.solve()
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))
damagepersecond = [18, 16, 19, 24, 32, 72, 125, 0, 140]
C = lambda x: sum([x[i]*damagepersecond[i] for i in range(n)])
p = MixedIntegerLinearProgram() # Create an optimisation problem with maximization (the default) as the goal
x = p.new_variable(integer=True) # Create a variable with the requirement that it must be integer
p.add_constraint(H(x)<=K)
p.add_constraint(x[2]<=10) # Don't want more than 10 goblins
p.add_constraint(x[7]>=1) # Want 1 healer at least
p.set_objective(C(x))
p.show()
p.solve()
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))
The result of the above is \(x=(0,0,2,0,0,0,46,1,0)\), i.e. take 2 goblins, 46 wizards and 1 healer.
p = MixedIntegerLinearProgram() # Create an optimisation problem (maximization, the default, is now the goal)
x = p.new_variable(integer=True) # Create a variable with the requirement that it must be integer
p.add_constraint(H(x) <= K)
p.add_constraint(x[2] <= 10) # Don't want more than 10 goblins
p.add_constraint(x[7] >= 1) # Want 1 healer at least
p.add_constraint(x[1] * h[1] + x[6] * h[6]>=20) # Want at least 20 capacity worth of distance attackers
p.add_constraint(x[5] + x[8]>=1) # Want at least 1 of air attackers
p.add_constraint(5 * x[4] >= x[3]) # Want at least 1 wall breaker per 5 giants
p.add_constraint(20 * x[4] >= x[0]) # Want at least 1 wall breaker per 20 barbarians
p.set_objective(C(x))
p.show()
p.solve()
print 'Has solution:'
for i, v in p.get_values(x).iteritems():
    print 'x_%s = %s' % (i, int(round(v)))
print 'Housing spaces used: %s' % H(p.get_values(x))
by Vincent Knight (noreply@blogger.com) at May 11, 2014 10:06 AM
This week, thanks to some colleagues, I have been working on the spectrum of the Ricci operator for a monochromatic 4-valent node dual to an equilateral tetrahedron. This blog post reports on work in progress.
I have been reviewing the papers seen in the posts:
Basically I am porting Mathematica code over to sagemath so that I can then use it in the calculation of the matrix elements of the LQG Hamiltonian Constraint operator discussed in the posts:
So far I have written code for a number of operators, but I still have the same number still to do. After this I’ll need to join them together.
The matrix defining the operator Q {e1, e2, e3} used in the definition of the volume operator
H, the matrix defining the operator δij.Yi.Yj used to define the length operator, expressed in the intertwiner basis
And B, the intertwiner basis
When complete I’ll be able to produce graphs such as those below, which plot the spectrum of R as a function of the spin. This can then be used in the numerical investigation of the LQG Hamiltonian Constraint Operator.
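For what it's worth, the final step of plotting a spectrum is generic once the operator has been assembled as a matrix in some basis. Here is a sketch with numpy, using a stand-in 2x2 symmetric matrix rather than the actual Ricci operator (which is much larger and built from the Q, H and B ingredients above):

```python
# Illustrative sketch only: the spectrum of a Hermitian operator matrix is
# just its set of eigenvalues.  The matrix R below is a made-up stand-in,
# not the actual Ricci operator.
import numpy as np

def spectrum(op):
    # eigvalsh assumes a Hermitian (here: real symmetric) matrix
    return np.linalg.eigvalsh(op)

R = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(spectrum(R))  # [1. 3.]
```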
I've just pushed out a major update to how synchronization works in https://cloud.sagemath.com.
This change is pretty significant behind the scenes, but the only difference you should notice is that everything should be better. In particular:
Here's a short technical description of what changed. The basic architecture of SageMathCloud is that there are many web browsers connected to many hubs, which are in turn connected to your project (and to many other projects too):
   [web browser] <- websocket -\
   [web browser] <-------------> [hub] <------ tcp -------\
                                                           [project]
   [web browser] <-------------> [hub] <------------------/
Until today, the differential synchronization implementation involved having a copy of the document you're editing on:
In particular, there were three slightly different implementations of differential synchronization running all over the place. The underlying core code is the same for all three, but the way it is used in each case is different, due to different constraints. The implementations:
Because we're using Node.js, all three implementations are written in the same language (CoffeeScript) and run the same underlying core code (which I BSD-licensed at https://github.com/sagemath/cloud/blob/master/diffsync.coffee). The project implementation was easiest to write, since it's very simple and straightforward and has minimal constraints. The browser implementation is difficult mainly because the Internet comes and goes (as laptops suspend/resume), and it involves patching and diff'ing a CodeMirror editor instance; CodeMirror is difficult because it views the document as a sequence of lines instead of a single long string, and we want things to work even for documents with hundreds of thousands of lines, so converting back and forth to a string is not an option! Implementing the hub part of synchronization is the hardest, for various reasons -- and debugging it is particularly hard. Moreover, computing diffs can be computationally expensive if the document is large, so doing anything involving differential sync on the hub can result in nontrivial CPU usage, hence slower handling of other user messages (Node.js is single threaded). The hub part of the above was so hard to get right that it had some nasty locking code, which shouldn't be needed, and just looked like a mess.
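As an aside, the core shadow/diff/patch cycle of differential synchronization can be sketched in a few lines of Python. This is a toy built on difflib, not the actual diffsync.coffee, and it assumes no concurrent edits, which sidesteps the hard fuzzy-patching part:

```python
import difflib

# Toy version of one differential-sync round.  Each side keeps a "shadow"
# copy of the last state the other side has seen; the sender diffs its
# shadow against the live buffer and ships only the edits.

def make_edits(shadow, live):
    # Keep only the changed spans; (i1, i2) index into the shadow.
    sm = difflib.SequenceMatcher(None, shadow, live)
    return [(i1, i2, live[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != 'equal']

def apply_edits(text, edits):
    # Replay edits right-to-left so earlier offsets stay valid.
    for i1, i2, repl in sorted(edits, reverse=True):
        text = text[:i1] + repl + text[i2:]
    return text

shadow = "print 'hello'"
live = "print 'hello world'"
edits = make_edits(shadow, live)           # this is what goes over the wire
assert apply_edits(shadow, edits) == live  # receiver reconstructs the text
```

A real implementation (like the one linked above) patches fuzzily, so the edits still apply sensibly when the receiver's text has drifted from the sender's shadow.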
A lot of issues people ran into with sync involved two browsers connected to different hubs, who then connected to the same document in a project. The two hubs' crappy synchronization would appear to work right in this part of the picture "[web browser] <------------> [hub]", but have problems with this part "[hub]<-------------->[project]", which would lead to pain later on. In many cases, the only fix was to restart the hub (to kill its sync state) or for the user to switch hubs (by clearing their browser cookies).
Change: I completely eliminated the hub from the synchronization picture. Now the only thing the hub does related to sync is forward messages back and forth between the web browser and the local hub. Implementing this was harder than one might think, because the project considered each client to be a single TCP connection, but now many clients can connect to a project via the same TCP connection, etc.
With this fix, if there are any bugs left with synchronization, they should be much easier to debug. The backend scalability and robustness of sync have been my top priorities for quite a while now, so I'm happy to get this stuff cleaned up, and move onto the next part of the SMC project, which is better collaboration and course support.
by William Stein (noreply@blogger.com) at May 06, 2014 10:38 AM
c, c++, cql, cpp, cc, conf, csharp, c#, coffee, css, diff, dtd, e, ecl, f, f90, f95, h, hs, lhs, html, java, jl, js, lua, m, md, mysql, patch, gp, go, pari, php, py, pyx, pl, r, rst, rb, ru, sage, sagews, scala, sh, spyx, sql, txt, tex, toml, bib, bbl, xml, yaml.
%default_mode
to make that mode the default. Also, graphics actually work in %r mode automatically, exactly as in normal R (no mucking with devices or PNGs).
cd ~/.snapshots/master
in the terminal.
basemap, biopython, biopython, bitarray, brian, cbc, chomp, clawpack, cluster_seed, coxeter3, cryptominisat, cunningham_tables, database_cremona_ellcurve, database_gap, database_jones_numfield, database_kohel, database_odlyzko_zeta, database_pari, database_symbolic_data, dot2tex, fabric, gap_packages, gnuplotpy, greenlet, guppy, h5py, httplib2, kash3, lie, lrs, lxml, mahotas, mercurial, mpld3, munkres, mysql-python, nauty, netcdf4, neuron, normaliz, nose, nose, numexpr, nzmath, oct2py, p_group_cohomology, pandas, paramiko, patsy, patsy, phc, plotly, psutil, psycopg2, pybtex, pycryptoplus, pyface, pymongo, pyproj, pyx, pyzmq, qhull, quantlib, redis, requests, rpy2, scikit_learn, scikits-image, scimath, shapely, simpy, snappy, statsmodels, stein-watkins-ecdb, tables, theano, topcom, tornado, traits, xlrd, xlwt, zeromq
KernSmooth, Matrix, Rcpp, cairo, car, circular, cluster, codetools, e1071, fields, ggplot2, glmnet, lattice, mgcv, mvtnorm, plyr, reshape2, rpart, stringr, survival, zoo
vim git wget iperf dpkg-dev make m4 g++ gfortran liblzo2-dev libssl-dev libreadline-dev libsqlite3-dev libncurses5-dev git zlib1g-dev openjdk-7-jdk libbz2-dev libfuse-dev pkg-config libattr1-dev libacl1-dev par2 ntp pandoc ssh python-lxml calibre ipython python-pyxattr python-pylibacl software-properties-common libevent-dev xfsprogs lsof tk-dev dstat emacs vim texlive texlive-* gv imagemagick octave mercurial flex bison unzip libzmq-dev uuid-dev scilab axiom yacas octave-symbolic quota quotatool dot2tex python-numpy python-scipy python-pandas python-tables libglpk-dev python-h5py zsh python3 python3-zmq python3-setuptools cython htop ccache python-virtualenv clang libgeos-dev libgeos++-dev sloccount racket libxml2-dev libxslt-dev irssi libevent-dev tmux sysstat sbcl gawk noweb libgmp3-dev ghc ghc-doc ghc-haddock ghc-mod ghc-prof haskell-mode haskell-doc subversion cvs bzr rcs subversion-tools git-svn markdown lua5.2 lua5.2-* encfs auctex vim-latexsuite yatex spell cmake libpango1.0-dev xorg-dev gdb valgrind doxygen haskell-platform haskell-platform-doc haskell-platform-prof mono-devel mono-tools-devel ocaml ocaml-doc tuareg-mode ocaml-mode libgdbm-dev mlton sshfs sparkleshare fig2ps epstool libav-tools python-software-properties software-properties-common h5utils libnetcdf-dev netcdf-doc netcdf-bin tig libtool iotop asciidoc autoconf bsdtar attr libicu-dev iceweasel xvfb tree bindfs liblz4-tool tinc python-scikits-learn python-scikits.statsmodels python-skimage python-skimage-doc python-skimage-lib python-sklearn python-sklearn-doc python-sklearn-lib python-fuse cgroup-lite cgmanager-utils cgroup-bin libpam-cgroup cgmanager cgmanager-utils cgroup-lite cgroup-bin r-recommended libquantlib0 libquantlib0-dev quantlib-examples quantlib-python quantlib-refman-html quantlib-ruby r-cran-rquantlib libf2c2-dev libpng++-dev libcairomm-1.0-dev r-cran-cairodevice x11-apps mesa-utils libpangox-1.0-dev
I've also put extra effort (beyond just apt-get) into installing the following: polymake, dropbox, aldor/"AXIOM", Macaulay2, Julia, 4ti2
by William Stein (noreply@blogger.com) at May 01, 2014 02:27 PM
Everything below is subject to change.
(by April 27) Major upgrades -- update everything to Ubuntu 14.04 and Sage-6.2. Also upgrade all packages in SMC, including Haproxy, nginx, stunnel, etc.
(by May 4) Streamline doc sync: top priority right now is to clean up sync, and eliminate bugs that show up when network is bad, many users, etc.
(by June 30) Fix all major issues (none major) that are listed on the github page: https://github.com/sagemath/cloud/issues
We plan four distinct products of the SMC project: increased quotas, enhanced university course support, license and support to run a private SMC cloud, supported open source BSD-licensed single-user version of SMC (and offline mode).
PRODUCT: University course support (launch by Aug 2014 in time for Fall 2014 semester)
PRODUCT: License to run a private SMC cloud in a research lab, company, supercomputer, etc. (launch a BETA version by July 2014, with caveats about bugs).
PRODUCT: Free BSD-licensed single-user account version of SMC (launch by December 2014)
local_hub, CoffeeScript client, terminal.
by William Stein (noreply@blogger.com) at April 25, 2014 12:08 PM
I was recently lucky enough to be selected for GSoC 2014 for SageMath, specifically on improvements to the Sage Android App, both internal and external.
The current agenda during the Community Bonding Period (up to 18th May) is:
I also have my end semester exams intermittently from 5th May to 17th May, so some balancing will also be necessary.
Other than that I’m very thrilled, happy and pleasantly surprised that I actually got this opportunity.
Here’s to a great summer!
by Harald Schilly (noreply@blogger.com) at April 22, 2014 06:10 PM
This week I have been continuing my collaborative work on the matrix elements of the Hamiltonian Constraint in LQG, see the posts:
So I have been reviewing knot theory, especially skein relations, SU(2) recoupling theory and the Temperley-Lieb algebra, and reviewing my old sagemath calculations, which I’ve written up as a paper and will post as an eprint on arXiv.
The calculations I have done this week are based on the paper ‘Matrix Elements of Thiemann’s Hamiltonian Constraint in Loop Quantum Gravity’ by Borissov, De Pietri and Rovelli.
In this paper the authors present an explicit computation of matrix elements of the Hamiltonian constraint operator in non-perturbative quantum gravity. They look at the euclidean term of Thiemann’s version of the constraint and compute its action on trivalent states, for all its natural orderings. The calculation is performed using graphical techniques from the recoupling theory of colored knots and links. They give the matrix elements of the Hamiltonian constraint operator in the spin network basis in compact algebraic form.
Thiemann’s Hamiltonian constraint operator.
The Hamiltonian constraint of general relativity can be written as
The first term on the right-hand side defines the Hamiltonian constraint for gravity with euclidean signature; accordingly, it is called the euclidean Hamiltonian constraint.
For trivalent gauge-invariant vertices, (r, p, q) denotes the colours associated to the edges I, J, K of a vertex v, and we have the matrix elements below, where i = 1, 2 for the two orderings studied. The explicit values of the matrix elements are:
Using Sage Mathematics Software I coded the trivalent vertex matrix elements:
And then checked the resulting calculations against the values tabulated in the paper.
Here we can see the A1 and A2 values corresponding to the tabulated values:
My Sage Mathematics Software code produces precisely the numerical values of A1 and A2 as indicated in the figure above.
For example, for (p,q,r) = (1,0,1) I have A1 = 0.000 and A2 = 0.07433, which matches the tabulated value to 4-digit accuracy.
From the point of view of my own research this is great and I can be confident in the robustness of my coding techniques. I will continue working through papers on the matrix elements of the LQG Hamiltonian constraints, coding up the analyses and working towards coding the full Hamiltonian constraint and curvature reviewed in the posts:
by William Stein (noreply@blogger.com) at April 15, 2014 04:02 PM
At Sage Days 57, I worked on the trac ticket #6637: standardize the interface to TransitiveIdeal and friends. My patch proposes to replace TransitiveIdeal and SearchForest by a new class called RecursivelyEnumeratedSet that would handle every case.
A set S is called recursively enumerable if there is an algorithm that enumerates the members of S. We consider here recursively enumerated sets described by some seeds and a successor function succ. The successor function may have some structure (symmetric, graded, forest) or not. Many kinds of iterators are provided: depth first search, breadth first search or elements of given depth.
Consider the permutations of \(\{1,2,3\}\) and the poset generated by the method permutohedron_succ:
sage: P = Permutations(3)
sage: d = {p:p.permutohedron_succ() for p in P}
sage: S = Poset(d)
sage: S.plot()
The TransitiveIdeal allows one to generate all permutations from the identity permutation, using the method permutohedron_succ as successor function:
sage: succ = attrcall("permutohedron_succ")
sage: seed = [Permutation([1,2,3])]
sage: T = TransitiveIdeal(succ, seed)
sage: list(T)
[[1, 2, 3], [2, 1, 3], [1, 3, 2], [2, 3, 1], [3, 2, 1], [3, 1, 2]]
Remark that the previous ordering is neither breadth first nor depth first. It is a naive search, because it stores the elements to process in a set instead of a queue or a stack.
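In plain Python, the naive search just described might be sketched as follows (an illustration, not Sage's implementation):

```python
# Naive enumeration: known elements live in a set, and set.pop() returns an
# arbitrary element, so the order is neither depth first nor breadth first.
def transitive_ideal(seeds, succ):
    known = set(seeds)
    todo = set(seeds)
    while todo:
        x = todo.pop()
        yield x
        for y in succ(x):
            if y not in known:
                known.add(y)
                todo.add(y)

# Example: generate Z/5Z from 0 under the successor x -> x + 1 (mod 5).
print(sorted(transitive_ideal([0], lambda x: [(x + 1) % 5])))  # [0, 1, 2, 3, 4]
```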
Note that the method permutohedron_succ produces a graded poset. Therefore, one may use the TransitiveIdealGraded class instead:
sage: T = TransitiveIdealGraded(succ, seed)
sage: list(T)
[[1, 2, 3], [2, 1, 3], [1, 3, 2], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
For TransitiveIdealGraded, the enumeration is breadth first search. Although, if you look at the code (version Sage 6.1.1 or earlier), we see that this iterator does not make use of the graded hypothesis at all, because the known set remembers every generated element:
current_level = self._generators
known = set(current_level)
depth = 0
while len(current_level) > 0 and depth <= self._max_depth:
    next_level = set()
    for x in current_level:
        yield x
        for y in self._succ(x):
            if y == None or y in known:
                continue
            next_level.add(y)
            known.add(y)
    current_level = next_level
    depth += 1
return
sage: succ = attrcall("permutohedron_succ")
sage: seed = [Permutation([1..5])]
sage: T = TransitiveIdeal(succ, seed)
sage: %time L = list(T)
CPU times: user 26.6 ms, sys: 1.57 ms, total: 28.2 ms
Wall time: 28.5 ms

sage: seed = [Permutation([1..8])]
sage: T = TransitiveIdeal(succ, seed)
sage: %time L = list(T)
CPU times: user 14.4 s, sys: 141 ms, total: 14.5 s
Wall time: 14.8 s

sage: seed = [Permutation([1..5])]
sage: T = TransitiveIdealGraded(succ, seed)
sage: %time L = list(T)
CPU times: user 25.3 ms, sys: 1.04 ms, total: 26.4 ms
Wall time: 27.4 ms

sage: seed = [Permutation([1..8])]
sage: T = TransitiveIdealGraded(succ, seed)
sage: %time L = list(T)
CPU times: user 14.5 s, sys: 85.8 ms, total: 14.5 s
Wall time: 14.7 s
In conclusion, use TransitiveIdeal for the naive search algorithm and TransitiveIdealGraded for the breadth first search algorithm. Neither class uses the graded hypothesis.
The new class RecursivelyEnumeratedSet provides all iterators for each case. The examples below are for the graded case.
Depth first search iterator:
sage: succ = attrcall("permutohedron_succ")
sage: seed = [Permutation([1..5])]
sage: R = RecursivelyEnumeratedSet(seed, succ, structure='graded')
sage: it_depth = R.depth_first_search_iterator()
sage: [next(it_depth) for _ in range(5)]
[[1, 2, 3, 4, 5], [1, 2, 3, 5, 4], [1, 2, 5, 3, 4], [1, 2, 5, 4, 3], [1, 5, 2, 4, 3]]
Breadth first search iterator:
sage: it_breadth = R.breadth_first_search_iterator()
sage: [next(it_breadth) for _ in range(5)]
[[1, 2, 3, 4, 5], [1, 3, 2, 4, 5], [1, 2, 4, 3, 5], [2, 1, 3, 4, 5], [1, 2, 3, 5, 4]]
Elements of given depth iterator:
sage: list(R.elements_of_depth_iterator(9))
[[5, 4, 2, 3, 1], [4, 5, 3, 2, 1], [5, 3, 4, 2, 1], [5, 4, 3, 1, 2]]
sage: list(R.elements_of_depth_iterator(10))
[[5, 4, 3, 2, 1]]
Levels (a level is a set of elements of the same depth):
sage: R.level(0)
[[1, 2, 3, 4, 5]]
sage: R.level(1)
{[1, 2, 3, 5, 4], [1, 2, 4, 3, 5], [1, 3, 2, 4, 5], [2, 1, 3, 4, 5]}
sage: R.level(2)
{[1, 2, 4, 5, 3], [1, 2, 5, 3, 4], [1, 3, 2, 5, 4], [1, 3, 4, 2, 5], [1, 4, 2, 3, 5], [2, 1, 3, 5, 4], [2, 1, 4, 3, 5], [2, 3, 1, 4, 5], [3, 1, 2, 4, 5]}
sage: R.level(3)
{[1, 2, 5, 4, 3], [1, 3, 4, 5, 2], [1, 3, 5, 2, 4], [1, 4, 2, 5, 3], [1, 4, 3, 2, 5], [1, 5, 2, 3, 4], [2, 1, 4, 5, 3], [2, 1, 5, 3, 4], [2, 3, 1, 5, 4], [2, 3, 4, 1, 5], [2, 4, 1, 3, 5], [3, 1, 2, 5, 4], [3, 1, 4, 2, 5], [3, 2, 1, 4, 5], [4, 1, 2, 3, 5]}
sage: R.level(9)
{[4, 5, 3, 2, 1], [5, 3, 4, 2, 1], [5, 4, 2, 3, 1], [5, 4, 3, 1, 2]}
sage: R.level(10)
{[5, 4, 3, 2, 1]}
We construct a recursively enumerated set with symmetric structure and depth first search for default enumeration algorithm:
sage: succ = lambda a: [(a[0]-1,a[1]), (a[0],a[1]-1), (a[0]+1,a[1]), (a[0],a[1]+1)]
sage: seeds = [(0,0)]
sage: C = RecursivelyEnumeratedSet(seeds, succ, structure='symmetric', algorithm='depth')
sage: C
A recursively enumerated set with a symmetric structure (depth first search)
In this case, depth first search is the default algorithm for iteration:
sage: it_depth = iter(C)
sage: [next(it_depth) for _ in range(10)]
[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8), (0, 9)]
Breadth first search. This algorithm makes use of the symmetric structure and remembers only the last two levels:
sage: it_breadth = C.breadth_first_search_iterator()
sage: [next(it_breadth) for _ in range(10)]
[(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0), (-1, 1), (-2, 0), (0, 2), (2, 0), (-1, -1)]
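The two-level trick works because, for a symmetric successor function, any neighbour of a depth-d element lies at depth d-1, d or d+1: an element at a smaller depth would, by symmetry, pull its neighbour down to a smaller depth too. A plain Python sketch of this idea (illustrative, not Sage's actual implementation):

```python
# Breadth first search for a *symmetric* succ: it suffices to remember the
# current and previous levels to avoid duplicates, instead of every element
# ever generated.
def breadth_first_symmetric(seeds, succ):
    previous, current = set(), set(seeds)
    while current:
        for x in current:
            yield x
        nxt = set()
        for x in current:
            for y in succ(x):
                if y not in previous and y not in current:
                    nxt.add(y)
        previous, current = current, nxt

succ = lambda a: [(a[0]-1, a[1]), (a[0]+1, a[1]), (a[0], a[1]-1), (a[0], a[1]+1)]
it = breadth_first_symmetric([(0, 0)], succ)
first = [next(it) for _ in range(5)]
print(first[0])  # (0, 0)
```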
Levels (elements of given depth):
sage: sorted(C.level(0))
[(0, 0)]
sage: sorted(C.level(1))
[(-1, 0), (0, -1), (0, 1), (1, 0)]
sage: sorted(C.level(2))
[(-2, 0), (-1, -1), (-1, 1), (0, -2), (0, 2), (1, -1), (1, 1), (2, 0)]
We get the same timings as for TransitiveIdeal, but it uses less memory, so it might be able to enumerate bigger sets:
sage: succ = attrcall("permutohedron_succ")
sage: seed = [Permutation([1..5])]
sage: R = RecursivelyEnumeratedSet(seed, succ, structure='graded')
sage: %time L = list(R)
CPU times: user 24.7 ms, sys: 1.33 ms, total: 26.1 ms
Wall time: 26.4 ms

sage: seed = [Permutation([1..8])]
sage: R = RecursivelyEnumeratedSet(seed, succ, structure='graded')
sage: %time L = list(R)
CPU times: user 14.5 s, sys: 70.2 ms, total: 14.5 s
Wall time: 14.6 s
I’m back with more on matroid computation in Sage. Previous installments are here, here, and here. As in the previous post, clicking the “Evaluate” buttons below will execute the code and return the answer. The various computations are again linked, so execute them in the right order.
Today we will look at various ways to ask (and answer) the question “Are these two matroids equal?” There are some subtle aspects to this, since we had to balance efficiency of the code and mathematical interpretation. The result is that we have various ways to compare matroids.
Matroids $M$ and $N$ are isomorphic if there is a bijection $\phi: E(M) \to E(N)$ mapping bases to bases and nonbases to nonbases. In Sage, we check this as follows:
In other words, every matroid in Sage has a method .is_isomorphic() taking another matroid as input and returning True or False. Currently there is no way to extract the actual isomorphism (it is on our wish list), but if you guessed one, you can check its correctness:
We defined $\phi$ using a dictionary: for instance, phi['c'] equals 2. If you defined your map differently (e.g. as a function or a permutation), Sage will probably understand that too.
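For intuition about the definition itself, isomorphism can be checked brute force in plain Python (illustrative only, with made-up helper names; Sage's .is_isomorphic() uses far better algorithms):

```python
from itertools import permutations

# Brute-force illustration of the definition: try every bijection
# phi: E(M) -> E(N) and test whether it maps bases onto bases.
# Exponential in the ground set size, so only feasible for tiny examples.
def is_isomorphic(E1, bases1, E2, bases2):
    if len(E1) != len(E2) or len(bases1) != len(bases2):
        return False
    B2 = {frozenset(b) for b in bases2}
    for image in permutations(E2):
        phi = dict(zip(E1, image))
        if all(frozenset(phi[e] for e in b) in B2 for b in bases1):
            return True
    return False

# U_{1,2} on {'a','b'} is isomorphic to U_{1,2} on {0,1}:
print(is_isomorphic(['a', 'b'], [['a'], ['b']], [0, 1], [[0], [1]]))  # True
```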
Matroids $M$ and $N$ are equal if $E(M) = E(N)$ and the identity map is an isomorphism. We can check for equality as follows:
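Spelled out on bases, that definition amounts to the following (illustrative Python only; as discussed below, Sage's == operator is deliberately more restrictive than this):

```python
# "Matroidal" equality: same ground set, and -- since the identity map must
# be an isomorphism -- the same collection of bases.
def matroids_equal(E1, bases1, E2, bases2):
    return (set(E1) == set(E2)
            and {frozenset(b) for b in bases1} == {frozenset(b) for b in bases2})

print(matroids_equal([1, 2], [[1], [2]], [2, 1], [[2], [1]]))  # True
```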
The standard way to compare two objects in Sage is through the comparison operator ==. For instance,
When writing the matroid code, we chose to interpret the question M == N to mean “Is the internal representation of the matroid $M$ equal to the internal representation of $N$?” This is a very restricted view: the only time M == N will return True is if the two internal representations have the same type (both are of type BasisMatroid, or both are of type LinearMatroid, or …) and the representations themselves are equal. Let’s consider a few examples:
So only matroids $M_1$, $M_2$, and $M_4$ pass the equality test. This is because they are all linear matroids over the field $\mathrm{GF}(2)$, have the same ground set, and the matrices are projectively equivalent: one can be turned into the other using only row operations and column scaling. Matroid $M_3$ is isomorphic to $M_1$ but not equal; matroid $M_5$ is represented over a different field; matroid $M_6$ is represented by a list of bases, and matroid $M_7$ is represented by a list of “circuit closures”.
This notion of equality has consequences for programming. In Python, a set
is a data structure containing at most one copy of each element.
So $S$ actually contains $M_3, M_5, M_6, M_7$, and one copy of $M_1, M_2, M_4$.
This means, unfortunately, that you cannot rely on Python’s default filtering tools for sets if you want only matroidally equal objects, or only isomorphic objects. But those equality tests are way more expensive computationally, whereas in many applications the matroids in a set will be of the same type anyway.
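A toy illustration of the mechanics (class and attribute names hypothetical): Python sets decide membership through __hash__ and __eq__, which is exactly where Sage's restricted notion of matroid equality plugs in:

```python
# Two objects that compare equal (same representation "kind" and same
# ground set) collapse to one set element; a different kind survives.
class FakeMatroid:
    def __init__(self, kind, groundset):
        self.kind = kind
        self.groundset = frozenset(groundset)
    def __eq__(self, other):
        return (self.kind, self.groundset) == (other.kind, other.groundset)
    def __hash__(self):
        return hash((self.kind, self.groundset))

S = {FakeMatroid("basis", "abc"), FakeMatroid("basis", "abc"),
     FakeMatroid("linear", "abc")}
print(len(S))  # 2 -- the two "basis" copies collapse to one element
```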
As mentioned above, each instance of the LinearMatroid class is considered a represented matroid. There are specialized methods for testing projective equivalence and projective isomorphism. Recall that matroids are projectively equivalent (in Sage’s terminology, field-equivalent) if the representation matrices are equal up to row operations and column scaling. So below, matroids $M_1$ and $M_3$ are field-equivalent; matroids $M_1$ and $M_4$ are field-isomorphic; and matroid $M_2$ has an inequivalent representation:
I probably caused a lot of confusion, so feel free to ask questions in the comments below. Also, the documentation for the functions discussed above gives more explanations and examples.
This post is just a bit of informal exploration of the LQG curvature operator which I’ve been doing in preparation for a more formal attack during the next week.
which produces results like those below:
Informally exploring some ideas in sagemath using work already done on length, angle and volume operators.
See posts
This informal work produces reasonable results in terms of magnitudes and form and I’ll refine this over time.
by Vincent Knight (noreply@blogger.com) at April 04, 2014 11:57 AM
import csv
f=open(DATA+'availabilitymatrix','r')
data = [[int(j) for j in row] for row in csv.reader(f)]
f.close()
M = Matrix(data)
by Vincent Knight (noreply@blogger.com) at March 27, 2014 03:35 PM
I've been developing a framework in Sage to work with the kind of dynamic models that my collaborators and I use a lot of the time in population biology and social-science projects. Here's a demo of what it can do:
Today I am presenting the IPython notebook at the meeting of the Sage Paris group. This post gathers what I prepared.
First you can install the ipython notebook in Sage as explained in this previous blog post. If everything works, then you run:
sage -ipython notebook
and this will open a browser.
Create a new notebook and type:
In [1]: 3 + 3
6

In [2]: 2 / 3
0

In [3]: matrix
Traceback (most recent call last):
...
NameError: name 'matrix' is not defined
By default, Sage preparsing is turned off and Sage commands are not known. To turn on the Sage preparsing (thanks to a post of Jason on sage-devel):
%load_ext sage.misc.sage_extension
Since sage-6.2, according to sage-devel, the command is:
%load_ext sage
You now get Sage commands working in ipython:
In [4]: 3 + 4
Out[4]: 7

In [5]: 2 / 3
Out[5]: 2/3

In [6]: type(_)
Out[6]: <type 'sage.rings.rational.Rational'>

In [7]: matrix(3, range(9))
Out[7]:
[0 1 2]
[3 4 5]
[6 7 8]
If the output is too big, click on Out to scroll or hide the output:
In [8]: range(1000)
3D graphics work but open in a new Jmol window:
In [9]: sphere()
Similarly, 2D graphics work but open in a new window:
In [10]: plot(sin(x), (x,0,10))
To create inline matplotlib graphics, the notebook must be started with this command:
sage -ipython notebook --pylab=inline
Then, a matplotlib plot can be drawn inline (example taken from this notebook):
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 3*np.pi, 500)
plt.plot(x, np.sin(x**2))
plt.title('A simple chirp');
Or with:
%load http://matplotlib.org/mpl_examples/showcase/integral_demo.py
According to the previously cited notebook, it seems that the inline mode can also be selected from the notebook using a magic command, but with my version of IPython (0.13.2), I get an error:
In [11]: %matplotlib inline
ERROR: Line magic function `%matplotlib` not found.
Change an input cell into a markdown cell and then you may use latex:
Test $\alpha+\beta+\gamma$
The output can be shown with latex and mathjax using the ipython display function:
from IPython.display import display, Math

def my_show(obj):
    return display(Math(latex(obj)))

y = 1 / (x^2+1)
my_show(y)
Create a new notebook with only one cell. Name it range_10 and save:
In [1]: range(10) Out[1]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
The file range_10.ipynb is saved in the directory. You can also download it from File > Download as > IPython (.ipynb). Here is the content of the file range_10.ipynb:
{ "metadata": { "name": "range_10" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "code", "collapsed": false, "input": [ "range(10)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "pyout", "prompt_number": 1, "text": [ "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]" ] } ], "prompt_number": 1 }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [] } ], "metadata": {} } ] }
An ipynb file is written in JSON format. Below, we use json to open the file range_10.ipynb as a Python dictionary.
sage: s = open('range_10.ipynb','r').read()
sage: import json
sage: D = json.loads(s)
sage: type(D)
dict
sage: D.keys()
[u'nbformat', u'nbformat_minor', u'worksheets', u'metadata']
sage: D
{u'metadata': {u'name': u'range_10'}, u'nbformat': 3, u'nbformat_minor': 0, u'worksheets': [{u'cells': [{u'cell_type': u'code', u'collapsed': False, u'input': [u'range(10)'], u'language': u'python', u'metadata': {}, u'outputs': [{u'output_type': u'pyout', u'prompt_number': 1, u'text': [u'[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]']}], u'prompt_number': 1}, {u'cell_type': u'code', u'collapsed': False, u'input': [], u'language': u'python', u'metadata': {}, u'outputs': []}], u'metadata': {}}]}
Download the file vaucanson.ipynb from the last meeting of Paris Sage Users. You can view the complete demo including pictures of automaton even if you are not able to install vaucanson on your machine.
In a Python file, separate your code with the following line to create cells:
# <codecell>
For example, create the following Python file. Then, import it in the notebook. It will get translated to ipynb format automatically.
# -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>

# <codecell>

%load_ext sage.misc.sage_extension

# <codecell>

matrix(4, range(16))

# <codecell>

factor(2^40-1)
Since release 1.0 of IPython, many conversions from ipynb to other formats are possible (html, latex, slides, markdown, rst, python). Unfortunately, the version of IPython in Sage is still 0.13.2 as of today, but version 1.2.1 will be in sage-6.2.
Google’s Summer of Code is an annual event in which Google will pay a bunch of students to spend the summer writing code for open-source projects.
As last year, Sage is one of the mentoring organizations, and as last year, we are hoping to have one of about 4 students work on Sage’s matroid capabilities. We are looking for an undergraduate or graduate student who
On the ideas page, a list of possible features to work on can be found. Students must produce a detailed proposal including a timeline with milestones. Early interaction with the mentors is important: we want to know you’re serious about this, and able to deliver. The place for this is the sage-gsoc Google group.
If you know an interested student, send her or him to this post. If you are an interested student, study the links above to figure out the requirements, timeline, etc. One important date: applications must be in on
and we’d like to start working on the proposal with you well before then.
by Vincent Knight (noreply@blogger.com) at February 21, 2014 03:58 AM
One of the things I’ve been working on this week is : Moduli Space of Shapes of a Tetrahedron with Faces of Equal Area.
This follows on the work done in the posts:
The space of shapes of a tetrahedron with fixed face areas is naturally a symplectic manifold of real dimension two. This symplectic manifold turns out to be a Kahler manifold and can be parametrized by a single complex coordinate Z given by the cross ratio of four complex numbers obtained by stereographically projecting the unit face normals onto the complex plane.
This post shows how this works in the simplest case of a tetrahedron T whose four face areas are equal. For convenience, the cross-ratio coordinate Z is shifted and rescaled to \(z=(2Z-1)/\sqrt{3}\) so that the regular tetrahedron corresponds to \(z=i\), in which case the upper half-plane is mapped conformally onto the unit disc by \(w=(i-z)/(i+z)\). The equi-area tetrahedron T is then drawn as a function of the unit disc coordinate w.
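A quick numerical sanity check of these maps (plain Python; the sample points and tolerance are arbitrary):

```python
# The Cayley-type map w = (i - z)/(i + z) sends the upper half-plane into
# the unit disc, and the regular-tetrahedron point z = i to the centre w = 0.
def to_disc(z):
    return (1j - z) / (1j + z)

assert abs(to_disc(1j)) < 1e-12             # z = i  ->  w = 0
for z in (0.3 + 0.7j, -2 + 0.01j, 5 + 3j):  # points with Im(z) > 0
    assert abs(to_disc(z)) < 1
```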
Goal 1. Increase resource for Sage: Generate a different longterm revenue stream to support development of Sage, i.e., open source mathematical software. By "different", I mean different than government and foundation grants and donations, which are relatively limited for primarily pure mathematics software development, which is what Sage specializes in. Even in my wildest dreams, it is very unlikely Sage will get more than a million dollars a year in funding (and in practice it gets a lot less); however, a successful commercial product with wide adoption has the potential to generate significantly more than a million dollars a year in revenue -- of course most would go back into the product... but when the product is partly Sage, that's fine. The National Science Foundation (and other donors) have played a major part during the last 8 years in funding Sage, but I think everybody would benefit from another funding source.
Goal 2. Increase the usage of Sage: The number of unique visitors per month to http://sagemath.org grew nicely from 2005 (when I started Sage) until Summer 2011, after which point it has remained fairly constant at 70,000 unique visitors. There is no growth at all: it was 70,332 in Jan 2011, and it was 70,449 last month (Jan 2014), both with a bounce rate of about 50%. A significant obstruction to growth is accessibility, which SMC helps to address for certain users (last month the SMC website had 17,700 unique visitors with a bounce rate of about 30%).
Here's an actual email I received from somebody literally as I was writing this, which I think illustrates how SMC addresses the second goal:
Hey William,
Today I stopped by cloud.sagemath.com because
I wanted to do some computation with sage, and
cloud is announced in a big way on sagemath.org
This is after a lengthy hiatus from computing
with sage ( maybe a year ).
Using cloud.sagemath.com completely blew my
mind. At first I did not really understand
why sagenb was ditched after all the work that
went into it. But man, cloud is really a
pleasure to use !
I just wanted to share the joy :)
Thanks for all that you do !
There is, however, a large amount of interesting backend code, which is really the "cloud" part of SMC, and which we do not intend to release as open source. We do intend to sell licenses (with support) for the complete package, when it is sufficiently polished, since many organizations want to run their own private SMC servers, mainly for confidentiality reasons.
Goal 2 above mainly impacts how we market SMC. However, it's easy to completely ignore Sage and still get a lot of value out of SMC. I just glanced at what people are doing as I write this, and the result seems pretty typical: latex'ing documents, some Sage worksheets, some IPython notebooks, editing a perl script.
It's important to understand how SMC is different than other approaches to cloud computing. It's designed to make certain things very easy, but they are quite different things than what "traditional" cloud stacks like OpenStack are designed to make easy. SMC is supposed to make the following easy:
The above design goals are useful for certain target audiences, e.g., people doing Sage/Python/etc. development, teachers and students in courses that make use of Sage/Python/etc., collaborative math research projects. SMC is designed so that a large number of people can make simultaneous small use of ever-expanding resources. SMC should also fully support the "social networks" that form in this context. At the same time, it's critical that SMC have excellent uptime and availability (and offsite backups, just in case), so that people can trust it. By trust, I don't mean so much in the sense of "trust it with proprietary info", but in the sense of "trust it to not just lose all my data and to be there when I'm giving a talk/teaching a class/need to do homework/etc.".
However, exactly the above design goals are at odds with some of the goals of large-scale scientific/supercomputing. The following are not design goals of SMC:
What happens in practice with SMC is that people run smaller-scale computations on SMC (say things that just take a few cores), and when they want to run something bigger, they ssh from SMC to other resources they have (e.g., a supercomputer account) and launch computations there. All project collaborators can see what anybody types in a terminal, which can be helpful when working with remote compute clusters.
Anyway, I hope this helps to clarify what exactly SMC actually is.
by William Stein (noreply@blogger.com) at February 12, 2014 06:41 AM
Today there was a meeting of the ANR DynA3S project. Following a talk by Brigitte Vallée, I wrote a few lines of Sage code to study a function she introduced. This function is related to understanding the density of the subdiagonal terms during the execution of the LLL algorithm.
First, here is my file: brigitte.sage.
To use this file, first load it into Sage with the following command. From the command line this works fine; in the Sage notebook I am no longer sure whether the load command still allows it (?):
sage: %runfile brigitte.sage # not tested
Several orbits must be generated in order to see anything, because the orbits of the function generally have length 1, 2 or 3 before the stopping condition is reached. Here we generate 10000 orbits (the initial points are chosen uniformly at random in \([0,1]\times[-0.5, 0.5]\)). We plot the last points of the orbits:
sage: D = plusieurs_orbit(10000)
Note: la plus longue orbite est de taille 3
sage: A = points(D[0], color='red', legend_label='derniers')
sage: B = points(D[1], color='blue', legend_label='avant derniers')
sage: C = points(D[2], color='black', legend_label='2e avant derniers')
sage: G = A + B + C
sage: G.axes_labels(("$x$", r"$\nu$"))
sage: title = r"$(x,\nu) \mapsto (\frac{x}{(x+\nu^2)^2},\frac{\nu}{(x+\nu^2)})$"
sage: G.show(title=title, xmax=2)
A shortcut that produces roughly the same picture as above:
sage: draw_plusieurs_orbites(10000).show(xmax=2)
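For readers without the brigitte.sage file, the map itself is easy to iterate in plain Python; the sketch below simply caps the number of steps rather than reproducing the stopping condition used in the file (which is not specified here):

```python
def step(x, nu):
    """One application of the map (x, nu) -> (x/(x+nu^2)^2, nu/(x+nu^2))."""
    d = x + nu**2
    return x / d**2, nu / d

def orbit(x, nu, max_steps=3):
    """Iterate the map from (x, nu); the cap on the number of steps is a
    hypothetical stand-in for the real stopping condition."""
    pts = [(x, nu)]
    for _ in range(max_steps):
        x, nu = step(x, nu)
        pts.append((x, nu))
    return pts
```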
We draw superposed histograms of the density of these points once projected onto the \(\nu\) axis:
sage: histogrammes_nu(10000, nbox=10)
The picture suggests that the non-uniform density comes simply from the points \((x,\nu)\) with \(x\leq 1\).
We draw superposed histograms of the density of these points once projected onto the \(x\) axis (with colours according to the value of \(\nu\)):
sage: histogrammes_x(30000, nbox=5, ymax=1500, xmax=8)
The picture suggests that the density does not depend on \(\nu\) for \(x\geq 1\).
This week I have been reviewing the new spinfoam vertex in 4d models of quantum gravity. This was discussed in the recent posts:
In this post I explore the large spins asymptotic properties of the overlap coefficients:
characterizing the holomorphic intertwiners in the usual real basis. This consists of the normalization coefficient times the shifted Jacobi polynomial.
In the case n = 4, I can study the asymptotics of the shifted Jacobi polynomials in the limit ji → λji, λ → ∞. A convenient integral representation for the shifted Jacobi polynomials is given by a contour integral:
This leads to the result that:
This formula relates the two very different descriptions of the phase space of shapes of a classical tetrahedron – the real one in terms of the k, φ parameters and the complex one in terms of the cross-ratio
coordinate Z. As is clear from this formula, the relation between the two descriptions is non-trivial.
In this post I have only worked with the simplest case of this relation, when all areas are equal. In this 'equi-area' case, where all four representations are equal, ji = j, ∀i = 1, 2, 3, 4, as described in the post Holomorphic Factorization for a Quantum Tetrahedron, the overlap function is:
Using sagemath I am able to evaluate the overlap coefficients for various values of j and the cross-ratios z.
Here I plot the modulus of the equi-area state Ck, for j = 20, as a function of the spin label k, for the value of the cross-ratio Z = exp(iπ/3) that corresponds to the equilateral tetrahedron. The distribution looks Gaussian, and the maximum is reached at kc = 2j/√3 ∼ 23, which agrees with an asymptotic analysis.
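The quoted peak position is simple arithmetic; a quick check of the asymptotic formula kc = 2j/√3 for j = 20:

```python
import math

j = 20
k_c = 2 * j / math.sqrt(3)   # peak of the Gaussian-looking distribution
# k_c is about 23.09, so the maximum sits at k close to 23 for j = 20
```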
Here I plot the modulus of the equi-area case state Ck for various j values as a function of the spin label k, for the value of the cross-ratio Z = exp(iπ/3) that corresponds to the equilateral tetrahedron.
Here I have plotted the modulus of the j = 20 equi-area state Ck for increasing cross-ratios Z = 0.1i, 0.8i, 1.8i. The Gaussian distribution progressively moves its peak from 0 to 2j. This illustrates how changing the value of Z affects the semi-classical geometry of the tetrahedron.
Conclusions
In this post I have studied a holomorphic basis for the Hilbert space Hj1,…,jn of SU(2) intertwiners. In particular I have looked at the case of 4-valent intertwiners that can be interpreted as quantum states of a quantum tetrahedron. The formula
gives the inner product in Hj1,…,jn in terms of a holomorphic integral over the space of ‘shapes’ parametrized by the cross-ratio coordinates Zi. In the tetrahedral n = 4 case there is a single cross-ratio Z. The n=4 holomorphic intertwiners parametrized by a single cross-ratio variable Z are coherent states in that they form an over-complete basis of the Hilbert space of intertwiners and are semi-classical states peaked on the geometry of a classical tetrahedron as shown by my numerical studies. The new holomorphic intertwiners are related to the standard spin basis of intertwiners that are usually used in loop quantum gravity and spin foam models, and the change of basis coefficients are given by Jacobi polynomials.
In the canonical framework of loop quantum gravity, spin network states of quantum geometry are labeled by a graph as well as by SU(2) representations on the graph’s edges e and intertwiners on its vertices v. It is now possible to put holomorphic intertwiners at the vertices of the graph, which introduces the new spin networks labeled by representations je and cross-ratios Zv. Since each holomorphic intertwiner can be associated to a classical tetrahedron, we can interpret these new spin network states as discrete geometries. In particular, geometrical observables such as the volume can be expected to be peaked on their classical values as shown in my numerical studies for j=20. This should be of great help when looking at the dynamics of the spin network states and when studying how they are coarse-grained and refined.
by Vincent Knight (noreply@blogger.com) at January 23, 2014 09:59 AM
In this post, a theory of quantum gravity called Causal Dynamical Triangulation (CDT) is explored. The 1+1 dimensional universe, the simplest case of an empty universe of one spatial and one temporal dimension is simulated in sagemath. This post explains CDT in general and presents and explains the results of the 1+1 dimensional simulations.
Understanding gravity at the fundamental level is key to a deeper understanding of the workings of the universe. The problem of unifying Einstein’s theory of General Relativity with Quantum Field Theory is an unsolved problem at the heart of understanding how gravity works at the fundamental level. Various attempts have been made so far at solving the problem. Such attempts include String Theory, Loop Quantum Gravity, Horava-Lifshitz gravity, Causal Dynamical Triangulation as well as others.
Causal Dynamical Triangulation is an approach to quantum gravity that recovers classical spacetime at large scales by enforcing causality at small scales. CDT combines quantum physics with general relativity in a Feynman sum over geometries and converts the sum into a discrete statistical physics problem. I solve this problem using a Monte Carlo simulation to compute the spatial fluctuations of an empty universe with one space and one time dimension. The results compare favourably with theory and provide an accessible but detailed introduction to quantum gravity via a simulation that runs on a computer.
In order to use the CDT approach, the Einstein-Hilbert action of General Relativity and the path integral approach to Quantum Field Theory are both important. I’ll begin by introducing both concepts as well as the metric and the Einstein Field equations. In this post I attempt, at least briefly, to explain CDT in general and explain what I have found with my simulation.
Quantum gravity
Theories of quantum gravity attempt to unify quantum theory with general relativity, the theory of classical gravity as spacetime curvature. Superstring theory tries to unify gravity with the electromagnetic and weak and strong nuclear interactions, but it requires supersymmetry and higher dimensions, which are as yet unobserved. It proposes that elementary particles are vibrational modes of strings of the Planck length and classical spacetime is a coherent oscillation of graviton modes.
Loop quantum gravity does not attempt to unify gravity with the other forces, but it directly merges quantum theory and general relativity to conclude that space is granular at the Planck scale. It proposes that space is a superposition of networks of quantised loops and spacetime is a discrete history of such networks.
Causal Dynamical Triangulation is a conservative approach to quantum gravity that constructs spacetime from triangular-like building blocks by gluing their time-like edges in the same direction. The microscopic causality inherent in the resulting spacetime foliation ensures macroscopic space and time as we know it. Despite the discrete foundation, CDT does not necessarily imply that spacetime itself is discrete. It merely grows a combinatorial spacetime from the building blocks according to a propagator fashioned from a sum-over-histories superposition. Dynamically generating a classical universe from quantum fluctuations has been accomplished. CDT recovers classical gravity at large scales, but predicts that the number of dimensions drops continuously from 4 to 2 at small scales. Other approaches to quantum gravity have also predicted similar dimensional reductions from 4 to 2 near the Planck scale.
Classical gravity
In the theory of relativity, space and time are on equal footing and mix when boosting between reference frames in relative motion. In the flat Minkowski spacetime of special relativity, the invariant proper space dr and time ds between two nearby events follow from the generalised Pythagorean theorem
which involves the difference in the squares of the relative space dx and time dt between the events in some reference frame. This difference prevents causality violation, because light cones defined by dr = 0 or dt = ±dx partition spacetime into invariant sets of past and future at each event. Free particles follow straight trajectories or world-lines x[t] of stationary proper time. In the curved spacetime of general relativity, the separation between two events follows from
where the metric g encodes the spacetime geometry. Free particles follow curved world lines of stationary proper time. The vacuum gravitational field equations can be derived from the Einstein-Hilbert scalar action
where the Gaussian curvature is half the Ricci scalar curvature, K = R/2, and Lambda is the cosmological constant. By diagonalizing the metric, the invariant area element
where g = det g. Demanding that the action be stationary with respect to the metric, dS/dg = 0, implies the gravitational field equations
where the Ricci tensor curvature is the variation of the Ricci scalar with the metric, R = dR/dg. The Einstein curvature G is proportional to the cosmological constant, which can be interpreted as the vacuum energy density.
Quantum mechanics
In classical mechanics, the action
is the cumulative difference between a particle’s kinetic energy T and potential energy V[x]. A particle of mass m follows a worldline x[t] of stationary action. Demanding that the action be stationary with respect to the worldline, dS/dx = 0, implies Newton’s equation
In quantum mechanics, a particle follows all worldlines. Along each worldline, it accumulates a complex amplitude whose modulus is unity and whose phase is the classical action S[x] in units of the quantum of action h. The Feynman propagator, or amplitude to transition from place a to place b, is the sum-over-histories superposition
where Dx denotes the integration measure. The corresponding probability:
is the absolute square of the amplitude. If the wave function is the amplitude to be at a place b, and the kinetic energy
then the path integral between infinitesimally separated places implies the nonrelativistic Schrodinger wave equation
In quantum gravity, the corresponding sum is over all spacetime geometries and the quantum phase of each geometry is the Einstein-Hilbert action. The probability amplitude to transition from one spatial geometry to another is
is the probability amplitude of a particular spatial geometry, then this path integral implies the timeless Wheeler-DeWitt equation:
Gauss–Bonnet theorem
In 1 + 1 = 2 dimensions, the Gauss–Bonnet theorem dramatically simplifies the Einstein-Hilbert action by relating the total curvature of an orientable closed surface to a topological invariant. The curvature of a polyhedron is concentrated at its corners where the deficit angle
where Kv is the discrete Gaussian curvature at vertex v, and av is the area closer to that vertex than any other. The Gaussian curvature of a circle of radius r is the reciprocal of its radius 1/r. The Gaussian curvature at a point on a surface is the product of the corresponding minimum and maximum sectional curvatures. Hence, the total curvature of a sphere
like the topologically equivalent cube. More generally,
where X is the surface’s Euler characteristic and the genus G is its number of holes. For a sphere G = 0, and for a torus G = 1. Total curvature can only change discretely and only by changing the number of holes.
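The discrete Gauss–Bonnet statement can be checked directly on the cube mentioned above; a small sketch (the helper name is mine):

```python
import math

def total_curvature(vertex_angle_sums):
    """Total curvature of a polyhedron: the sum over vertices of the
    deficit angle 2*pi minus the angle sum at that vertex."""
    return sum(2 * math.pi - s for s in vertex_angle_sums)

# A cube has 8 vertices, with three right angles (3*pi/2) meeting at each,
# so each deficit angle is pi/2 and the total is 4*pi = 2*pi*X with X = 2,
# matching the topologically equivalent sphere.
cube_curvature = total_curvature([3 * math.pi / 2] * 8)
```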
Wick rotation
The sum over histories path integral is difficult to evaluate because of the oscillatory nature of its integrand. A Wick rotation converts this difficult problem in Minkowski spacetime to a simpler one in Euclidean space by introducing an imaginary time coordinate. For example,
One motivation for the Wick rotation comes from complex analysis, where the integral of an analytic function f[z] around a closed curve in the complex plane always vanishes. If the function also decreases sufficiently rapidly at infinity, then around the contour C,
and so
which effectively rotates the integration 90° in the complex plane. Multiplying a complex number by i similarly rotates the number 90° in the complex plane. The Wick rotation maps the line element by
and the area element by
This gives the Einstein-Hilbert action by
and the probability amplitude by
This Wick rotation converts the Feynman phase factor to the Boltzmann weight thereby connecting quantum and statistical mechanics, where toroidal boundary conditions and a positive cosmological constant Lambda > 0 ensure the negativity of the Euclidean action Se < 0.
Regge calculus
Triangulation can simplify the continuum to the discrete. The equilateral Euclidean triangle has height
and area
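The height h = (√3/2)ℓ and area A = (√3/4)ℓ² of the equilateral triangle can be encoded directly; a minimal sketch with hypothetical helper names:

```python
import math

def triangle_height(l):
    """Height of an equilateral Euclidean triangle of edge length l."""
    return math.sqrt(3) / 2 * l

def triangle_area(l):
    """Area of an equilateral Euclidean triangle of edge length l."""
    return math.sqrt(3) / 4 * l**2
```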
Regge calculus approximates curved spacetime by flat Minkowski triangles (or simplexes in higher dimensions) whose edge lengths ℓ may ultimately be shrunk to zero to recover a continuum theory. Curvature is concentrated in the deficit angles at the vertices. Dynamical Triangulation uses only equilateral triangles, but incorporates both positive and negative curvature by suitably gluing the triangles together. Causal Dynamical Triangulation uses only dynamical triangulations with a sliced or foliated structure, with the time-like edges glued in the same direction.
Local light cones match to enforce global causality and preclude closed time-like curves. Regge calculus and the Gauss–Bonnet theorem dramatically simplify the Wick-rotated Euclidean action from
to
where N is the number of triangles. Assuming periodic boundary conditions in space and time, the model universe is toroidal with genus G = 1, so the Euler characteristic X = 2(1 - G) = 0 and the action
where the rescaled cosmological constant
The integral in the amplitude becomes the sums
where the number of triangles is twice the number of vertices, N = 2Nv,
The Simulation
Monte Carlo analysis
To analyse the statistical physics system defined by the effective partition function, take a random walk through triangulation space, sampling different geometries. The 1-2 move and its inverse 2-1 are ergodic triangulation rearrangements in 1+1 dimensions (these Pachner moves are 2-2 and 1-3 in 2d). They are special cases of the Alexander moves of combinatorial topology. The move M splits a vertex in two at the same time slice and adds two triangles, while its inverse M⁻¹ fuses two vertices at the same time slice and deletes two triangles. Both preserve the foliation of the spacetime. Monte Carlo sampling of the triangulations leads to a detailed balance in which moves equilibrate inverse moves. At equilibrium, the probability to be in a specific labelled triangulation of Nv + 1 vertices satisfies
After choosing to make a move, randomly select a vertex from Nv vertices, and then randomly split one of np past time-like edges and one of nf future time-like edges. Hence the move transition probability
After choosing to make an inverse move, fusing the split vertices is the only way to return to the original triangulation. Hence the inverse move transition probability
By the effective partition function, the probability of a labelled triangulation with Nv vertices is the weighted Boltzmann factor
The probability of a move is a non-unique choice. In my simulations, I choose
To satisfy the detailed balance, the probability of an inverse move is therefore
Other choices may improve or worsen computational efficiency. In practice, a Metropolis algorithm accepts a rearrangement if its probability is greater than a uniformly distributed pseudo-random number between 0 and 1. For example, if P[M] = 0.01, then the move is unlikely to be accepted.
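The acceptance test itself is one line; a hedged sketch (the helper name is mine, and the acceptance probability is whatever the detailed-balance formulas above prescribe for the proposed move or inverse move):

```python
import random

def metropolis_accept(probability, rng=random.random):
    """Metropolis test: accept the proposed (anti)move iff a uniform
    draw in [0, 1) falls below its acceptance probability."""
    return rng() < probability

# metropolis_accept(0.01) succeeds only about 1% of the time;
# a probability >= 1 is always accepted, and 0 is always rejected.
```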
In the initial spacetime configuration we have the universe before any move or antimove as shown below
In this simulation, I only looked at the 1+1 dimensional universe, which is decomposed into 2 dimensional simplices, each with two time-like edges and one space-like edge. The data structure stores this kind of information, which turns out
to be very useful when implementing the simulation. The up pointing triangles have their spatial edges on a past time slice and the down pointing triangles have their spatial edges on a future time slice. The time-like edges are always future pointing. Periodic boundary conditions were chosen: the final time-slice and the first time-slice are connected, so that every vertex of the beginning time-slice is identified with a vertex on the final time-slice. The toroidal topology S1 x S1 was used to achieve the periodic boundary conditions.
In my simulation, I used a data structure that stores certain information about each triangle in the triangulation. This information is stored in an array. In this array, the following information
about the triangulation is stored: type of triangle (that is, if it is up pointing or down pointing), the time slice which it is located, the vertices as well as the neighbors of the triangles. Using a labeling scheme I was able to store information about the triangles in a triangulation. Each triangle was given a key starting from 0 to n-1. The type of the triangle (which is either type I or type II) was also stored. The neighbors for the different triangles were also stored. In general, the array structure for any triangle takes the form Tn = [type; time; p1; p2; p3; n1; n2; n3] where p1; p2; p3 are the vertices of the triangle and n1; n2; n3 are the neighbor of vertex 1, neighbor of vertex 2 and neighbor of vertex 3 respectively. This entire structure is composed strictly of integers. The integer assignments are in linewith the idea of reducing the problem to a counting problem. This data structure is then taken advantage of to do the combinatorial moves to split a vertex and add two triangles and antimoves to remove two triangles. The moves are done by randomly picking a vertex and splitting it, adding two triangles in the gap left behind where ever the vertex was split. The antimoves involve randomly picking vertices and deleting the triangles associated with the vertex. This is the same thing as doing a random walk through space-time. When the moves and antimoves are done, the arrays are repeatedly updated for every move and antimove. The universe is then made to evolve by repeatedly doing moves and antimoves rapidly and the size of the universe is measured every time over every 100 such combinatorial moves.
Initial runs of the simulation give results such as those below:
In a further post I will be exploring the critical value of the reduced cosmological constant.
This week I have been studying and working on several things. Firstly I have been reading a long paper on ‘Holomorphic Factorization for a Quantum Tetrahedron’ which I’ll review in my next post. This is quite a sophisticated paper with some quite high level mathematics so I’ve taken my time with it.
Then I have been upgrading my work on the 3d toy model, including the presentation of the equilateral tetrahedron and hyperplanes, as seen in the posts:
The other thing I have been working on is 1+1 dimensional causal dynamical triangulations, starting from two main references (in addition to the Ambjørn and Loll papers): Quantum gravity on a laptop: 1+1 dimensional causal dynamical triangulation by Norman S. Israel and John F. Lindner, and Two Dimensional Causal Dynamical Triangulation by Norman S. Israel. So far I have set up a rough framework including initial conditions, detailed balance for the Monte Carlo analysis and initial graphical work.
What I want to be able to do is independently construct a working CDT code and build it up from 2d to 3d and finally 4d. For the 1+1 CDT code I would be aiming to be able to independently construct simulated universes which exhibit quantum fluctuations as shown below:
by Vincent Knight (noreply@blogger.com) at December 19, 2013 11:49 PM
by William Stein (noreply@blogger.com) at December 16, 2013 09:27 AM
In a number of recent posts including:
I have been studying the ‘time-fixed’ quantum tetrahedron in which a quantum tetrahedron is used to model the evolution of a state a into state b as shown below:
In this toy model the quantum tetrahedron evolves from a flat shape:
via an equilateral quantum tetrahedron
towards a long stretched out quantum tetrahedron in the limit of large T.
This can be displayed on a phase space diagram,
Where “Minkowski vacuum states” are the states which minimize the energy.
Using sagemath I am able to model the quantum tetrahedron during this evolution by varying the length of c, which is used to measure the time variable T:
By using Vpython I am able to model this same system, as shown in these animated gifs; the label in each diagram indicates the value of T.
An animated gif showing the edge a
An animated gif showing a different view of the quantum tetrahedron:
Analyzing the {6j} kernel
An important object in the modelling of the quantum tetrahedron is the Wigner {6j} symbol. As we saw in the post Physical boundary state for the quantum tetrahedron by Livine and Speziale, the quantum dynamics can be studied, following Ponzano-Regge, by associating with the tetrahedron the amplitude
Using sagemath I was able to evaluate the value of this in some extremal cases: when b=0 and b=2j.
Case b=0
Case b=2j
In the next post I will be looking at the mathematics behind the Wigner{6j} symbol in more detail.
by William Stein (noreply@blogger.com) at December 10, 2013 02:24 PM
I’ll continue my exploration of Sage’s matroid capabilities (see also here and here). This time I’ll look at extensions within subclasses of matroids, namely representable matroids. Before we get started, let me remind you of the reference manual, in which the capabilities of Sage’s matroid package are extensively documented.
Once again, the code examples can be run, as well as edited. The cells are linked together, so execute them in order.
A linear matroid is easily defined from a representation matrix:
Ok, so this matroid is over the field $\mathbb{Q}$. What if I want a different field? There are two places to specify that, either in the Matroid() or in the Matrix() function:
Even more concise, we can squeeze the matrix definition into the matroid definition. The second line prints the representation, along with column labels.
Like last time, let’s start by generating all linear extensions:
Note that, different from last time, we immediately get a list of matroids, instead of a generator. That might change in the future, though! Let’s examine our list of extensions a little.
There are a few options built into the method to restrict the output. First, we can request only simple extensions:
Second, we can request only extensions that are spanned by a specific subset $F$:
Our third option deserves its own heading.
A while ago, I started writing about partial fields. Sage has support for them built in, with some caveats: fields of fractions are very unreliable (for hashing reasons). This will be fixed in some future release of Sage; see this bug report. We will use a partial field that's safe, the dyadic partial field. For a number of others, you can use the product ring code I listed here (with a more advanced version, supporting loading and saving, in a forthcoming technical report that's currently on hold on arXiv.org).
The key to the world of partial fields in Sage is the set of fundamental elements: given a partial field $\mathbb{P} = (R, G)$, the set of fundamental elements is $\{1\} \cup \{x \in G : 1 - x \in G\}$. These can be used to generate $\mathbb{P}$-matrices (I'll talk about the theory some other time) as follows:
We can test whether each of these is a valid dyadic matroid, as follows:
Sometimes you may not want to see the matroids, but rather just the new vectors that define the extensions. Sage can give you those!
Read this as follows: if the chain is $\{a: 1, b: 2, d:1\}$, the new column is obtained by adding $1$ times the column labeled by $a$, $2$ times the column labeled by $b$, and $1$ times the column labeled by $d$. The advantage of this representation, rather than just a vector, is that it is independent of row operations on the representation matrix.
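The chain-to-column expansion described above can be illustrated in plain Python (a hypothetical 2-row representation; Sage's own method does this internally):

```python
def chain_to_column(chain, columns):
    """Expand a chain {label: coefficient} into the new column it defines:
    the sum of coefficient * (column labelled by label)."""
    n = len(next(iter(columns.values())))
    new = [0] * n
    for label, coeff in chain.items():
        col = columns[label]
        for i in range(n):
            new[i] += coeff * col[i]
    return new

# Hypothetical representation with columns labelled a, b, d:
columns = {'a': [1, 0], 'b': [0, 1], 'd': [1, 1]}
new_col = chain_to_column({'a': 1, 'b': 2, 'd': 1}, columns)  # -> [2, 3]
```

Because the chain is expressed relative to the labelled columns, the description survives row operations on the representation matrix, which a bare vector would not.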
To convert a chain, or just a column vector, into an extension, use the linear_extension method:
To get all linear matroids over a specific field or partial field, we can easily modify the code from the last post. Two caveats:
And this prints the counts (in a slightly different form from last time: each line lists the number of matroids of a fixed size, broken down by rank):
It is known that the large-volume limit of a quantum tetrahedron is a quantum harmonic oscillator. In particular the volume operator of a quantum tetrahedron is, in the sector of large eigenvalues, accurately described by a quantum harmonic oscillator.
Using Vpython, it is possible to visualise the quantum tetrahedron as a semi classical quantum simple harmonic oscillator. The implementation of this is shown below;
The quantum tetrahedron as a Quantum harmonic oscillator .
Click to view animated gif
Review of the basic mathematical structure of a quantum tetrahedron
A quantum tetrahedron consists of four angular momenta Ji, where i = {1, 2, 3, 4}, representing its faces and coupling to a vanishing total angular momentum. This means that the Hilbert space consists of all states fulfilling:
j1 + j2 + j3 + j4 = 0
In the coupling scheme, both pairs j1, j2 and j3, j4 couple first to two irreducible SU(2) representations of dimension 2k+1 each, which are then added to give a singlet state.
The quantum number k ranges from kmin to kmax in integer steps with:
kmin = max(|j1-j2|,|j3 -j4|)
kmax = min(j1+j2,j3 +j4)
The total dimension of the Hilbert space is given by:
d = kmax -kmin + 1.
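Putting the three formulas together, the dimension count can be sketched as a small Python function (the name is mine):

```python
def intertwiner_dim(j1, j2, j3, j4):
    """Dimension d = kmax - kmin + 1 of the Hilbert space of a quantum
    tetrahedron with face spins j1..j4 (zero if the spins cannot couple)."""
    kmin = max(abs(j1 - j2), abs(j3 - j4))
    kmax = min(j1 + j2, j3 + j4)
    return int(kmax - kmin) + 1 if kmax >= kmin else 0

# Monochromatic tetrahedron j_i = 1: k runs over 0, 1, 2, so d = 3.
```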
The volume operator can be formulated as:
where the operators
represent the faces of the tetrahedron with
and y being the Immirzi parameter.
It is useful to use the operator
which, in the basis of the states can be represented as
with
and where
is the area of a triangle with edges a, b, c expressed via Heron's formula.
These choices mean that the matrix representation of Q is real and antisymmetric and the spectrum of Q consists for even d of pairs of eigenvalues q and -q differing in sign. This makes it much easier to find the eigenvalues as I found in Numerical work with sage 7.
The large volume limit of Q is found to be given by:
The eigenstates of Q are just the well-known wave functions diagonalizing the harmonic oscillator in real-space representation:
For a monochromatic quantum tetrahedron we can obtain closed analytical expressions:
Below is shown a sagemath program which displays the wavefunction of a Quantum simple harmonic oscillator using Hermite polynomials Hn:
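As a plain-Python counterpart to that Sage program (this is my own sketch, not the author's code), the Hermite recurrence and the oscillator eigenfunctions can be written as:

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) from the recurrence
    H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

def psi(n, x):
    """Normalised harmonic-oscillator eigenfunction psi_n(x), in units
    with m = omega = hbar = 1."""
    norm = 1.0 / math.sqrt(2**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermite(n, x) * math.exp(-x * x / 2)
```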
This is a really important step in the development of Loop Quantum Gravity, because now we have a quantum field theory of geometry. It leads from a combinatorial or simplicial description of geometry, through spin foams and the quantum tetrahedron, to many-particle states of quantum tetrahedra and the emergence of spacetime as a Bose-Einstein condensate.
f(x) = (x + 2) / (x ^ 2 + 3 * x + 2) # Define the function
discontinuity = -1 # The above function has two discontinuities, this one I don't want to plot
hole = -2 # The hole described by Patrick Honner
def make_list_for_plot(f, use_floats=False, zoom_level=10^7, points=1001):
    z = zoom_level
    xmin = hole - 10/z  # Lower bound for the plot
    xmax = min(hole + 10/z, discontinuity - 1/10)  # Upper bound: stop before the second (messy) discontinuity
    x_vals = srange(start=xmin, end=xmax, step=(xmax - xmin)/(points - 1), universe=QQ, check=True, include_endpoint=True)
    if use_floats:
        # Cast all QQ numbers to floating point numbers using the n() function
        x_vals = [n(x) for x in x_vals]
    lst = []
    for x in x_vals:
        # Robert originally had a try/except statement here to pick up ANY
        # discontinuities; skipping the two known bad points is not as general
        # but I thought it was a bit fairer
        if x != hole and x != discontinuity:
            lst.append((x, f(x)))
    return lst
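Why the exact rational x-values matter: near the hole both the numerator and the denominator of f are tiny, so floating-point cancellation makes the quotient noisy, while exact arithmetic reproduces 1/(x + 1) exactly. A plain-Python sketch of the same effect, using the standard library's `Fraction` in place of Sage's QQ:

```python
from fractions import Fraction

def f(x):
    # f(x) = (x + 2) / (x^2 + 3x + 2) = 1/(x + 1) away from the hole at x = -2
    return (x + 2) / (x * x + 3 * x + 2)

hole = -2
z = 10 ** 7                       # zoom level: we sample within 10/z of the hole
x_exact = Fraction(hole) + Fraction(1, z)   # exact rational just right of the hole
x_float = float(x_exact)

print(f(x_exact) == Fraction(1, 1) / (x_exact + 1))  # exact arithmetic -> True
print(f(x_float))   # float arithmetic: a noisy value near 1/(x + 1), about -1
```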
exact_arithmetic = make_list_for_plot(f)
p = list_plot(exact_arithmetic, plotjoined=True) # Plot f
p += point([hole, -1], color='red', size=30) # Add a point
show(p)
float_arithmetic = make_list_for_plot(f, use_floats=True)
p = list_plot(float_arithmetic, plotjoined=True) # Plot f
p += point([hole, -1], color='red', size=30) # Add a point
show(p)
(Note: Robert originally had a try/except statement to pick up ANY discontinuities; skipping the two known bad points instead is not as general, but I thought it was in a way a 'fairer' way of doing things. Ultimately though it's very possible and easy to get an error-less plot.)

by Vincent Knight (noreply@blogger.com) at December 01, 2013 07:57 AM
by Vincent Knight (noreply@blogger.com) at November 23, 2013 01:55 AM
f = sqrt(x) + 1 / x
complex_plot(sqrt, (-5, 5), (-5, 5))
def complex_point_plot(pts):
    """
    Return a plot of a list of complex points.

    Arguments: pts (a list of complex numbers)
    Outputs: a list plot of the points in the complex plane
    """
    return list_plot([(real(i), imag(i)) for i in pts],
                     axes_labels=['Re($z$)', 'Im($z$)'], size=30)
complex_point_plot([3*I, e^(I*pi), e^(I*3*pi/4), 4-4*I])
pts = [e^(I*(theta)) for theta in srange(0, 2*pi, .1)]
complex_point_plot(pts)
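The same circle of points can be generated in plain Python with the standard library's `cmath` (a sketch; Sage's symbolic e^(I*theta) is replaced by `cmath.exp`):

```python
import cmath

# Points e^(i*theta) for theta = 0, 0.1, ..., 6.2, as in the srange loop above.
pts = [cmath.exp(1j * (k * 0.1)) for k in range(63)]

# Every point has modulus 1, so the list plot traces out the unit circle.
print(all(abs(abs(z) - 1.0) < 1e-12 for z in pts))   # -> True
```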
by Vincent Knight (noreply@blogger.com) at November 16, 2013 08:07 AM