- I pushed the code to GitHub so that Allison and Tom could review it, and Tom was quite excited by the idea of having it.
- I added a startup script in IPython, to monkey-patch it to use my code for introspection.
- Having never written a C extension, I read the Python docs on writing C extensions, and played around a little with pycparser, but it didn't seem to work too well with Python's sources.
- I later went to the New York Public Library to see a copy of the Declaration of Independence in Thomas Jefferson’s hand.
- I got distracted trying to add a hack to Hacker School's blaggregator, to enable posting to different channels on Zulip.
- I learnt a few things about Django. Commented-out references to views that don't exist break templating. Formsets didn't seem very convenient for editing data while showing the user only partial forms.
- I played around with Clang for a bit, to try and use libclang to parse the C code for inspection, instead of writing a parser myself. It sometimes feels like overkill, but it seems like it'll make the whole thing more robust. I'm not sure; I'll need to play around a bit more to decide.
- I also finally got around to fixing the resolution of my tty shells. I was on a bus and wanted my laptop's battery to last longer, so I decided to use a tty shell with just Emacs running. But the resolution sucked, so I fixed it. Essentially, it involved removing a blacklist file that an old version of the Nvidia drivers left behind in
- Later in the evening, I also bought the domain octo.cat, given that a bunch of people were buying cat domains and talking about it on Zulip!
Update [7/7/14]: I forgot to write about the presentations. There were some interesting presentations!
- I happily let myself get distracted from all the math! I had seen a tweet from Tom over the weekend, asking if there was any package that lets users inspect the C sources of built-in modules, like Ruby's pry-doc.
- I spent the morning playing around with pry-doc, reading its code, etc.
- The latter part of the day was spent pairing with Tom, and figuring out what needs to be done to write a first cut version of something like this. Ugly regexes, FTW! I have something that works for most of the stuff, it seems!
- I also attended an interesting talk about BitTorrent clients by an alum.
- This Friday we worked on building our own URL shorteners. It was a fun exercise, and in the two hours, I built one that is not persistent. I worked on the not-so-important problem of having sorted query parameters, instead of the more important problem of having a persistent server. Anyway, I should build something that works first, before trying to solve the more interesting problems. At least on Fridays.
- I watched this old but interesting talk by Brandon Rhodes, titled The Mighty Dictionary, and played around with his _dictinfo module afterwards.
- I spent the rest of the evening reading up about Markov chains and Hidden Markov Models. I'm happy to finally understand what these terms mean, after seeing them being thrown around for so many years!
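The core of the Friday URL-shortener exercise above is tiny. Here's a minimal in-memory sketch (all names made up for illustration; no persistence and no HTTP layer, just the counter-to-slug mapping):

```python
import string

# Base-62 alphabet: digits, then lowercase, then uppercase letters.
ALPHABET = string.digits + string.ascii_letters

def encode(n):
    """Encode a non-negative counter value as a short base-62 slug."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return ''.join(reversed(digits))

class Shortener(object):
    """In-memory shortener: loses everything when the process dies."""

    def __init__(self):
        self.urls = {}
        self.counter = 0

    def shorten(self, url):
        slug = encode(self.counter)
        self.urls[slug] = url
        self.counter += 1
        return slug

    def expand(self, slug):
        return self.urls[slug]
```

A persistent version would only need to swap the `urls` dict for a real datastore, which is exactly the part I skipped.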
StringIO in the cStringIO module in Python 2.7.2 doesn't handle unicode strings properly. This bug has bitten me on a couple of occasions, in the recent past.
Just making a note for myself:
- Use StringIO if speed doesn't matter so much.
- Whenever possible, convert the input to a plain (byte) string before passing it to StringIO, if what you are doing doesn't really require unicode strings.
Or maybe it's just time to move up to Python 2.7.3.
PyCon is back in Bangalore for its 4th edition. I've been to all three previous editions, and I'm looking forward to this one.
I haven't spoken at any of the previous editions, and hadn't even really considered speaking at them. But this time, I proposed a couple of talks, and both of them have been accepted! Woot! There's lots of work ahead of me, but I'm really looking forward to giving good talks. I'll be talking about Enaml, and showing a brief demo of all the wonderful things it makes possible. In the other talk, I'll give a brief overview of the latest happenings in the SciPy community and the projects to look forward to and contribute to. Pankaj and I will be giving this talk together.
Also, Kenneth (lawgon) will be deeply missed at this edition of PyCon.
Sadly, John Hunter, the lead author of matplotlib, also passed away last week. RIP. :(
Pankaj and I were stuck on this weird bug recently. We had a class A and a subclass B, and an instance b of B was returning False for isinstance(b, A). After some debugging, we found that it was a problem with imports: the same module was being imported twice, using two different names – bar and baz.bar.
Here's a cooked-up example to demonstrate the "bug". We create a package baz, as shown below.
```
$ tree baz
baz
├── bar.py
├── foo.py
└── __init__.py
```
We have our two classes, A and B, defined in the bar.py module.

```python
class A(object): pass
class B(A): pass
```

We then run the following code in foo.py, and scratch our heads for a while, before figuring out what's wrong…
```python
# We add the directory containing our package 'baz' to sys.path to be
# able to import using the package name.
import sys
sys.path.insert(1, '/tmp')

# We import A from the bar module present in the same directory as foo
from bar import A

# We import B from bar, but refer to it as a submodule of baz
from baz.bar import B

a = A()
b = B()

print "isinstance(b, A) -->", isinstance(b, A)

for cls in (A, B):
    print "%s -->" % cls, cls

for module in sys.modules:
    if 'bar' in module:
        print module, sys.modules[module]
```
```
isinstance(b, A) --> False
<class 'bar.A'> --> <class 'bar.A'>
<class 'baz.bar.B'> --> <class 'baz.bar.B'>
baz.bar <module 'baz.bar' from '/tmp/baz/bar.pyc'>
bar <module 'bar' from 'bar.pyc'>
```
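The bug is easy to reproduce from scratch. Here's a self-contained sketch (using a temporary directory rather than /tmp, and importlib instead of import statements, so it builds and imports the same baz package at runtime):

```python
import os
import sys
import tempfile
import importlib

# Recreate the cooked-up 'baz' package layout in a scratch directory.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'baz')
os.mkdir(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'bar.py'), 'w') as f:
    f.write("class A(object): pass\nclass B(A): pass\n")

# Both the package's parent directory *and* the package directory itself
# end up on sys.path, so 'bar' and 'baz.bar' resolve to the same file...
sys.path.insert(1, root)
sys.path.insert(1, pkg)

A = importlib.import_module('bar').A
B = importlib.import_module('baz.bar').B

# ...but Python caches them as two separate module objects, each with
# its own copies of the classes, so the isinstance check fails.
print(sys.modules['bar'] is sys.modules['baz.bar'])  # False
print(isinstance(B(), A))                            # False
```

The fix, of course, is to make sure the module is always imported under one canonical name – in our case, by not having the package directory itself on sys.path.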
Well, as you can see, I've changed my blog a bit. I had been trying to make it gel well with org-mode, and looking for ways to let me share arbitrary stuff – with separate feeds for tags, separate pages with independent feeds, etc.
I finally got around to tweaking the code of reprise to be able to do this. There are still a few things I would like to change, but I'll make those changes gradually. The looks are also straight from uddegal's reprise, with a few tiny tweaks. I hope it is alright to be using it. I really liked the theme.
Every tag now has a separate feed. /tags/emacs.atom will give you the feed for emacs, for instance. Also, I plan to have a couple more pages, one for all the interesting links I come across and another for interesting quotes. Both of these pages have their own feeds too.
I gave a short talk titled, Pictures, Songs and Python in this year's edition of SciPy India. The talk was a beginner level talk, mainly intended to get newbies excited about Python. My motive was to impress upon the audience that Python is a language to help you think in. It's a good playing ground to experiment with your ideas, quickly prototype ideas and have fun! To my surprise, the talk was liked by quite a few people. Thanks to Prof. Jayant Kirtane for the Bournville!
Here are the talk slides.
This is some fun I had, trying to replicate what was written in this post. I had been trying to understand what was happening here, and found this post on Hacker News very helpful.
It is a known fact that our eyes have more cones sensitive to green and red as compared to blue. The Bayer filter used in digital camera sensors is based on this principle. The post tries to illustrate it using the following two arguments.
- Looking at only the blue channel of an image, it appears very dark.
- Tripling the pixel size of the blue channel doesn't cause much distortion in the final image.
Hence, our eyes suck at blue.
Their argument is flawed, but we could try and improve a few things.
Looking at the blue channel.
This is definitely flawed, since the intensity of blue in the particular image they chose may simply be low, giving us a false positive.
We could instead convert the image to gray-scale, use those pixel values in each of the 3 channels, and look at the resulting images. This also eliminates the problem of the image having been captured through a Bayer filter.
Python code to do the same:

```python
from numpy import zeros_like, average
from matplotlib.pyplot import figure, imshow

def show_channels(I):
    # Show each channel of the image on its own, in a separate figure.
    for i in range(3):
        J = zeros_like(I)
        J[:, :, i] = I[:, :, i]
        figure(i)
        imshow(J)

def show_grey_channels(I):
    # Average the channels, and show those gray-scale values through
    # each channel, one at a time.
    K = average(I, axis=2)
    for i in range(3):
        J = zeros_like(I)
        J[:, :, i] = K
        figure(i + 10)
        imshow(J)
```
Pixelating the blue channel
Again, there was this argument that the use of the Bayer filter affects the image, and the like.
What I did was to swap the channels, and then look at the images. However I swapped the channels, the image with the green channel pixelated always looked the worst. The difference between blue and red was less noticeable, I feel.
Here's the code.
```python
import numpy as np
from numpy import zeros_like
from matplotlib.pyplot import figure, imshow, title

# Channel names, in RGB order (implicit in the original snippet).
colors = ['red', 'green', 'blue']

def zoom(x, factor=2):
    # Blow each pixel up into a factor x factor block, using a strided
    # view instead of copying.
    rows, cols = x.shape
    row_stride, col_stride = x.strides
    view = np.lib.stride_tricks.as_strided(
        x, (rows, factor, cols, factor),
        (row_stride, 0, col_stride, 0))
    return view.reshape((rows * factor, cols * factor))

def subsample(I):
    # Pixelate one channel at a time, and show the resulting image.
    for i in range(3):
        J = I.copy()
        J[:, :, i] = zoom(I[::4, ::4, i], 4)
        figure(i)
        title("%s channel subsampled" % colors[i])
        imshow(J)

def swap_subsample(I, k=1):
    # Rotate the channels by k positions, then pixelate one channel at
    # a time, as above.
    for c, color in enumerate(colors):
        print "%s <-- %s" % (colors[c], colors[(c + k) % 3])
    for i in range(3):
        J = zeros_like(I)
        for j in range(3):
            J[:, :, j] = I[:, :, (j + k) % 3]
        J[:, :, i] = zoom(I[::4, ::4, (i + k) % 3], 4)
        figure(i + 10)
        title("%s channel subsampled" % colors[i])
        imshow(J)
```
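The as_strided trick in zoom deserves a quick illustration. Here's a self-contained sketch (redefining zoom so it runs on its own) showing each element of a tiny array being replicated into a factor × factor block, without copying the data until the final reshape:

```python
import numpy as np

def zoom(x, factor=2):
    # Repeat each element factor times along both axes, via a strided
    # view: the zero strides revisit the same element factor times.
    rows, cols = x.shape
    row_stride, col_stride = x.strides
    view = np.lib.stride_tricks.as_strided(
        x, (rows, factor, cols, factor),
        (row_stride, 0, col_stride, 0))
    return view.reshape((rows * factor, cols * factor))

x = np.array([[1, 2],
              [3, 4]])
print(zoom(x))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```

The same effect could be had with np.repeat along both axes; the strided view just avoids the intermediate copies.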
Here are a few images. (View them in their original size)