2013-12-28

Fun with File Systems

Imagine you have a data logging application that writes data to disk continuously. Since the application is not very stable, you want it to write out the data in small files, so that it does not lose too much data if it crashes. This creates a need to find a good trade-off between file size and file system, in order to avoid wasting too much disk space on file system overhead.

An approach to quickly measure file system overhead and to explore the design space of different file systems and file sizes is as follows (a Python sketch of the procedure is given after the list):
  • Create a ramdisk and, inside this ramdisk, create a bulk file of a given size (using dd).
  • For all combinations of file size and file system:
    • Format the bulk file with the desired file system (using mkfs) and mount it.
    • Continuously write files of a fixed size to the mounted bulk file until an exception occurs, and record how many files could be written (using some script).
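The loop can be sketched in a few lines of Python. The following is a minimal sketch, assuming a tmpfs ramdisk is already mounted at /mnt/ramdisk and an empty mount point /mnt/bulk exists; these paths, as well as the losetup/mkfs/mount invocations, are illustrative and not necessarily what the downloadable scripts do. It has to be run as root.

  import os
  import subprocess

  RAMDISK = '/mnt/ramdisk'         # assumed tmpfs mount (e.g. mount -t tmpfs tmpfs /mnt/ramdisk)
  BULK = RAMDISK + '/bulk.img'     # bulk file inside the ramdisk
  MOUNT = '/mnt/bulk'              # assumed mount point for the formatted bulk file

  def count_files(fs, file_size):
      """Format the bulk file with `fs`, mount it and write files of
      `file_size` bytes until the file system is full."""
      # Attach the bulk file to a loop device so mkfs treats it like a real disk.
      loopdev = subprocess.check_output(['losetup', '--find', '--show', BULK]).decode().strip()
      subprocess.check_call(['mkfs', '-t', fs, loopdev])
      subprocess.check_call(['mount', loopdev, MOUNT])
      data = b'x' * file_size
      n = 0
      try:
          while True:
              with open(os.path.join(MOUNT, 'file%07d' % n), 'wb') as f:
                  f.write(data)
              n += 1
      except (IOError, OSError):   # typically ENOSPC once the file system is full
          pass
      finally:
          subprocess.check_call(['umount', MOUNT])
          subprocess.check_call(['losetup', '-d', loopdev])
      return n

  # Create a 1 GiB bulk file inside the ramdisk (once, using dd).
  subprocess.check_call(['dd', 'if=/dev/zero', 'of=' + BULK, 'bs=1M', 'count=1024'])

  for fs in ['ntfs', 'exfat', 'vfat', 'ext2', 'ext3', 'ext4']:
      for exponent in range(21):                  # file sizes from 1 byte to 2^20 bytes
          print(fs, 2 ** exponent, count_files(fs, 2 ** exponent))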
Operations on the mounted bulk file are very fast, since the bulk file resides in a ramdisk. An experiment using this approach was conducted with a bulk file of 1 GiB. The considered file systems were ntfs, exfat, vfat, ext2, ext3 and ext4. File sizes were varied from 1 byte to 2^20 bytes. A plot summarizing the relative file system overhead for the different file sizes and file systems is shown below:
From this figure it can be seen that the file system overhead is excessive for small file sizes. ext2, ext3 and ext4 behave almost identically in terms of overhead. The minimal overhead in this experiment is observed for vfat at a file size of 65536 bytes per file. Strangely, exfat is always outperformed by ntfs.

The scripts that were used to conduct this experiment can be downloaded here.

2013-12-19

Creating Tagclouds with PyTagCloud

Tag clouds are a nice way to visualize textual information. They provide a colorful overview of the frequent terms of a text, and they might also tell you something about its writing style.

For instance, the following is a tag cloud of the famous paper "Cramming more components onto integrated circuits" by Gordon Moore. The script that was used to create it can be downloaded here.
The script uses PyTagCloud, which gets most of the job done. Cloning the git repository, building and installing is straightforward. Do not forget to have pygame installed.
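The basic PyTagCloud workflow then looks roughly like this; the input file name is a placeholder for a plain-text version of the paper:

  from pytagcloud import create_tag_image, make_tags
  from pytagcloud.lang.counter import get_tag_counts

  # Plain-text version of the paper (placeholder file name).
  text = open('cramming.txt').read()

  # get_tag_counts() yields (word, count) pairs, make_tags() assigns font sizes
  # and colors, and create_tag_image() renders the cloud to a PNG using pygame.
  tags = make_tags(get_tag_counts(text), maxsize=120)
  create_tag_image(tags, 'cloud.png', size=(900, 600), fontname='Lobster')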

Nice tag clouds cannot be created fully automatically. To create beautiful tag clouds, natural language text usually needs a bit of preprocessing. The script provided above uses NLTK for stop word removal and for calculating term frequencies. Moreover, it might be necessary to manually change term frequencies or to remove certain terms entirely.
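A sketch of this kind of preprocessing with NLTK, feeding the adjusted term frequencies back into PyTagCloud, might look as follows; the removed term and the cut-off at 60 terms are arbitrary illustrations, not necessarily what the provided script does:

  import nltk
  from nltk.corpus import stopwords
  from pytagcloud import create_tag_image, make_tags

  # Requires the NLTK 'punkt' and 'stopwords' data packages.
  text = open('cramming.txt').read().lower()

  # Keep alphabetic tokens only and drop English stop words.
  stop = set(stopwords.words('english'))
  words = [w for w in nltk.word_tokenize(text) if w.isalpha() and w not in stop]

  # Term frequencies; manual clean-up of individual terms is easy at this point.
  freq = nltk.FreqDist(words)
  freq.pop('fig', None)                    # hypothetical term removal

  counts = sorted(freq.items(), key=lambda kv: kv[1], reverse=True)[:60]
  tags = make_tags(counts, maxsize=120)
  create_tag_image(tags, 'cloud.png', size=(900, 600), fontname='Lobster')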

PyTagCloud supports exporting the tag cloud to .png images. Exporting to HTML/CSS is also almost possible, but this feature seems a little broken at the time of this writing: PyTagCloud does not correctly export whether a term should be rotated, resulting in tag clouds with overlapping terms.

2013-12-06

Matching Bibtex and HTML

Recently I was given two very long lists of scientific publications: one as a BibTeX file and the other as a table in an HTML file. Some of the publications in the BibTeX file were missing from the HTML table, and the task was to find out which ones these were. An additional challenge was that both lists had been created manually by different people, and therefore author names, titles, etc. did not match character by character. Words with special characters, e.g. 'Jörg', would be spelled as 'J\"org' in BibTeX and 'Jörg' in the HTML table.

A simple script that helps with this tedious problem can be downloaded here. The script reads the .bib and the .html file and compares the title field of every BibTeX entry with every row in the HTML table. The package difflib is used to perform "approximate (sub)string matching": by some string comparison metric, it calculates a value from 0.0 (no match at all) to 1.0 (an identical string is contained as a substring).
Finally, the script generates a report that contains all the publications which are most probably missing.
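The comparison can be sketched with difflib along the following lines. The file names, the naive title and row extraction, and the 0.8 threshold are assumptions for illustration only; note also that the plain SequenceMatcher.ratio() used here does not score an exact substring occurrence as 1.0, as the metric described above does:

  import difflib
  import re

  def normalize(s):
      # Strip TeX escapes, braces and HTML tags, then collapse to lower-case words.
      s = re.sub(r'\\.|[{}"]', '', s)
      s = re.sub(r'<[^>]+>', ' ', s)
      return ' '.join(re.findall(r'\w+', s.lower()))

  def similarity(title, row):
      # 0.0 means no match at all, 1.0 means the normalized strings are identical.
      return difflib.SequenceMatcher(None, normalize(title), normalize(row)).ratio()

  # Naive extraction of the title fields from the BibTeX file.
  bib = open('publications.bib').read()
  titles = re.findall(r'title\s*=\s*[{"](.+?)[}"]', bib, re.IGNORECASE)

  # One string per table row of the HTML file.
  html = open('publications.html').read()
  rows = re.findall(r'<tr.*?</tr>', html, re.IGNORECASE | re.DOTALL)

  # Report every BibTeX entry whose best match in the table is weak.
  for title in titles:
      best = max(similarity(title, row) for row in rows)
      if best < 0.8:                     # assumed threshold
          print('probably missing: %s (best score %.2f)' % (title, best))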