2013-12-28

Fun with File Systems

Imagine you have a data logging application that writes data to disk continuously. Since the application is not very stable, you want it to write out the data in small files, so that it does not lose too much data if it crashes. This creates the need to find a good trade-off between file size and file system in order to avoid wasting too much disk space on file system overhead.

An approach to measure file system overhead and to explore the design space of different file systems and file sizes quickly is as follows:
  • Create a ramdisk and, inside it, create a bulk file of a given size (using dd)
  • For all combinations of file size and file system:
    • Format the bulk file with a desired file system (using mkfs) and mount it.
    • Continuously write files of a fixed size to the mounted bulk file until an exception occurs, and record how many files could be written (using some script; a sketch is shown below).
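A minimal sketch of this measurement loop, shelling out to dd, losetup, mkfs and mount from Python, might look like the following. The paths, the loop device and the subset of file systems are only illustrative, the commands need root privileges, and the actual scripts linked below differ in the details.

    import os
    import subprocess

    BULK = "/mnt/ramdisk/bulk.img"   # bulk file inside a ramdisk (illustrative path)
    MOUNTPOINT = "/mnt/bulk"         # where the formatted bulk file gets mounted

    def count_files(fs, file_size, bulk_mib=1024):
        # (re)create the 1 GiB bulk file, format it and mount it (needs root,
        # and assumes /dev/loop0 is free)
        subprocess.check_call(["dd", "if=/dev/zero", "of=" + BULK,
                               "bs=1M", "count=%d" % bulk_mib])
        subprocess.check_call(["losetup", "/dev/loop0", BULK])
        subprocess.check_call(["mkfs", "-t", fs, "/dev/loop0"])
        subprocess.check_call(["mount", "/dev/loop0", MOUNTPOINT])
        payload = b"x" * file_size
        written = 0
        try:
            # write fixed-size files until "no space left on device" is raised
            while True:
                with open(os.path.join(MOUNTPOINT, "f%09d" % written), "wb") as f:
                    f.write(payload)
                written += 1
        except (IOError, OSError):
            pass
        finally:
            subprocess.check_call(["umount", MOUNTPOINT])
            subprocess.check_call(["losetup", "-d", "/dev/loop0"])
        return written

    for fs in ("vfat", "ext2", "ext4"):
        for exponent in range(21):   # file sizes from 1 byte to 2**20 bytes
            print(fs, 2**exponent, count_files(fs, 2**exponent))
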
Operations on the mounted bulk file are very fast, since the bulk file resides in a ramdisk. An experiment using this approach was conducted for a bulk file of 1 GiB. The considered file systems were ntfs, exfat, vfat, ext2, ext3 and ext4. File sizes were varied from 1 byte to 2^20 bytes. A plot summarizing the relative file system overhead for the different file sizes and file systems is shown below:
From this figure it can be seen that the file system overhead is excessive for small file sizes. ext2, ext3 and ext4 behave almost identically in terms of overhead. The minimal overhead in this experiment is observed for vfat at a file size of 65536 bytes per file. Strangely, exfat is always outperformed by ntfs.

The scripts that were used to conduct this experiment can be downloaded here.

2013-12-19

Creating Tagclouds with PyTagCloud

Tag clouds are a nice way to visualize textual information. They provide a colorful overview of the frequent terms of a text, and they might also tell you something about its writing style.

For instance, the following is a tag cloud of the famous paper "Cramming more components onto integrated circuits" by Gordon Moore. The script that was used to create it can be downloaded here.
The script uses PyTagCloud, which gets most of the job done. Cloning the git repository, building and installing it is straightforward. Do not forget to have pygame installed.

Nice tag clouds cannot be created fully automatically. To create beautiful tag clouds, natural language text usually needs a bit of preprocessing. The script provided above uses NLTK for stop word removal and for calculating term frequencies. Moreover, it might be necessary to manually adjust term frequencies or to remove certain terms entirely.
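A minimal sketch of such a pipeline, combining NLTK preprocessing with PyTagCloud's make_tags and create_tag_image, might look like the following. The input file name and the cut-off of 60 terms are placeholders, and the downloadable script differs in the details.

    from collections import Counter

    from nltk.corpus import stopwords          # needs the NLTK 'stopwords' corpus
    from nltk.tokenize import word_tokenize    # needs the NLTK 'punkt' tokenizer data
    from pytagcloud import create_tag_image, make_tags

    with open("paper.txt") as f:               # plain-text version of the paper (placeholder name)
        text = f.read().lower()

    # keep alphabetic tokens only and drop English stop words
    stop = set(stopwords.words("english"))
    tokens = [t for t in word_tokenize(text) if t.isalpha() and t not in stop]

    # turn the most frequent terms into tags and render them as a .png image
    tags = make_tags(Counter(tokens).most_common(60), maxsize=120)
    create_tag_image(tags, "tagcloud.png", size=(900, 600))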

PyTagCloud supports exporting the tag cloud to .png images. Exporting to HTML/CSS is also almost possible, but this feature seems a little broken at the time of this writing: PyTagCloud does not export correctly whether a term should be rotated or not, resulting in tag clouds with overlapping terms.

2013-12-06

Matching Bibtex and HTML

Recently I was given two very long lists of scientific publications: one as a BibTeX file and another as a table in an HTML file. Some of the publications in the BibTeX file were missing in the HTML table, and the task was to find out which ones these were. An additional challenge was that both lists were created manually by different people, and therefore author names, titles, etc. did not match character by character. Words with special characters, e.g. 'Jörg', would be spelled as 'J\"org' in BibTeX and 'Jörg' in the HTML table.

A simple script that helps with this tedious problem can be downloaded here. The script reads the .bib and the .html file and compares the title field of every BibTeX entry with every row in the HTML table. The package difflib is used to perform approximate (sub)string matching: based on a string comparison metric, it yields a value between 0.0 (no match at all) and 1.0 (the title is contained verbatim as a substring).
Finally, the script generates a report that contains all the publications which are most probably missing.
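The core of the comparison can be sketched as follows. The normalization and the 0.8 threshold are illustrative assumptions rather than the exact choices of the downloadable script.

    import difflib
    import re

    def normalize(s):
        # strip BibTeX escapes, braces and HTML entities down to plain lowercase text
        s = re.sub(r'\\.|[{}]|&[a-z]+;', '', s.lower())
        return re.sub(r'\s+', ' ', s).strip()

    def match_score(title, row):
        # fraction of the title that can be matched, in order, inside the table row:
        # 0.0 means no match at all, 1.0 means the title is contained as a substring
        matcher = difflib.SequenceMatcher(None, normalize(title), normalize(row))
        matched = sum(block.size for block in matcher.get_matching_blocks())
        return matched / float(len(normalize(title)) or 1)

    def probably_missing(title, html_rows, threshold=0.8):
        return max(match_score(title, row) for row in html_rows) < threshold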

2013-11-21

Finding Research Gaps with Google Scholar

Imagine you want to do some very important research and you are desperate to identify a research gap with respect to the current state of the art. Moreover, let's assume you have the intuition that a research gap can be found by combining two concepts from two different fields.
For instance, you might just have read two textbooks, one about freshwater aquarium fish and one about chemicals dissolved in water. Now you want to combine concepts from these two fields. To do this you need an estimate of 'how much' research has been done on the effect of chemical X on fish Y.

To get a rough estimate of 'how much' research has already been done, Google Scholar can be used. For every search, it gives you an approximate number of publications that match your search terms. With this you can build a matrix like the following:


The rows correspond to the keywords from one category (here: different types of fish) and the columns correspond to the other category (here: different chemicals). The color corresponds to the approximate number of publications on Google Scholar that contain both keywords.

Certainly you cannot gain ultimate wisdom from this. Two keywords might just be a nonsensical pairing, or the keywords might be used in many publications but in a context totally different from what you anticipated. However, it provides a quick and simple way to figure out whether you are entering a crowded field or not.

The script that was used to produce this plot can be downloaded here. The text-based web browser Lynx needs to be installed to run it.
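A sketch of how such a count could be obtained with Lynx is given below. The phrasing of the hit count line ('About ... results') and the example keywords are assumptions, and the downloadable script may parse the page differently.

    import re
    import subprocess
    import time
    from urllib.parse import quote_plus

    def result_count(fish, chemical):
        # dump the Google Scholar result page as plain text using lynx
        url = ("https://scholar.google.com/scholar?q="
               + quote_plus('"%s" "%s"' % (fish, chemical)))
        page = subprocess.check_output(["lynx", "-dump", url], universal_newlines=True)
        time.sleep(5)  # be gentle, Scholar blocks rapid automated queries
        # the wording of the hit count line is an assumption and may change over time
        match = re.search(r"About ([\d,.]+) results", page)
        return int(re.sub(r"[,.]", "", match.group(1))) if match else 0

    fish_list = ["guppy", "neon tetra", "discus"]        # illustrative keywords
    chemical_list = ["nitrate", "copper", "chlorine"]    # illustrative keywords
    matrix = [[result_count(f, c) for c in chemical_list] for f in fish_list]
    print(matrix)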

2013-11-01

Sorting Papers by Keywords

Imagine you are given an inhumanly big electronic pile of publications to read and an early deadline. Even reading the abstracts will cost you a considerable amount of your time, and most of the papers are not at all related to what you are up to. How do you select the papers to read first?

A simple approach might be the following: assume you can come up with a set of keywords, each with an accompanying quality factor. The quality factor indicates how much you are interested in a given keyword. A very important keyword might be given a quality factor of 1.0, and a more general keyword might have a quality factor of just 0.1.

With this set of keywords and quality factors it is quite easy to compute a score for every publication. For every paper and every keyword, the number of occurrences of the keyword is counted and the score of the document is increased according to the quality factor. The papers can then be sorted by score, which gives you the priority in which to read them. While this may not be a masterpiece of information retrieval, it is still a simple and quick way to find relevant information.
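The scoring itself boils down to a few lines. The sketch below is written in Python with placeholder data, whereas the downloadable script (see below) is written in R and uses the tm package.

    # keyword -> quality factor, as described above (values are placeholders)
    keywords = {"very important keyword": 1.0, "more general keyword": 0.1}

    def score(text):
        # count how often each keyword occurs and weight it by its quality factor
        text = text.lower()
        return sum(text.count(k) * quality for k, quality in keywords.items())

    # papers as {file name: extracted plain text}; the R script extracts the text with tm
    papers = {"paper_a.pdf": "...extracted text...", "paper_b.pdf": "...extracted text..."}
    ranking = sorted(papers, key=lambda name: score(papers[name]), reverse=True)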

A simple R script to create a table with paper scores can be downloaded here. The text mining package tm is used, which reads .pdf files conveniently.
The keyword/quality factor pairs need to be provided in an extra file, just like the paths to the publications. The script creates a simple .html file for convenient viewing of the scored paper list.

2013-09-13

'Synchronizing' Podcasts with a Portable Device


Do you also like listening to podcasts? I do, and my usual use case is the following: First I discover a new podcast on the web. Then I use a program like gpodder or Miro to download all the episodes, which end up in one plain directory on the hard drive. Finally I want to 'synchronize' the episodes with a portable device.

A lot of the time the portable device will have less storage than the total size of the downloaded files, or it is not desirable to fill up the portable device with just one podcast. So 'synchronizing' should copy only some episodes at a time to the portable device and remember which episodes have been copied. After listening to/watching an episode, it can be deleted on the portable device to free some space for new episodes. New episodes should then be copied during the next synchronization. No capabilities other than playback and deleting episodes are assumed on the side of the portable device. My use case is somewhat similar to the use cases discussed here.

The following script implements this idea of synchronization. It does so by building a simple SQLite database which records whether an episode has already been copied to the portable device.
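A sketch of the core synchronization step might look like the following. The directory paths, the table layout and the copy budget are made up for illustration; the actual script differs in the details.

    import os
    import shutil
    import sqlite3

    PODCAST_DIR = "/home/me/podcasts/some_show"   # where gpodder/Miro put the episodes
    DEVICE_DIR = "/media/player/podcasts"         # mount point of the portable device
    BUDGET = 512 * 2**20                          # copy at most 512 MiB per sync run

    db = sqlite3.connect(os.path.join(PODCAST_DIR, "sync.db"))
    db.execute("CREATE TABLE IF NOT EXISTS copied (episode TEXT PRIMARY KEY)")

    for episode in sorted(os.listdir(PODCAST_DIR)):
        if not episode.endswith((".mp3", ".ogg", ".mp4")):
            continue
        row = db.execute("SELECT 1 FROM copied WHERE episode = ?", (episode,)).fetchone()
        if row:
            continue  # already copied in an earlier run, possibly deleted after listening
        path = os.path.join(PODCAST_DIR, episode)
        size = os.path.getsize(path)
        if size > BUDGET:
            break     # budget for this sync run is used up
        shutil.copy(path, DEVICE_DIR)
        db.execute("INSERT INTO copied VALUES (?)", (episode,))
        db.commit()
        BUDGET -= size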

2013-09-01

Remarks on Presentations

Here are some simple suggestions that I find useful for improving slides for (scientific) presentations. Many of the suggestions are subjective and no claim of completeness is made. Therefore feel free to differ (and comment).

The suggestions can be downloaded here as pdf and odp.