
This blog is supposed to be a collection of random, unrelated little ideas, thoughts, and discoveries, which I assume to be helpful to a negligible part of the world's population and wish to share out of pure altruism. If the posts appear really weird, maybe you have the wrong kind of humor. Many of the posts are science/technology related. If you are opposed to that, stop reading here! Comments, criticism, corrections, amendments, and questions are always welcome.
2013-07-31
Ear training with Anki
This post was created in collaboration with Thomas Fischbach after a discussion about whether a person with only relative pitch could acquire perfect pitch by practicing the identification of musical notes with a flash card program.
No conclusive answer to this question will be given in this post, but you may try for yourself using the following script, which can be used to create an Anki deck with sounds. Anki is an excellent flash card program (similar to Mnemosyne). csound is a software synthesizer used for the sound generation. Installation instructions are provided with the script.
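The script itself is the authoritative version; as a rough sketch of the idea, the following Python snippet synthesizes one sine tone per semitone (plain Python instead of csound, to keep it self-contained) and writes a tab-separated file that Anki can import. The file names, the octave range, and the use of Anki's [sound:...] media syntax are assumptions for illustration; the generated .wav files have to be placed in the collection's media folder.

import math
import struct
import wave

RATE = 44100
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def write_tone(freq, path, seconds=1.0):
    # Render a plain sine tone as a 16-bit mono WAV file.
    w = wave.open(path, "w")
    w.setparams((1, 2, RATE, 0, "NONE", "not compressed"))
    frames = b"".join(
        struct.pack("<h", int(20000 * math.sin(2 * math.pi * freq * n / RATE)))
        for n in range(int(RATE * seconds)))
    w.writeframes(frames)
    w.close()

with open("notes_deck.txt", "w") as deck:
    for midi in range(60, 72):  # one octave starting at middle C (assumed range)
        freq = 440.0 * 2 ** ((midi - 69) / 12.0)
        fname = "note_%d.wav" % midi
        write_tone(freq, fname)
        # Front of the card: the sound; back of the card: the note name.
        deck.write("[sound:%s]\t%s\n" % (fname, NOTES[midi % 12]))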
2013-07-01
Dark Frame Analysis
To improve noisy video shot in low-light conditions, it is useful to measure the distribution of the noise. To this end, I recorded several minutes of video in a setup where no light could enter the camera (see Dark-frame subtraction). A script was used to simply sum up a large number of recorded frames. This 'noise accumulation frame' was used for further analysis.
The camera used is a consumer-grade Panasonic HC-V500. Some strange effects will be unveiled further down, and you might be interested in testing whether your camera shows them as well. The simple script that was used to create the plots can be downloaded here (requires SciPy, NumPy, OpenCV, and Matplotlib).
Unfortunately, there is no 1:1 correspondence between the pixels that end up in the video file and the real pixels on the CMOS sensor. It is therefore unknown whether the effects seen further down are a result of the sensor or of the image processing in the camera, especially video compression. It would be preferable to take still images at the highest possible resolution in an automated way, but at least my video camera does not have this feature.
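The downloadable script remains the reference; a minimal sketch of the frame-accumulation step could look like this (the input file name dark.mp4 is an assumption):

import cv2
import numpy as np

cap = cv2.VideoCapture("dark.mp4")  # footage recorded with no light entering the camera
acc = None
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if acc is None:
        acc = np.zeros(frame.shape, dtype=np.float64)
    acc += frame  # sum the frames channel-wise
    count += 1
cap.release()

np.save("noise_accumulation.npy", acc / count)  # averaged noise accumulation frame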
The distribution of the blue-channel in the noise accumulation frame looks like this:
The other channels (red, green) are almost indistinguishable from this. The following plot shows the distribution of red+green+blue channel:
It can be seen that a big part of the noise is approximately normally distributed. However, if you look at the noise accumulation frame directly, some structure is visible, which looks a bit like the electric field of a quadrupole:
Even though there is no direct correspondence between the pixels in the video file and the pixels on the CMOS sensor, "hot pixels" are still present (why?). These can be seen easily by looking at details of the picture above. Keep in mind that mu is around 3.61 and sigma is around 0.47, so all values above 5 should be extremely unlikely. The plots below simply show the same as the plot above, subdivided into 4 parts:
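Such outliers are also easy to find programmatically. A sketch, building on the averaged accumulation frame from above (the 3-sigma threshold is an arbitrary choice; with the values quoted above, mu + 3*sigma is roughly 5):

import numpy as np

mean_frame = np.load("noise_accumulation.npy")
blue = mean_frame[:, :, 0]  # OpenCV stores channels in BGR order

mu, sigma = blue.mean(), blue.std()
ys, xs = np.where(blue > mu + 3 * sigma)  # hot pixel candidates
for y, x in zip(ys, xs):
    print("hot pixel candidate at (%d, %d): value %.2f" % (x, y, blue[y, x]))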
2013-06-13
Octave vs Python
"Don't do loops in Octave." is a well known truth. But sometimes loops are just too handy or cannot be avoided at all. I was curious whether there is a difference in execution time of loops in Python and Octave since both are interpreted languages.
The following two scripts do nothing but execute 3 nested loops with 100 iterations each and do some useless computation within the loops. One script is in Python and one is in Octave. Octave version 3.6.4 and Python version 2.7.3 were used for the comparison. The results are devastating. The Octave script (loops.m):

tic;
for i=1:100
  for j=1:100
    for k=1:100
      vain1 = i^2 + j^2 + k^2;
      vain2 = i^2 + j^2 + k^2;
      ...
      vain10 = i^2 + j^2 + k^2;
    endfor
  endfor
endfor
t = toc;
disp(num2str(t));
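The Python counterpart is not reproduced in the post; a minimal equivalent that matches the structure of the Octave script above might look as follows (variable names assumed):

import time

start = time.time()
for i in range(1, 101):
    for j in range(1, 101):
        for k in range(1, 101):
            vain1 = i ** 2 + j ** 2 + k ** 2
            vain2 = i ** 2 + j ** 2 + k ** 2
            # ... repeated up to vain10, as in the Octave version
            vain10 = i ** 2 + j ** 2 + k ** 2
print(time.time() - start)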
user@machine:~/scripts/python_vs_octave$ octave loops.m
...
39.48
...
user@machine:~/scripts/python_vs_octave$ python loops.py
3.10390710831
Even in this simple example, the Octave script takes more than 10 times as long as the equivalent Python script. The difference becomes even bigger if the amount of computation inside the loops is increased. Since Python also comes with sophisticated matrix processing capabilities (NumPy, SciPy), and if severe performance degradation for more sophisticated numerical analysis is not acceptable, the proverb above can simply be shortened to "Don't do Octave."
P.S.: Another popular data analysis software package is R. On the same machine, using R version 2.14.1, the execution time of the equivalent script was 18.35 s, which lies between the execution times of the Python and Octave scripts.
2013-06-02
Characterize CPU Cooling
Did you ever want to find out how well your PC's cooling system does under stress? Maybe you have just bought/built a PC and you want to find out whether it keeps sufficiently cool.
The temperatures of the individual cores can be obtained easily with the lm-sensors package. To put a little stress on the CPU, cpuburn is very helpful.
Here is a little Python script that records the temperatures and frequencies of the CPU over time, and another Octave script that visualizes the temperature data. The package cpufrequtils is required to obtain the CPU frequencies. For validation purposes, mpstat is used to record the CPU utilization. If you benchmark your CPU with a program that does not fully utilize the CPU during certain periods of time, this data can help explain strange effects in the temperature curve.
Assuming a CPU with 2 cores and using burnP6 to create some stress, the script might be executed as follows:
$ python rec_sensor_log 5000 sensors_log.csv & burnP6 & burnP6
The first parameter is the time between two temperature samples in milliseconds, and the second is the name of the output file. Depending on the number of available cores in the CPU, several instances of burnP6 (or an equivalent) should be started. The script parses the output from lm-sensors in a not very sophisticated way; if you have more or fewer cores, you will probably have to make modifications to the script.
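The actual rec_sensor_log script is linked above; a stripped-down sketch of the same idea might look like this. It assumes the typical 'Core 0: +45.0°C' lines produced by the coretemp driver and reads the frequency of cpu0 from sysfs (cpufrequtils exposes the same information via cpufreq-info):

import csv
import re
import subprocess
import time

CORE_RE = re.compile(r"^(Core \d+):\s+\+([0-9.]+)")

with open("sensors_log.csv", "w") as f:
    writer = csv.writer(f)
    while True:
        out = subprocess.check_output(["sensors"]).decode("utf-8", "replace")
        temps = [m.group(2) for m in map(CORE_RE.match, out.splitlines()) if m]
        with open("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq") as s:
            freq = s.read().strip()  # current frequency of cpu0 in kHz
        writer.writerow([time.time()] + temps + [freq])
        f.flush()
        time.sleep(5.0)  # 5000 ms sample interval, matching the example above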
The visualization script (vis_log.m) will prompt for the input filename. Frequency scaling might occur at high temperatures; if this is detected, a vertical line is drawn in the plot.
Here is a sample plot that was created during a quick test on my laptop. Under stress, the temperature rises to approx. 70°C, and the fan spins faster to keep the temperature at about this level. As soon as the stress is removed, the temperature quickly falls below 50°C.
Labels:
cpuburn,
csv,
lm-sensors,
mpstat,
octave,
temperature
2013-03-08
Remarks on Latex Spell Checking
This post collects some remarks on how to improve the language of a (bigger) LaTeX document. A lot of the time, technical issues (compiling, fixing syntax errors, adjusting images, tables, etc.) and the LaTeX typesetting itself ('How do I ... in LaTeX?') draw attention away from the actual content of the document. I think that ordinary word processors like LibreOffice Writer have a significant advantage over LaTeX here, even though the result will not be as beautiful. Below are some techniques I found helpful to mitigate this problem.
You probably want to improve the quality of a document in several stages: there is (or should be) spell checking happening on an everyday basis, and after certain periods of time you will want to do bigger reviews to improve the overall consistency of the document.
The first thing would be to use the editor's integrated spell checking capabilities. In Emacs, I found Flyspell mode quite convenient (M-x flyspell-mode); otherwise, ispell can be run from within Emacs (M-x ispell-buffer). The downside is that this generates lots of false positives if you have a more technical document with lots of acronyms and technical expressions, so it might be quite distracting to have lots of words marked on the screen.
Alternatively you might want to generate spelling reports for your whole document once in a while. The following short script can be used to generate a spelling report for several .tex files using Hunspell.
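A minimal version of such a script might look like this (a sketch; it assumes Hunspell's -l switch, which lists misspelled words, its -t switch, which enables TeX/LaTeX input mode, and that the .tex files live in chapters/):

import glob
import subprocess

with open("spelling_report.txt", "w") as report:
    for tex in sorted(glob.glob("chapters/*.tex")):
        # hunspell -l prints the misspelled words of the given file
        out = subprocess.check_output(["hunspell", "-l", "-t", tex]).decode("utf-8", "replace")
        words = sorted(set(out.split()))
        report.write("%s: %s\n" % (tex, ", ".join(words)))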
Usually, bigger LaTeX documents will be spread over many different files. Finding a string in several files and opening every file that contains it can be quickly accomplished by issuing:
$> find . -name "*.tex" | xargs grep "some word" -isl | xargs emacs
More sophisticated spell checking and grammar checking can be done using LanguageTool. Unfortunately, it cannot be used with LaTeX directly. detex can be used to remove TeX commands from LaTeX files. This is a bit tedious, because it gives you lots of false positives, but you will probably discover some new language mistakes this way.
$> find ./chapters/ -name "*.tex" -exec detex -n {} \; >> doc_detexed.txt
$> java -jar LanguageTool.jar -l en-US -c utf-8 doc_detexed.txt > doc_languagetool_report.txt
Microsoft Office's spell checking and grammar checking are superior to the tools mentioned above. To be able to open your LaTeX document in Microsoft Word, the tool latex2rtf can be used: instead of compiling your document to PDF, it generates an .rtf file. This is also an alternative to using detex if you want to use LibreOffice and LanguageTool. If you do not have access to a Microsoft Office installation, GDocs might also be an alternative.
Xournal allows you to annotate PDF documents. This is handy because writing annotations directly into the PDF lets you really focus on the content and saves lots of paper if you would otherwise frequently print intermediate stages of your document.
Labels:
detex,
flyspell mode,
hunspell,
ispell,
LanguageTool,
latex,
latex2rtf,
spell checking,
xournal
2013-03-02
Print PDFs by Creating Multiple Print Jobs
Printing long PDF documents is sometimes tedious, especially if you have a dull network printer. You might know the case: for mysterious reasons, the printer just does nothing for a very long time and thereafter seems to have forgotten about the actual print job.
Sometimes printing can still be accomplished by sending only a few pages of the PDF document in distinct print jobs. Doing this manually is also tedious, so here is a simple Python script that breaks up a PDF document into many PDFs of just 2 pages each (using pdftk) and spools them via lpr (the package cups-pdf is required for this).
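The linked script is the reference; in outline, the approach might look like this (the input file name is an assumption; pdftk's dump_data output contains a NumberOfPages line):

import re
import subprocess

SRC = "document.pdf"  # hypothetical input file

# ask pdftk for the total page count
data = subprocess.check_output(["pdftk", SRC, "dump_data"]).decode("utf-8", "replace")
pages = int(re.search(r"NumberOfPages: (\d+)", data).group(1))

for start in range(1, pages + 1, 2):
    end = min(start + 1, pages)
    chunk = "chunk_%03d.pdf" % start
    # extract a two-page slice and hand it to the print spooler
    subprocess.check_call(["pdftk", SRC, "cat", "%d-%d" % (start, end), "output", chunk])
    subprocess.check_call(["lpr", chunk])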
2013-02-07
Detect Printer Steganography
I was curious whether it was possible to visualize the pattern. In the article mentioned above, a test setup with a microscope and a blue LED is described. Alternatively, I found that it was sufficient to use a simple scanner and do some post-processing with GIMP. Many scanners can scan with resolutions and color depths equal to or higher than what the printer can print.
The test page was printed at 600 dpi and also scanned at this resolution, with the color depth set to 16 bit. To make the little dots visible, an edge detection filter (in GIMP 2.6 under Filters, Edge-Detect, Edge...) was used. In the resulting image, the dots were most visible in the blue channel. The other two channels were removed, as described here, by adding an entirely green and an entirely red layer and setting the blending mode to subtract.
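The same channel isolation and edge detection can also be reproduced in code instead of GIMP; a sketch with OpenCV (the file name scan.png and the choice of a Laplacian filter are assumptions):

import cv2
import numpy as np

img = cv2.imread("scan.png")  # 600 dpi scan of the printed test page
blue = img[:, :, 0]  # OpenCV loads images in BGR order

# edge detection makes the faint dots stand out; the blue channel shows
# them best because the yellow dots absorb blue light
edges = np.absolute(cv2.Laplacian(blue, cv2.CV_64F))
edges = (255 * edges / edges.max()).astype(np.uint8)
cv2.imwrite("dots_enhanced.png", edges)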
An excerpt of the empty part of the page can be seen here: