Canon's Ink Sink

My Canon Pixma recently stopped working with an error that seems unrecoverable, which basically means the device has reached its end of life. I opened it up to salvage some electronic parts, like the motors. What I found was horrifying.

Making Things With Maths (Steven Wittens at Full Frontal 2012)

Steven Wittens talks about having an enjoyable experience with math and what he does with it.

Ich bin MMT

After six years working as a web developer I decided to take a break and go back to university. To the University of Applied Sciences in Salzburg, to be precise. I'll get a master's degree in Multimedia Technology, solidify what I learned in practice while working in the field, and go way outside my comfort zone by digging deeper into computer science, machine learning, project management, and more.

I guess that's it

After six years of service, my MacBook Pro (2006) is broken. It had been in a shoddy state for a while, but now it's time for retirement.

Hello iPhone 4S

This new iPhone 4S will relieve my iPhone 3G of its duty. While I was helping a friend move into his new place, it started raining, and somehow water got between the actual screen and the cover glass, resulting in a lot of pixel errors. On top of that, the newer iOS versions have made it slower and more frustrating to use.

Chase Jarvis and Jasmine Star on Wedding Photography

Chase Jarvis interviewing Jasmine Star about her life as a wedding photographer. They talk about how she got started, the meat of the business and the challenges she faces.

iPad light painting

BERG made a short promotional video for Dentsu London. It involves iPads, long-exposure photography and stop motion.

Backup Redundancy Redundancy

Chase Jarvis and his crew share some insights about their workflow from camera to final product - always with data redundancy in mind.

Eyjafjallajökull Eruption Timelapse by Sean Stiegmeier

An amazing timelapse of what went down in Iceland.

How to split and join large files

Sometimes it might be necessary to chop up a large file into several parts (e.g. to place it on drives that are too small for the whole file). Keep in mind that corruption of any of the parts will result in a corrupt file once it is stitched back together.

To split a large file:

split -b 1m /path/to/large/file /path/to/output/file/prefix

-b defines the size of the chunks (1m stands for 1 megabyte). Please refer to man split for all the options the tool offers.

This will give you a bunch of files whose names start with the defined prefix plus an alphabetical suffix (aa, ab, ac, …, depending on how many parts need to be created).

To join the parts back into one file:

cat prefix* > new_filename
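To see the whole round trip in one place, here's a sketch with made-up file names (`bigfile` and the `part_` prefix are just examples); `cmp` only succeeds if the rejoined file is byte-identical to the original:

```shell
# Create a ~3 MB test file as a stand-in for the real large file.
dd if=/dev/urandom of=bigfile bs=1024 count=3000 2>/dev/null

# Split it into 1 MiB chunks: part_aa, part_ab, part_ac, ...
split -b 1048576 bigfile part_

# Join the chunks back together (the shell glob expands them in order).
cat part_* > bigfile.joined

# Byte-for-byte comparison; succeeds only if every chunk survived intact.
cmp bigfile bigfile.joined && echo "round trip OK"
```

The glob works because the shell expands `part_*` in lexicographic order, which matches the order split created the chunks in.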

Writing and reading hard drive images

Be careful with these commands: you can lose a lot of data quickly! Also be mindful of the disk space required for any of these operations!

With Unix tools it's pretty straightforward to create and play back images of hard drives.

Create an image of a hard drive:

dd if=/dev/hda of=/mnt/hdb1/myimage.img

Write the image back onto the hard drive:

dd if=/mnt/hdb1/myimage.img of=/dev/hda
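Since raw images can be huge, one variant (my addition, not part of the original tip) is to pipe dd through gzip so the image is compressed on the fly. The sketch below uses a plain file as a stand-in for a block device, since dd treats both the same:

```shell
# A stand-in "disk": a plain file filled with random data.
dd if=/dev/urandom of=fake_disk bs=1024 count=100 2>/dev/null

# Image the disk, compressing on the fly so the image takes less space.
dd if=fake_disk bs=64K 2>/dev/null | gzip -c > disk.img.gz

# Restore: decompress and write the stream back to the target.
gunzip -c disk.img.gz | dd of=restored_disk bs=64K 2>/dev/null

# Verify the restored copy matches the original.
cmp fake_disk restored_disk && echo "image restored correctly"
```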

How to make a static copy of a website

wget -k -K -E -r -l 10 -p -N -F -nH -w 2 http://website.com/
  • -k: convert links so they work in the local copy
  • -K: keep the original versions of files without the conversions made by wget
  • -E: add an .html extension to HTML files (if they don't already have an htm(l) extension)
  • -r: recursive… of course we want to make a recursive copy
  • -l 10: the maximum level of recursion. If you have a really big website you may need a higher number, but 10 levels should be enough.
  • -p: download all files necessary to display each page (CSS, JS, images)
  • -N: turn on time-stamping
  • -F: when input is read from a file, force it to be treated as an HTML file
  • -nH: by default, wget puts files in a directory named after the site's hostname. This disables the creation of those hostname directories and puts everything in the current directory.
  • -w 2: be a good neighbor and wait 2 seconds between requests so the server isn't overwhelmed

Via StackOverflow