
Category Archives: I.T.

  • Google drive API file upload script

If you want to upload files to Google Drive, you will naturally gravitate towards the Google Drive API. While reading about the APIs that Google publishes for almost anything, you will learn about their SDKs, which provide easy functions for interfacing with the API from various programming languages.

The problem is that the more I used the SDK, the more confused I got. The documentation is often unclear, and not all bindings are implemented across all languages. But above all, the thing that made me dislike the SDK is the fact that uploading a file to Google Drive took 5 times as much memory as the file itself. Meaning that the data structure they use and how they pass it around is SUPER LAME. Not a huge deal if you’re uploading a few doc files but definitely crappy for GB-range files. It’s just not right and completes the pattern of “meh” surrounding the SDK.

What is clear & consistently documented is every single API call you can make, so it was time to go straight to the API. Sure, there is still quite a bit of poking involved in getting something working exactly right, but my experience with the API has been a lot better. It has also helped me understand the mindset & design, so figuring out new things is much faster.

    How?

HTTP: every API call uses it. You can use any technology you are familiar with to make your HTTP calls, but I have a penchant for PHP + cURL.
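
To give an idea of what going straight to the API looks like, here is a minimal sketch (not the script itself) of the first hurdle, authentication: trading a long-lived refresh token for a short-lived access token with nothing but cURL. It targets the OAuth2 token endpoint Google documents today; the client ID, client secret & refresh token are placeholders that come from your own API console project.

<?php

// exchange a long-lived refresh token for a short-lived access token (OAuth2)
function get_access_token( $client_id, $client_secret, $refresh_token ) {
    $ch = curl_init( 'https://oauth2.googleapis.com/token' ) ;
    curl_setopt( $ch, CURLOPT_POST, true ) ;
    curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true ) ;
    curl_setopt( $ch, CURLOPT_POSTFIELDS, http_build_query( array(
        'client_id'     => $client_id,
        'client_secret' => $client_secret,
        'refresh_token' => $refresh_token,
        'grant_type'    => 'refresh_token',
    ) ) ) ;
    $response = curl_exec( $ch ) ;
    curl_close( $ch ) ;
    $data = json_decode( $response, true ) ;
    return isset( $data['access_token'] ) ? $data['access_token'] : false ;
}

?>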

    The script

It reads a file in chunks and uploads them consecutively. To do so it uses the API’s resumable uploads. As such it will never consume more RAM than the configurable chunk size.

    download

    It still needs a few improvements at the moment but it’s functional.

    What it solves

    • authentication: getting a new access token from a refresh token as needed
    • curl custom HTTP requests with custom headers
    • file chunking
    • Google Drive API resumable upload
    • passing JSON encoded requests in body
    • exponential backoff

    Doesn’t sound like much but it took a while to piece it all together.
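
To tie the pieces above together, here is a rough sketch of the upload flow itself, written against the Drive v3 resumable endpoint as it is documented today (the downloadable script targets the endpoints that were current when it was written, so treat this as an illustration rather than a copy of it). The file path, chunk size and retry count are made up for the example, and get_access_token() refers to the authentication sketch earlier.

<?php

// assumptions for the example: token, file & chunk size
$access_token = get_access_token( '<client id>', '<client secret>', '<refresh token>' ) ;
$file_path    = '/tmp/big_file.bin' ;
$chunk_size   = 4 * 256 * 1024 ;    // must be a multiple of 256KB
$file_size    = filesize( $file_path ) ;

// 1) open a resumable upload session: the file metadata goes as JSON in the body,
//    the session URI comes back in the Location header
$ch = curl_init( 'https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable' ) ;
curl_setopt( $ch, CURLOPT_POST, true ) ;
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true ) ;
curl_setopt( $ch, CURLOPT_HEADER, true ) ;
curl_setopt( $ch, CURLOPT_HTTPHEADER, array(
    'Authorization: Bearer ' . $access_token,
    'Content-Type: application/json; charset=UTF-8',
    'X-Upload-Content-Length: ' . $file_size,
) ) ;
curl_setopt( $ch, CURLOPT_POSTFIELDS, json_encode( array( 'name' => basename( $file_path ) ) ) ) ;
$response = curl_exec( $ch ) ;
curl_close( $ch ) ;
if( !preg_match( '/Location:\s*(\S+)/i', $response, $m ) ) {
    die( "Unable to open upload session\n" ) ;
}
$session_uri = $m[1] ;

// 2) send the file chunk by chunk so we never hold more than $chunk_size in RAM
$fh = fopen( $file_path, 'rb' ) ;
$offset = 0 ;
while( $offset < $file_size ) {
    $chunk = fread( $fh, $chunk_size ) ;
    $last  = $offset + strlen( $chunk ) - 1 ;

    // exponential backoff: retry a failed chunk with growing pauses
    for( $attempt = 0 ; $attempt < 5 ; $attempt++ ) {
        $ch = curl_init( $session_uri ) ;
        curl_setopt( $ch, CURLOPT_CUSTOMREQUEST, 'PUT' ) ;
        curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true ) ;
        curl_setopt( $ch, CURLOPT_HTTPHEADER, array(
            'Authorization: Bearer ' . $access_token,
            'Content-Range: bytes ' . $offset . '-' . $last . '/' . $file_size,
        ) ) ;
        curl_setopt( $ch, CURLOPT_POSTFIELDS, $chunk ) ;
        curl_exec( $ch ) ;
        $code = curl_getinfo( $ch, CURLINFO_HTTP_CODE ) ;
        curl_close( $ch ) ;
        // 308 means the chunk was accepted; 200/201 means the upload is complete
        if( $code==308 || $code==200 || $code==201 ) {
            break ;
        }
        sleep( pow( 2, $attempt ) ) ;
    }
    $offset = $last + 1 ;
}
fclose( $fh ) ;

?>

The 308 status code is Google’s way of acknowledging an intermediate chunk; anything else gets retried after an exponentially growing pause, which is the backoff mentioned above.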

• Dr. Meter B003+ 300X USB digital endoscope/microscope camera

    TLDR: an awesome cheap device wrapped in Chinese funkiness.

    The details

It is hard, very hard, not to pay attention to all the funny details that surround the device. But it’s a solid device that performs great for a good price. As far as I can tell, it does not do zooming per se, it is only able to get very close to a subject, and thus when the resulting picture is displayed on a bigger screen, small details are visible. As such, what you see is strictly dependent on how close you stick the camera to your subject. In fact the camera has a focal range of a few millimeters to infinity, which means you can use it as a regular camera, but you’ll have to turn the focus knob quite a bit for that.

First, some pics of what it’s capable of; they are seriously lacking online.

[photos: the device itself, on its little tripod, and the lens]

    Everything else

    The device has multiple attachments referred to as “beauty inspection tools” which are meant to stick the camera in various orifices of one’s body. They are nicely sealed in sterilized bags (but not the anal one).

    Some of the various “beauty inspection tools”…

    The unboxing feels like opening a Chinese treasure chest, the mechanism, the texture, the looks; this product is made in China and not pretending otherwise. What else feels Chinese is pretty much anything written in English. It’s super funny to read it all.

    Technically

The device is recognized as a standard camera in Windows, MacOS & Linux! No extra drivers necessary. This is what I love about buying products from smaller companies: they go after existing standards. As such you can open it with any webcam software; I took my test shots with Photo Booth. They provide some software for filming & measuring among other things, but I care not about this functionality so I won’t spend the time loading it.

I’ll take it to the beehive this weekend and we’ll see how it does there.

  • Chicken cam – back online!

    But with a serious loss of functionality. Given the internet connection that I have (cellular) I can’t reasonably set it up to do live streaming. I’ve also disabled interaction with the cam. What’s left is an image uploaded every hour. Not super duper cool but I’ll take what I can get in this neck of the woods.

    Hopefully this will get better when better internet is available.

• ZFS send/receive across different transport mechanisms

    Sending ZFS snapshots across the wires can be done via multiple mechanisms. Here are examples of how you can go about it and what the strengths and weaknesses are for each approach.

    SSH

    strengths: encryption / 1 command on the sender

    weaknesses: slowest

    command:

    zfs send tank/volume@snapshot | ssh user@receiver.domain.com zfs receive tank/new_volume

    NetCat

    strengths: pretty fast

    weaknesses: no encryption / 2 commands on each side that need to happen in sync

    command:

    on the receiver

    netcat -w 30 -l -p 1337 | zfs receive tank/new_volume

    on the sender

    zfs send tank/volume@snapshot | nc receiver.domain.com 1337

    (make sure that port 1337 is open)

    MBuffer

    strengths: fastest

    weaknesses: no encryption / 2 commands on each side that need to happen in sync

    command:

    on the receiver

mbuffer -s 128k -m 1G -I 1337 | zfs receive tank/new_volume

    on the sender

    zfs send tank/volume@snapshot | mbuffer -s 128k -m 1G -O receiver.domain.com:1337

    (make sure that port 1337 is open)

    SSH + Mbuffer

    strengths: 1 command / encryption

weaknesses: seems CPU-bound by SSH encryption; may become viable in the future?

    command:

    zfs send tank/volume@snapshot | mbuffer -q -v 0 -s 128k -m 1G | ssh root@receiver.domain.com 'mbuffer -s 128k -m 1G | zfs receive tank/new_volume'

    Finally, here is a pretty graph of the relative time each approach takes:

    SSH + MBuffer would seem like the best of both worlds (speed & encryption), unfortunately it seems as though CPU becomes a bottleneck when doing SSH encryption.

  • MDNS/Bonjour printer discovery script

Here’s a script I wrote whose purpose is to discover the printers that are currently being advertised by Bonjour on the network. The reason I wrote it was for a Nagios check that would in turn verify that our printers were present. Writing it took me through the meanders of MDNS in Python & on Linux with multiple VLANs. Let’s just say non-trivial.

    Download

    find_mdns_printers_1.0.tar.gz

    Sample output

  • FreeBSD 9.0: higher MTU & NIC bonding

Here is some information that took me a good while to gather.

With the igb driver in FreeBSD, the number of mbuf clusters needed follows a mathematical formula involving the number of CPUs & the desired MTU. Unfortunately, the allocation limit is currently hard set. On enterprise machines with many cores and higher MTUs, it is quite easy to reach this limit. It expresses itself with the following error message after an ifconfig:

    igb0: Could not setup receive structures

    This limit can be overridden with the following in /etc/sysctl.conf

    kern.ipc.nmbclusters=131072
    kern.ipc.nmbjumbo9=38400

These are the values that worked for 16 cores & an MTU of 9000.

While we’re at it, it took me a while to nail the exact syntax required for NIC bonding, so here it is:

    /etc/rc.conf

    if_lagg_load="YES"
    ifconfig_igb0="mtu 9000 UP"
    ifconfig_igb1="mtu 9000 UP"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto failover laggport igb0 laggport igb1 192.168.0.123 netmask 255.255.255.0"

    As far as I can tell, capitalization matters…

  • The impairing lack of light pollution

    When we lived in the city, ambient light pollution was such that I could set my CCTV cams to a certain brightness/contrast and the limited auto adjustments they did were enough to cope with day & night. In the middle of the forest, the night gets full on #000000 dark. The poor cams can’t adjust and I need to pick whether I want to record at night and get white frames during the day, or at daytime and get black frames during the night.

    I wrote the following script which computes the average brightness of a cam’s current frame and issues more drastic adjustments if needed. It is obviously tailored for my FI8918Ws but the same idea can be used for others.

    #!/usr/bin/php
    <?php
    
    $img = @imagecreatefromjpeg( 'http://192.168.1.203:8003/snapshot.cgi?user=<username>&pwd=<password>' ) ;
    if( $img===false ) {
        die( "Unable to open image" ) ;
    }
    
    $w = imagesx( $img ) ;
    $h = imagesy( $img ) ;
    
    $total_r = 0 ;
    $total_g = 0 ;
    $total_b = 0 ;
    for( $i=0 ; $i<$w ; $i++ ) {
        for( $j=0 ; $j<$h ; $j++ ) {
            $rgb = imagecolorat( $img, $i, $j ) ;
            $total_r += ($rgb >> 16) & 0xFF;
            $total_g += ($rgb >> 8) & 0xFF;
            $total_b += $rgb & 0xFF;
        }
    }
    
    $average_brightness = round( ( $total_r / ($w*$h) + $total_g / ($w*$h) + $total_b / ($w*$h) ) / 3 ) ;
echo $average_brightness, "\n" ;
    
if( $average_brightness<30 ) {
    // too dark: switch the cam to its night settings (mode, then contrast, then brightness)
    echo "night time!\n" ;
    echo "mode\n" ;
    $result = file_get_contents( 'http://192.168.1.203:8003/camera_control.cgi?param=3&value=0&user=<username>&pwd=<password>' ) ;
    sleep( 10 ) ;
    echo "contrast\n" ;
    $result = file_get_contents( 'http://192.168.1.203:8003/camera_control.cgi?param=2&value=6&user=<username>&pwd=<password>' ) ;
    sleep( 10 ) ;
    echo "brightness\n" ;
    $result = file_get_contents( 'http://192.168.1.203:8003/camera_control.cgi?param=1&value=240&user=<username>&pwd=<password>' ) ;
} else if( $average_brightness>170 ) {
    // too bright: switch the cam to its day settings
    echo "day time!\n" ;
    echo "mode\n" ;
    $result = file_get_contents( 'http://192.168.1.203:8003/camera_control.cgi?param=3&value=2&user=<username>&pwd=<password>' ) ;
    sleep( 10 ) ;
    echo "contrast\n" ;
    $result = file_get_contents( 'http://192.168.1.203:8003/camera_control.cgi?param=2&value=4&user=<username>&pwd=<password>' ) ;
    sleep( 10 ) ;
    echo "brightness\n" ;
    $result = file_get_contents( 'http://192.168.1.203:8003/camera_control.cgi?param=1&value=64&user=<username>&pwd=<password>' ) ;
}
    
?>
  • Verizon’s 4620L, a great device for the technically inclined

My family recently moved to a fairly remote area, and the question of internet access was a major one during the couple of months leading up to the move. Besides satellite & dial-up, our only option was Verizon’s MiFi (3G, or 4G if you’re lucky) in the form of a hotspot device: the 4620L.

I was afraid that the 4620L would try to be too smart and not let you tinker with it very much; very few decent reviews are available online and the official documentation is seriously lacking. Fortunately this couldn’t be further from the truth: it is a great little device that performs well and lets you turn all its knobs.

    When using “USB tethered mode” I was afraid I’d need specific drivers and a software suite running but lo and behold, it actually just pretends to be an ethernet device over USB. Absolutely perfect to put a Linux router in front of it!

One thing that did not get properly QA’d is the “Enable DHCP Server” checkbox, which simply doesn’t work. But guess what, I want to do my own routing and I’d like to avoid NATing from the 4620L to the Linux router. One way to circumvent this is to use the “Config File Download” and “Config File Upload” options, which are meant as a way to back up & restore the configuration, but since the file is all intuitively labeled XML, it’s easy to disable the DHCP server from there.

While you’re in there, you can also override the maximum number of “Available Wi-fi Connections” (5 when using 3G). They probably have this restriction so regular Joe user doesn’t hook up a gazillion devices and complain about speed over 3G. Reaching this limit is very easy nowadays.

    A new mission

Verizon’s plan is pretty pricey and very metered… All we get is 5GB per month; each additional 1GB will cost us $10. Ouch… I need to configure the network to consume as few bytes as possible. Netflix is out, AdBlock is in, automatic updates of various types are out. Above all, my home server will now be doing some serious routing, the goal of which is to allow devices to be on the home intranet while minimizing their use of the internet.

    No inbound connection

That’s right, the IP you get from Verizon is in the private range (RFC 1918), which means they are doing some NATing of their own. You can forward ports all you want on your 4620L; it will have no effect. Your only option is some cumbersome hole punching.

We’ll be talking routing in a future post. I would have liked to find this information about the device & Verizon’s setup myself, so I wanted to put it out there sooner rather than later.

  • Change default home Unity lens

We don’t necessarily want the home lens to be the default one in Unity, but unlike other lenses it is hardcoded left & right. Here’s a little trick that will let you pick a different lens as the default for when you click on the Dash.

    edit the file: /usr/share/unity-2d/shell/dash/Dash.qml

    replace line 79 “onDashActivateHome: activateHome()” by “onDashActivateHome: activateLens(X)” where X is the index of the lens you want to load (count from left to right starting from 0).

    You’ll want to restart Unity for this to take effect.

    Done!

  • Loopback & crypt: a filesystem, within an encrypted partition, within a file

So here we are, 2012, and physical media are going away really fast. We won’t even talk about CDs, which have been relegated to the role of plastic dust collectors; even hard drives are being abstracted away by a myriad of cloud-based solutions. Their purpose is shifting towards being a container for the OS and nothing else. Filesystems & their hierarchies are becoming hidden in a bid to remove any need to organize files; rather, you are supposed to throw it all up in the cloud and search on metadata.

    While moving away from physical media is convenient and inevitable, I like the hierarchical organization that directories provide. What’s more intuitive than a labeled container with stuff in it?

    How can we detach our hard drives from their physical shells, move them around in an omnipresent cloud and keep them secure?

    By creating a file, attaching it to loopback & creating an encrypted partition in it!

    Here’s how to do it
    • Create a file that will be your soft hard drive with:
    dd if=/dev/zero of=/tmp/ffs bs=1024 count=524288

This will create a 512MB file (524288 blocks × 1024 bytes).

    • Make sure that the loopback device #0 is free:
    losetup /dev/loop0

    You should see something telling you that there is “No such device or address”.

    • Attach the soft hard drive to the loopback device:
    sudo losetup /dev/loop0 /tmp/ffs
    • And then make sure it was indeed attached by re-running:
    losetup /dev/loop0
    • Create an encrypted partition on your attached soft hard drive:
    sudo cryptsetup --verify-passphrase luksFormat /dev/loop0 -c aes -s 256 -h sha256
    • Open your encrypted partition:
    sudo cryptsetup luksOpen /dev/loop0 ffs
    • Create a filesystem in it:
    sudo mkfs.ext3 -m 1 /dev/mapper/ffs
    • And mount it like a regular disk:
    sudo mount /dev/mapper/ffs /mnt
    • When you are done using your encrypted soft hard drive you will want to umount it:
    sudo umount /mnt
    • Close it:
    sudo cryptsetup luksClose ffs
    • Detach it from loopback:
sudo losetup -d /dev/loop0

    These steps can be automated of course. As a quick reminder, using the drive goes “loopback attach -> crypt open -> mount” and when you’re done it’s “umount -> crypt close -> loopback detach”.

    That’s it! media-less & secure storage.

    Tested on: Ubuntu 12.04 64b