
Category Archives: I.T.

  • Feature feature feature draw feature feature

    Since the success of the Mandala maker, I’ve been pumping out a ton of features, improvements and bug fixes. They are too numerous to list but a few stand out.

    • Collaborative editing: using websockets, multiple people can draw mandalas in the same session.
    • Drawing without mandalas: there are only so many mandalas one can collaboratively draw, so I created http://draw.akrin.com, which leverages all the Mandalagaba goodness for drawing and removes the mandala-specific layer.
    • Read-only mode: for artists who like to livestream their drawing, I created a read-only mode for the collaboration. This way, people can watch but not participate.
    • An iOS app was born
    • High-resolution renders are available for $2; the charge helps with server costs and makes things a bit fairer if someone is going to make money using the tool.
    • Not visible but noteworthy nonetheless: an intricate server strategy was put in place to absorb future traffic waves, and load balancing had to be built from scratch because of the collaboration layer.
    • many, many, many other little things :)

    In terms of use, while the initial tsunami has died down, the project was picked up by artists and educators. I can’t post all the pictures for privacy reasons, but I can’t tell you how awesome it feels to receive pictures like these:

    Kids enjoying a Mandala making lab somewhere in China

    Artist Peter Draws created more amazing work:

    The mandala maker was deployed on big touch screens, which turned it into a more social activity, much like arcade games.

    Here’s draw.akrin.com.

  • HTML Canvas smooth drawing & websocket live collaboration

    Intro

    For a while I’ve been polishing a way to have not only a smooth drawing/writing algorithm for HTML canvases, but also to have it “streamed” over the network for live collaboration. While the work has mostly been integrated into projects such as Mandalagaba, here I present it in its most basic form so that it may be dissected.

    Demo

    Draw by dragging your mouse/finger/stylus below; fire up another browser to test the network repeat. The canvas is used by others online (sorry for anything obscene the internet has left on it) and is cleared every hour.

    Quick start

    1. download & decompress html_canvas_smooth_writing.tar.gz
    2. if you don’t have it already, install NodeJS
    3. run the websocket server
      node websocket_server.js
    4. edit index.html and replace all occurrences of “ben.akrin.com” with the host/IP which is running your websocket server. If you are testing on your computer, 127.0.0.1 will do. Alternatively, you can leave it set to “ben.akrin.com” and use my websocket server, in which case steps 2 & 3 aren’t necessary, and you’ll have limited latitude as to how many changes you can implement. But it’s perfect for just trying & dissecting the code.
    5. navigate to index.html

    (tested on Mac, Raspbian & Ubuntu)

    Rendering Pen Strokes

    The usual method

    Drawing on an HTML Canvas is usually done by collecting coordinates at which “touch” is being detected and drawing straight lines in between. While this makes for a simple implementation with decent results, it has multiple issues:

    • straight lines do not represent the curvature of human drawing & writing well
    • the joins between lines of various orientations can add seams
    • these problems are exacerbated on devices which sample touch slowly, resulting in fewer coordinates to represent a pen stroke
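
    For reference, a minimal sketch of what this usual straight-line approach looks like in code (the element ID and event wiring here are illustrative, not taken from the demo):

    var canvas = document.getElementById( "drawing_canvas" ) ; // illustrative ID
    var context = canvas.getContext( "2d" ) ;
    var drawing = false ;

    canvas.addEventListener( "mousedown", function( e ) {
        drawing = true ;
        context.beginPath() ;
        context.moveTo( e.offsetX, e.offsetY ) ;
    } ) ;

    canvas.addEventListener( "mousemove", function( e ) {
        if( !drawing ) { return ; }
        // a straight segment is drawn between every pair of sampled coordinates
        context.lineTo( e.offsetX, e.offsetY ) ;
        context.stroke() ;
    } ) ;

    canvas.addEventListener( "mouseup", function() {
        drawing = false ;
    } ) ;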

    Here is a classic example of what the resulting strokes look like:

    The quadratic curve method

    To make drawing and writing smoother, we use quadratic curves to link our coordinates. Here’s a basic explanation of how it works:

    You need 2 canvases overlaid on top of each other (z-index is highly relevant here): the top canvas, temp_canvas, is the one that you draw on, while the bottom one, permanent_canvas, holds completed strokes.

    The reason for this is that a pen stroke gets redrawn entirely every time new coordinates come in: with quadratic curving, the final shape of a stroke is never fully known until all of its coordinates are. So every time coordinates come in (mouse move event), we clear temp_canvas and redraw the whole stroke. The operation happens fast enough to be invisible.

    When you are finished with your stroke (mouse up event), temp_canvas is cleared and the whole stroke is committed (redrawn) to the permanent canvas.
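
    To make the two-canvas dance concrete, here is a minimal sketch of the idea (canvas IDs, function names and the exact curve construction are illustrative; the downloadable archive above is the actual implementation):

    // redraws an entire stroke using quadratic curves: each sampled point acts as the
    // control point of a curve that ends at the midpoint between it and the next point
    function draw_stroke( context, points ) {
        if( points.length<2 ) { return ; }
        context.beginPath() ;
        context.moveTo( points[0].x, points[0].y ) ;
        for( var i=1 ; i<points.length-1 ; i++ ) {
            var mid_x = ( points[i].x + points[i+1].x ) / 2 ;
            var mid_y = ( points[i].y + points[i+1].y ) / 2 ;
            context.quadraticCurveTo( points[i].x, points[i].y, mid_x, mid_y ) ;
        }
        // finish with a segment to the most recent point
        context.lineTo( points[points.length-1].x, points[points.length-1].y ) ;
        context.stroke() ;
    }

    var temp_canvas = document.getElementById( "temp_canvas" ) ;
    var temp_context = temp_canvas.getContext( "2d" ) ;
    var permanent_context = document.getElementById( "permanent_canvas" ).getContext( "2d" ) ;
    var current_stroke = [] ;
    var drawing = false ;

    // mouse down: start a new stroke (listeners go on temp_canvas since it sits on top)
    temp_canvas.addEventListener( "mousedown", function( e ) {
        drawing = true ;
        current_stroke = [ {x:e.offsetX, y:e.offsetY} ] ;
    } ) ;

    // mouse move: accumulate coordinates, clear temp_canvas & redraw the whole stroke
    temp_canvas.addEventListener( "mousemove", function( e ) {
        if( !drawing ) { return ; }
        current_stroke.push( {x:e.offsetX, y:e.offsetY} ) ;
        temp_context.clearRect( 0, 0, temp_canvas.width, temp_canvas.height ) ;
        draw_stroke( temp_context, current_stroke ) ;
    } ) ;

    // mouse up: clear temp_canvas & commit the finished stroke to the permanent canvas
    temp_canvas.addEventListener( "mouseup", function() {
        drawing = false ;
        temp_context.clearRect( 0, 0, temp_canvas.width, temp_canvas.height ) ;
        draw_stroke( permanent_context, current_stroke ) ;
        current_stroke = [] ;
    } ) ;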

    What it looks like with our quadratic curving algorithm:

    Network Streaming

    Here is how we add network streaming to the pen strokes. Emitting your pen stroke to other clients is easy: you simply blast your current coordinates to a websocket which repeats them to the other clients. When you receive coordinates from other clients though, you can’t use temp_canvas to render them, as it might conflict with your current drawing. To this end we add yet another canvas, between permanent_canvas and temp_canvas, which will render network events.

    Much like temp_canvas, collaboration_canvas is meant for temporary rendering, and when other clients finish their pen stroke (mouse up), the instruction to commit it to the permanent canvas is sent through the websocket.
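
    As a sketch of how this can look on the client, building on the drawing sketch above (the socket setup, event names and the single-remote-stroke simplification are all illustrative; the real code also multiplexes sessions and participants):

    var socket = io.connect( "https://your.websocket.server:1337" ) ; // hypothetical host, assumes a Socket.IO-style client

    // emitting: called from the local mouse handlers; the server repeats these events to the other clients
    function emit_coordinates( x, y ) {
        socket.emit( "stroke_coordinates", JSON.stringify({x:x, y:y}) ) ;
    }
    function emit_stroke_done() {
        socket.emit( "stroke_done" ) ;
    }

    // receiving: other clients' in-progress strokes are rendered on collaboration_canvas
    // so they never conflict with the local temp_canvas
    var collaboration_context = document.getElementById( "collaboration_canvas" ).getContext( "2d" ) ;
    var remote_stroke = [] ;

    socket.on( "stroke_coordinates", function( data ) {
        remote_stroke.push( JSON.parse(data) ) ;
        collaboration_context.clearRect( 0, 0, collaboration_context.canvas.width, collaboration_context.canvas.height ) ;
        draw_stroke( collaboration_context, remote_stroke ) ;
    } ) ;

    socket.on( "stroke_done", function() {
        collaboration_context.clearRect( 0, 0, collaboration_context.canvas.width, collaboration_context.canvas.height ) ;
        draw_stroke( permanent_context, remote_stroke ) ;
        remote_stroke = [] ;
    } ) ;

    A real implementation would track one in-progress stroke per remote participant so that two people drawing at once don’t clobber each other; the sketch keeps a single remote_stroke for brevity.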

    That’s it

    It’s hard for me to document every step of the code: I don’t know your coding level, the code is asynchronous, and it has lots of bits & pieces which serve specific purposes. I hope, however, that with the basic theory explained and the code boiled down to its essentials, you can dissect it easily. Feel free to use the comments section for questions.

  • Multiaxis symmetrical drawing – A Mandala maker that doesn’t suck

    I’ve had a terrible time finding a good piece of software to draw mandalas with. To be honest, I don’t care what mandalas are, but I’m obsessed with how cool it is to draw with replicated symmetry on multiple axes.

    Without further ado, here it is (drag your mouse to draw):

    I hope you find it addictive.


    So wow… just wow, this blew up. This little tool ended up making the front page of Reddit in one amazing thread in which many people shared their mandalas. It was an amazing day in many ways. First of all, I’ve never seen so many positive comments in a single thread online; the number of people who seem to have been positively touched by this program is humbling. Drawing mandalas is apparently great stress relief for many, and I’ve received several personal notes on how much this program has done for them. I did not see that coming, to say the least. Then the tool was picked up by real artists, who pushed it to build creations I didn’t even know it was capable of. Lastly, my solar-powered Raspberry Pi handled hundreds of thousands of connections in a single day, which turned out to be a technical challenge on top of the overwhelming response.

    When I set out to create this program, I did not have the slightest idea that it would hit such a sweet spot; I mainly wanted to scratch an itch and couldn’t find any good apps out there. It is a true privilege to have had the chance to see so many people use a tool I made, and to have them report they were positively touched by it.

    Here are a few of the most amazing mandalas that were posted on the Reddit thread; this is what it looks like when real artists take over your tool :)

  • At the junction of I.T. & homesteading – continued

    Figuring out a good, repeatable & maintainable way to deploy Pi Zeros.

    My favorite project screws in action.

    The boxes I picked are very tight and leave no room for any other hardware.

    I made a hole for a cable gland, which is very helpful for cable strain relief, removing friction on sharp edges and making a proper cable entryway.

    This little guy is only monitoring temperature; I’ll need a bigger box for the greenhouse device as it needs a bit more hardware.

  • At the junction of I.T. & homesteading

    I started acquiring multiple Raspberry Pi Zeros in order to figure out a consistent deployment scheme for the various automation-related projects I envision for our homestead.

    For now I’ve simply deployed 2 DS18B20 temperature sensors: one on the existing Pi in the solar shed which serves this blog, and another on a Pi Zero in the house. It’s only sensing for now, which complements the data I’m gathering from the solar array.

    The Pi Zero consumes between 0.1 and 0.2 amps.

    Sample data being gathered.

    Here are my current install notes for the Pi Zero.

    To limit power consumption, add this to /etc/rc.local to turn off HDMI output

    /usr/bin/tvservice -o

    To be able to read from the temperature probe, add the following line to /boot/config.txt

    dtoverlay=w1-gpio:3

    Get the python-w1thermsensor package

    sudo apt-get install python-w1thermsensor

    Reboot & make sure devices are listed in /sys/bus/w1/devices

    The python code necessary to read the probe is:

    from w1thermsensor import W1ThermSensor
    # assuming only 1 sensor
    sensor = W1ThermSensor.get_available_sensors( [W1ThermSensor.THERM_SENSOR_DS18B20] )[0]
    temperature = sensor.get_temperature()
    if temperature is not None:
        print '%.1f' % (temperature)
    else:
        print "failed to get reading."
  • Nosy Monster

    Robin & I have been working on a rover for the land since his toy RC car broke. I opened it up to see if I could fix it, and as with many things, I quickly came to the conclusion that “I’ll just throw a Pi in there and do it myself”.

    Here’s the supposedly amphibian piece of shit that broke within 1 hour of use.

    The engines still worked, so I bought a Raspberry Pi Zero with a Pi cam and some super cheap SunFounder relays.

    From the ground up

    Before anything else, we introduced the notion of a relay. In the past we used Lego motors and batteries to apply power directly to actuators and create little robots. I just snipped one of the wires and had Robin create contact manually so he could make the correlation between a closed circuit and the motor going.

    With this “manual relay” in mind, we added a Pi-controlled relay to make him realize that what the new gizmos do is what he was doing by hand.

    OK, we have a web-controlled Lego motor going. Let’s see if we can replicate this with the RC car’s motors.

    First, the manual relay.

    Then with the Pi-controlled relays.

    Our first iteration looked like this and had a few issues: I separated the circuits powering the DC motors, and each was powered by only 1 AA battery. I also had many adjustments to make in the logic.

    Eventually, by adding a DROK voltage regulator, I was able to power everything from a single USB charger and prevent the motors from affecting the rest of the circuits.

    But the extra hardware is hard to fit in the Nosy Monster, so it’s unlikely that I will be able to fit the solar panel that would turn it into a completely autonomous robot. So I started googling for other potential frames and OH GOD I JUST STUMBLED INTO THE WORLD OF RC ROBOTICS. Oops…

    In any case, I broke down the control into a step-by-step process. Instead of pressing “Go” and “Stop”, pressing “Go” will make it go for 1 second. There are 2 reasons for this. First, web-based control introduces delays which make for a shitty live driving experience. Second, I would like this to behave like an actual rover on another planet: it reports back its sensor status and humans decide on the next steps to follow. Heck, I’m even thinking the next steps could be something that is voted on online. This would not be possible with “live” control.
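
    For illustration, the timed pulse itself is only a few lines on the Pi; here is a rough sketch assuming a Node process using the onoff package with a relay wired to GPIO 17 (the actual rover code and wiring are not shown in this post):

    var Gpio = require( "onoff" ).Gpio ;        // npm install onoff
    var forward_relay = new Gpio( 17, "out" ) ; // the GPIO pin is an assumption, match your wiring

    // close the relay for 1 second, then open it again
    function go_for_one_second() {
        forward_relay.writeSync( 1 ) ;
        setTimeout( function() {
            forward_relay.writeSync( 0 ) ;
        }, 1000 ) ;
    }

    go_for_one_second() ;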

  • Adding collaborative editing to the Ace web code editor with web sockets

    Using Ace‘s excellent API, it is relatively easy to enhance it to allow for live collaborative editing.

    The gist of what we’re doing here is to use Ace’s API to extract and apply deltas when changes occur in the editor, and to simply transmit them over a websocket that all clients are connected to. This example is functional but in no way covers everything a full code-editing collaboration could be. It’s meant to be simple, thus understandable, and it’s a great starting point for whatever other pieces of functionality you want to send across web sockets.

    Loading Ace in a webpage with some custom Javascript

    This is what your web page looks like: load Ace as instructed and add Javascript to handle interaction with the websocket server.

    <!DOCTYPE html>
    <html lang="en">
        <head>
    
            <title>Collaborative Ace Coding!</title>
    
            <style type="text/css" media="screen">
                #editor { 
                    position: absolute;
                    top: 0;
                    right: 0;
                    bottom: 0;
                    left: 0;
                }
            </style>
    
            <script src="https://<?=$_SERVER['HTTP_HOST']?>:1337/socket.io/socket.io.js"></script>
            <script src="ace-builds/src/ace.js" type="text/javascript" charset="utf-8"></script>
            <script src="ace-builds/src/ext-language_tools.js"></script>
            <script>
                var session_id = null ;
                var editor = null ;
                var collaborator = null ;
                var buffer_dumped = false ;
                var last_applied_change = null ;
                var just_cleared_buffer = null ;
    
                function Collaborator( session_id ) {
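                    // note: this host is from the original deployment; point it at the server running the Node.js websocket script below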
                    this.collaboration_socket = io.connect( "https://code.thayer.dartmouth.edu:1337", {query:'session_id=' + session_id} ) ;
    
                    this.collaboration_socket.on( "change", function(delta) {
                        delta = JSON.parse( delta ) ;
                        last_applied_change = delta ;
                        editor.getSession().getDocument().applyDeltas( [delta] ) ;
                    }.bind() ) ;
    
                    this.collaboration_socket.on( "clear_buffer", function() {
                        just_cleared_buffer = true ;
                        console.log( "setting editor empty" ) ;
                        editor.setValue( "" ) ;
                    }.bind() ) ;
                }
    
                Collaborator.prototype.change = function( delta ) {
                    this.collaboration_socket.emit( "change", delta ) ;
                }
    
                Collaborator.prototype.clear_buffer = function() {
                    this.collaboration_socket.emit( "clear_buffer" ) ;
                }
    
                Collaborator.prototype.dump_buffer = function() {
                    this.collaboration_socket.emit( "dump_buffer" ) ;
                }
    
                function body_loaded() {
    
                    session_id = "meow" ;
    
                    editor = ace.edit( "editor" ) ;
                    collaborator = new Collaborator( session_id ) ;
                    
    
                    // registering change callback
                    editor.on( "change", function( e ) {
                        // TODO, we could make things more efficient and not likely to conflict by keeping track of change IDs
                        if( last_applied_change!=e && !just_cleared_buffer ) {
                            collaborator.change( JSON.stringify(e) ) ;
                        }
                        just_cleared_buffer = false ;
                    }, false );
    
                    editor.setTheme( "ace/theme/monokai") ;
                    editor.$blockScrolling = Infinity ;
    
                    collaborator.dump_buffer() ;
    
                    document.getElementsByTagName('textarea')[0].focus() ;
                    last_applied_change = null ;
                    just_cleared_buffer = false ;
                }
            </script>
        </head>
    
        <body onLoad="body_loaded()">
            <div id="editor"></div>
        </body>
    </html>
    

    Parallel to this, run the following Node.js server script

    Following is the Node.js websocket server which must be instantiated on the same server serving the web page above. It needs to be up for the page above to work.

    1. Make sure to have port 1337 open in the same capacity as ports 80 & 443; this is what the server listens on.
    2. Make sure to update the paths to the SSL certs; we use SSL on the websocket server so that browsers can run the websocket Javascript regardless of whether their original context is SSL or not.
    3. You need to have Socket.IO installed (npm install socket.io).
    // config variables
    verbose = false ;
    session_directory = "/tmp" ; // it has to exist
    
    /* https specific */
    var https = require('https'),
        fs =    require('fs');
    
    var options = {
        key:    fs.readFileSync('/path/to/your/ssl.key'),
        cert:   fs.readFileSync('/path/to/your/ssl.crt'),
        ca:     fs.readFileSync('/path/to/your/CA.crt')
    };
    var app = https.createServer(options);
    io = require('socket.io').listen(app);     //socket.io server listens to https connections
    app.listen(1337, "0.0.0.0");
    
    // will use the following for file IO
    var fs = require( "fs" ) ;
    
    //io = require('socket.io').listen(2015) ;
    if( verbose ) { console.log( "> server launched" ) ; }
    
    collaborations = [] ;
    socket_id_to_session_id = [] ;
    
    io.sockets.on('connection', function(socket) {
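        // note: socket.manager.handshaken is the Socket.IO 0.9 API; on Socket.IO 1.x+ the equivalent is socket.handshake.query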
        var session_id = socket.manager.handshaken[socket.id].query['session_id'] ;
    
        socket_id_to_session_id[socket.id] = session_id ;
    
        if( verbose ) { console.log( session_id + " connected on socket " + socket.id ) ; }
    
    
        if( !(session_id in collaborations) ) {
            // not in memory but is is on the filesystem?
            if( file_exists(session_directory + "/" + session_id) ) {
                if( verbose ) { console.log( "   session terminated previously, pulling back from filesystem" ) ; }
                var data = read_file( session_directory + "/" + session_id ) ;
                if( data!==false ) {
                    collaborations[session_id] = {'cached_instructions':JSON.parse(data), 'participants':[]} ;
                } else {
                    // something went wrong, we start from scratch
                    collaborations[session_id] = {'cached_instructions':[], 'participants':[]} ;
                }
            } else {
                if( verbose ) { console.log( "   creating new session" ) ; }
                collaborations[session_id] = {'cached_instructions':[], 'participants':[]} ;
            }
        }
        collaborations[session_id]['participants'].push( socket.id ) ;
    
    
        socket.on('change', function( delta ) {
            if( verbose ) { console.log( "change " + socket_id_to_session_id[socket.id] + " " + delta ) ; }
            if( socket_id_to_session_id[socket.id] in collaborations ) {
                collaborations[socket_id_to_session_id[socket.id]]['cached_instructions'].push( ["change", delta, Date.now()] ) ;
                for( var i=0 ; i<collaborations[session_id]['participants'].length ; i++ ) {
                    if( socket.id!=collaborations[session_id]['participants'][i] ) {
                        io.sockets.socket(collaborations[session_id]['participants'][i]).emit( "change", delta ) ;
                    }
                }
            } else {
                if( verbose ) { console.log( "WARNING: could not tie socket_id to any collaboration" ) ; }
            }
        });
    
    
        socket.on('change_selection', function( selections ) {
            if( verbose ) { console.log( "change_selection " + socket_id_to_session_id[socket.id] + " " + selections ) ; }
            if( socket_id_to_session_id[socket.id] in collaborations ) {
                for( var i=0 ; i<collaborations[session_id]['participants'].length ; i++ ) {
                    if( socket.id!=collaborations[session_id]['participants'][i] ) {
                        io.sockets.socket(collaborations[session_id]['participants'][i]).emit( "change_selection", selections ) ;
                    }
                }
            } else {
                if( verbose ) { console.log( "WARNING: could not tie socket_id to any collaboration" ) ; }
            }
        });
    
    
        socket.on('clear_buffer', function() {
            if( verbose ) { console.log( "clear_buffer " + socket_id_to_session_id[socket.id] ) ; }
            if( socket_id_to_session_id[socket.id] in collaborations ) {
                collaborations[socket_id_to_session_id[socket.id]]['cached_instructions'] = [] ;
                for( var i=0 ; i<collaborations[session_id]['participants'].length ; i++ ) {
                    if( socket.id!=collaborations[session_id]['participants'][i] ) {
                        io.sockets.socket(collaborations[session_id]['participants'][i]).emit( "clear_buffer" ) ;
                    }
                }
            } else {
                if( verbose ) { console.log( "WARNING: could not tie socket_id to any collaboration" ) ; }
            }
        });
    
    
        socket.on('dump_buffer', function() {
            if( verbose ) { console.log( "dump_buffer " + socket_id_to_session_id[socket.id] ) ; }
            if( socket_id_to_session_id[socket.id] in collaborations ) {
                for( var i=0 ; i<collaborations[socket_id_to_session_id[socket.id]]['cached_instructions'].length ; i++ ) {
                    socket.emit( collaborations[socket_id_to_session_id[socket.id]]['cached_instructions'][i][0], collaborations[socket_id_to_session_id[socket.id]]['cached_instructions'][i][1] ) ;
                }
            } else {
                if( verbose ) { console.log( "WARNING: could not tie socket_id to any collaboration" ) ; }
            }
            socket.emit( "buffer_dumped" ) ;
        });
    
    
        socket.on('disconnect', function () {
            console.log( socket_id_to_session_id[socket.id] + " disconnected" ) ;
            var found_and_removed = false ;
            if( socket_id_to_session_id[socket.id] in collaborations ) {
                //var index = collaborations[socket_id_to_session_id[socket.id]].participants.indexOf( socket.id ) ;
                var index = collaborations[socket_id_to_session_id[socket.id]]['participants'].indexOf( socket.id ) ;
                if( index>-1 ) {
                    //collaborations[socket_id_to_session_id[socket.id]].participants.splice( index, 1 ) ;
                    collaborations[socket_id_to_session_id[socket.id]]['participants'].splice( index, 1 ) ;
                    found_and_removed = true ;
                    //if( collaborations[socket_id_to_session_id[socket.id]].participants.length==0 ) {
                    if( collaborations[socket_id_to_session_id[socket.id]]['participants'].length==0 ) {
                        if( verbose ) { console.log( "last participant in collaboration, committing to disk & removing from memory" ) ; }
                        // no one is left in this session, we commit it to disk & remove it from memory
                        write_file( session_directory + "/" + socket_id_to_session_id[socket.id], JSON.stringify(collaborations[socket_id_to_session_id[socket.id]]['cached_instructions']) ) ;
                        delete collaborations[socket_id_to_session_id[socket.id]] ;
                    }
                }
            }
            if( !found_and_removed ) {
                console.log( "WARNING: could not tie socket_id to any collaboration" ) ;
            }
            console.log( collaborations ) ;
        });
    
    });
    
    
    function write_file( path, data ) {
        try {
            fs.writeFileSync( path, data ) ;
            return true ;
        } catch( e ) {
            return false ;
        }
    }
    
    
    function read_file( path ) {
        try {
            var data = fs.readFileSync( path ) ;
            return data ;
        } catch( e ) {
            return false
        }
    }
    
    
    function file_exists( path ) {
        try {
            stats = fs.lstatSync( path ) ;
            if (stats.isFile()) {
                return true ;
            }
        } catch( e ) {
            return false ;
        }
        // we should not reach that point
        return false ;
    }
    
  • Using Google’s APIs with Python scripts

    I was never able to find centralized, succinct, example-based documentation for doing domain-delegated API calls with Google. Hopefully this is exactly that documentation, assembled from all the pieces I gathered along the way.

    Service Account Creation

    1. Go to https://console.developers.google.com/start and create a new project.
    2. Call it whatever you want.
    3. Enable the right APIs that this project will use. We’ll do the Drive API for the purpose of this testing.
    4. Go to the “Credentials” screen.
    5. Create a “Service Account Key”.
    6. Make it a “New service account” and give it a name.
    7. Download the JSON file that follows.
       This file contains the credentials for the account you just created; treat it with care, as anyone getting their hands on it can authenticate as the account. This is especially critical since we are about to grant domain delegation to the account we created: anyone with access to this file is essentially able to run any API call masquerading as anyone in your Google Apps domain. This is, for all intents and purposes, a root account.

    Domain Delegation

    1. Back on the “Credentials” screen, click “Manage service accounts”.
    2. Edit the service account you just created.
    3. Check the “Enable Google Apps Domain-wide Delegation” checkbox and click “Save”.
       Google at this point needs a product name for the consent screen, so be it.
    4. At this point, if everything went well, when you go back to the “Credentials” screen, you will notice that Google created an “OAuth 2.0 client ID” that is paired with the service account you created.

    Domain delegation continued, configuring API client access

    Granting domain delegation to the service account as we just did isn’t enough; we now need to specify the scopes for which the account can request delegated access.

    1. Go to your Google Apps domain’s Admin console.
    2. Select the Security tab.
    3. Click “Show more” -> “Advanced Settings”.
    4. Click “Manage API client access”.
    5. In the “Client Name” field, use the “client_id” field from the json file you downloaded earlier. You can get it via the following command:
      cat ~/Downloads/*.json | grep client_id | cut -d '"' -f4

      In the “One or More API Scopes” field use the following scope:

      https://www.googleapis.com/auth/drive

      If you want to allow more scopes, comma-separate them. This interface is very finicky: only enter URLs and don’t copy/paste the descriptions that show up for previous entries. There also might be a few minutes’ delay between you granting a scope and it taking effect.

    6. Click “Authorize”; you should get a new entry that looks like this:
      If you need to find the URL for a scope, this link is helpful.

    Scripting & OAuth 2.0 authentication

    Okay! The account is all set up on the Google side of things; let’s write a Python script to use it. Here’s your starting point:

    google_api_script.py

    This script contains all the functions to get you started with making API calls to Google with Python. It isn’t the simplest form it could be presented in, but it solves a few issues right off the bat:

    • All Google interactions are in the “google_api” class; this allows for efficient use of tokens. When “subbing as” a user in your domain, the class will keep track of access tokens for users and only regenerate them when they expire.
    • Exponential back-off is baked in and generalized to anything unusual coming back from Google (based on observation).
    • SIGINT is handled properly.

    Before running the script, you may need to:

    sudo apt-get update && sudo apt-get install python-pycurl

    Running the script is done as such:

    ./google_api_script.py /path/to/json/file/you/downloaded/earlier.json account.to.subas@your.apps.domain

    It will simply run the “get about” Drive API call and print the result. This should allow you to verify that the call was indeed executed as the account you specified in the arguments.

    Once you’ve run this script once, the sky is the limit: all the Drive API calls can be added to it based on the get_about function.

    Important note on scopes: the same way you granted domain delegation to certain comma-separated scopes in the Google Apps Admin Console earlier, this script needs to reflect the scopes that are being accessed, and the same scopes (space-separated this time) need to be part of your JWT claim set (line 78 of the script). So if you need to make calls against more than just Drive, make sure to update the scopes in both locations or your calls won’t work.
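
    For reference, the claim set in question is just a small JSON structure. Here is a sketch of what it generally contains (the values are illustrative, not taken from the script, and the exact token endpoint used as the audience may differ):

    // illustrative JWT claim set for domain-wide delegation
    var jwt_claim_set = {
        "iss":   "your-service-account@your-project.iam.gserviceaccount.com", // service account email from the JSON file
        "sub":   "account.to.subas@your.apps.domain",                         // the user being masqueraded as
        "scope": "https://www.googleapis.com/auth/drive",                     // space-separated list, must match the delegated scopes
        "aud":   "https://www.googleapis.com/oauth2/v4/token",                // Google's OAuth 2.0 token endpoint
        "iat":   1458057600,                                                  // issued at (seconds since epoch)
        "exp":   1458061200                                                   // expiry, at most one hour later
    } ;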

    More scopes & more functions

    Taking it one step further with the Google Enforcer. This is the project that led me down the path of writing my own class to handle Google API calls. While it is not quite ready for public use, I’m publishing the project here as it is an excellent reference for making all kinds of other Google API calls: some doing POSTs, PUTs, DELETEs, some implementing paging, et cetera.

    Download:
    google_drive_permission_enforcer_1.0.tar.gz

    The purpose of this project is to enforce permissions on a directory tree on the fly. There is an extravagant number of gotchas to figure out to do this. If you are interested in implementing it in your organization, please leave a comment and I can either help or get it ready for public use, depending on interest.

    This project works towards the same end as AODocs: making Google Drive’s permissions less insane than they are by default.

    Here are the scopes I have enabled for domain delegation for this project.

    Problems addressed by this project:

    • domain account “subbing as” other users AKA masquerading
    • a myriad of Google Drive API calls focused on file permissions
    • watching for changes
    • crawling through directory hierarchy
    • threading of processes to quickly set the right permissions
    • disable re-sharing of files
    • access token refreshing and handling
    • exponential back-off
  • IPv6 link-local address to MAC address online converter

    The converter

    It can also be addressed directly via:
    http://ben.akrin.com/ipv6_link_local_to_mac_address_converter/?mode=api&ipv6=fe80::5074:f2ff:feb1:a87f
    for all your API needs.

    Description

    This converter was implemented per Dave Russell’s suggestion as a follow-up to the MAC address to IPv6 link-local address online converter. If you are interested in the steps behind this conversion, they are simply the reverse of the original MAC->IPv6 converter.

    Please note that of the various IPv6 notations, the one this script will expect is fe80::xxxx:xxxx:xxxx:xxxx.
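
    For those curious, here is a minimal JavaScript sketch of the reversal (the function name and formatting are illustrative; this is not the code powering the converter above):

    // converts an fe80::xxxx:xxff:fexx:xxxx link-local address back to a MAC address
    function ipv6_link_local_to_mac( ipv6 ) {
        // keep only the interface identifier and split it into 16-bit groups
        var groups = ipv6.toLowerCase().replace( "fe80::", "" ).split( ":" ) ;
        // expand each group into 2 bytes
        var bytes = [] ;
        for( var i=0 ; i<groups.length ; i++ ) {
            var group = ( "0000" + groups[i] ).slice( -4 ) ;
            bytes.push( parseInt(group.substring(0, 2), 16) ) ;
            bytes.push( parseInt(group.substring(2, 4), 16) ) ;
        }
        // drop the ff:fe that was inserted in the middle of the EUI-64
        bytes.splice( 3, 2 ) ;
        // flip the universal/local bit of the first byte back
        bytes[0] ^= 0x02 ;
        // reassemble as a colon separated MAC address
        return bytes.map( function(b) { return ("0" + b.toString(16)).slice(-2) ; } ).join( ":" ) ;
    }

    console.log( ipv6_link_local_to_mac("fe80::5074:f2ff:feb1:a87f") ) ; // prints 52:74:f2:b1:a8:7f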

  • Remove all Exif data from JPEGs recursively

    Because I always spend 20 minutes googling it

    apt-get update && apt-get install libimage-exiftool-perl
    find /var/www -type f -iname "*.jpg" -exec exiftool -all= {} \;