
  • Mounting a partition from a disk image

So you’ve dd’ed a disk and you would like to mount its partitions from the resulting image file. Easy enough; first:

    fdisk -l -u /path/to/disk.img

This will yield a variation of the following output:

    You must set cylinders.
    You can do this from the extra functions menu.
    
    Disk disk.img: 0 MB, 0 bytes
    255 heads, 63 sectors/track, 0 cylinders, total 0 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000080
    
       Device Boot      Start         End      Blocks   Id  System
    disk.img1              63    15631244     7815591   82  Linux swap / Solaris
    disk.img2   *    15631245   113290379    48829567+  83  Linux
    Partition 2 has different physical/logical endings:
         phys=(1023, 254, 63) logical=(7051, 254, 63)
    disk.img3       113290380   210949514    48829567+  83  Linux
    Partition 3 has different physical/logical beginnings (non-Linux?):
         phys=(1023, 254, 63) logical=(7052, 0, 1)
    Partition 3 has different physical/logical endings:
         phys=(1023, 254, 63) logical=(13130, 254, 63)

The partitions available in the disk image are listed as disk.img1, disk.img2 & disk.img3. Great, pick the one you want to mount and look at where it starts: disk.img2 starts at sector 15631245. Multiply that by the 512-byte sector size: 15631245 * 512 = 8003197440.
Finally, mount the disk image at the offset you just calculated, as such:

    mount -o loop,offset=8003197440 -t auto /path/to/disk.img /mnt/disk_img_partition2
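
If you’d rather let the shell do the multiplication, here’s a minimal variation of the same command, using the start sector read off the fdisk output above:

start=15631245   # start sector of disk.img2, from the fdisk output
mount -o loop,offset=$((start * 512)) -t auto /path/to/disk.img /mnt/disk_img_partition2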

    And done!

  • 2-factor authentication & writing PAM modules for Ubuntu

    Download

    2ndfactor.c

    The problem

    Passwords are often seen as a weak link in the security of today’s I.T. infrastructures. And justifiably so:

• re-usability, which we’re all guilty of, guarantees that credentials compromised on one system can be leveraged on many others. And given the world we live in, password re-use is inevitable; we just have too many accounts in too many places.
• plain text protocols are still used to transmit credentials, and the result is that they are exposed to network sniffing. This is worsened by the rise of wireless usage, which broadcasts the information. Telnet, FTP & HTTP come to mind, but they aren’t the only ones.
• lack of encryption on storage is a flaw that too often makes its way into architecture design. How many databases have we heard about getting hacked & dumped? How many have we not heard about?
• password simplicity & patterns are also factors weakening us against brute-force attacks.

So far, the main countermeasure we’ve seen out there is complexity enforcement. Sometimes IP restriction, or warnings triggered by geographic inconsistencies (Gmail, Facebook). But these barely help alleviate the problem.

    A solution

One hot solution that is making its way into critical systems (banks, sensitive servers) is multi-factor authentication, and by “multi” we’ll stick to 2-factor authentication (2FA) because, well, 3-factor authentication might be getting a little cumbersome :). The goal is to have more than one means of establishing identity. And as much as possible, the means have to be distinct in order to reduce the chances of having both mechanisms compromised.

Let’s see how to implement 2FA on an Ubuntu server for SSH. Ubuntu uses PAM (Pluggable Authentication Modules) for SSH authentication, among other things. PAM’s name speaks for itself: it is composed of many modules that can be added or removed as necessary, and it is pretty easy to write your own module and add it to SSH authentication. After PAM is done with the regular password authentication it already does for SSH, we’ll get it to send an email/SMS with a randomly generated code valid only for this authentication. The user will need access to email/cell phone on top of valid credentials to get in.

    Implementation

Let’s do an ls on /lib/security; this is where the PAM modules reside on Ubuntu.

Let’s go ahead and create our custom module. First, be very careful: we’re messing with authentication, and you risk locking yourself out. A good idea is to keep a couple of sessions open just in case. Go ahead and download the source for our new module.

Take a look at the code; you’ll see that PAM expects things to be laid out in a certain way. That’s fine, all we care about is where to write our custom code. In our case it starts at line 35. As you can see, the module takes 2 parameters: a URL and the size of the code to generate. The URL will be called and passed a code & username. It is this web service that will be in charge of dispatching the code to the user. This step could be done in the module itself, but here we have in mind a centrally managed service in charge of dispatching codes to multiple users.
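
If you’d rather not read the whole file, here is a rough sketch of what the authentication entry point boils down to (illustrative only, not the actual 2ndfactor.c: the code generation is simplified, the libcurl call to base_url is reduced to a comment, and this variant links against libpam for pam_prompt):

#define PAM_SM_AUTH
#include <security/pam_modules.h>
#include <security/pam_ext.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Illustrative helper: fill `code` with `len` random digits.
 * A real module should use a proper CSPRNG, not rand(). */
static void generate_code(char *code, size_t len)
{
    srand((unsigned)time(NULL));
    for (size_t i = 0; i < len; i++)
        code[i] = '0' + rand() % 10;
    code[len] = '\0';
}

PAM_EXTERN int pam_sm_authenticate(pam_handle_t *pamh, int flags,
                                   int argc, const char **argv)
{
    const char *user = NULL;
    char *resp = NULL;
    char code[6];

    if (pam_get_user(pamh, &user, NULL) != PAM_SUCCESS)
        return PAM_AUTH_ERR;

    generate_code(code, 5);

    /* The real module calls the base_url web service with libcurl here,
     * passing `user` and `code` so it can dispatch the code by email/SMS. */

    /* Prompt the user for the code they just received. */
    if (pam_prompt(pamh, PAM_PROMPT_ECHO_OFF, &resp, "Code: ") != PAM_SUCCESS)
        return PAM_AUTH_ERR;

    int ok = resp && strcmp(resp, code) == 0;
    free(resp);
    return ok ? PAM_SUCCESS : PAM_AUTH_ERR;
}

/* PAM requires this entry point to exist even when there is nothing to do. */
PAM_EXTERN int pam_sm_setcred(pam_handle_t *pamh, int flags,
                              int argc, const char **argv)
{
    return PAM_SUCCESS;
}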

    Deploying the code is done as follows:

    gcc -fPIC -lcurl -c 2ndfactor.c
    ld -lcurl -x --shared -o /lib/security/2ndfactor.so 2ndfactor.o
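
Alternatively, gcc can do the compile & link in one step:

gcc -fPIC -shared -o /lib/security/2ndfactor.so 2ndfactor.c -lcurl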

    If you got errors, you probably need to first:

    apt-get update
    apt-get install build-essential libpam0g-dev libcurl4-openssl-dev

    Do an ls on /lib/security again and you should see our new module, yay!

Now let’s edit /etc/pam.d/sshd; this is the file that describes which PAM modules take care of SSH authentication, account & session handling. But we only care about authentication here. The top of the file looks like:

    # PAM configuration for the Secure Shell service
    
    # Read environment variables from /etc/environment and
    # /etc/security/pam_env.conf.
    auth       required     pam_env.so # [1]
    # In Debian 4.0 (etch), locale-related environment variables were moved to
    # /etc/default/locale, so read that as well.
    auth       required     pam_env.so envfile=/etc/default/locale
    
    # Standard Un*x authentication.
    @include common-auth

The common-auth include is probably what takes care of the regular password prompt, so we’ll add our module call right after that line, as such:

    auth       required     2ndfactor.so base_url=http://my.server.com/send_code.php code_size=5

The line is pretty self-descriptive: this is an authentication module that is required (not optional); here’s its name and the parameters to give it.

    send_code.php can be as simple as:

<?php mail( "{$_GET['username']}@mail_server.com", "Login code", $_GET['code'] ) ; ?>

Or as complex as you can make it for a managed, multi-user, multi-server environment.
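
For instance, a slightly more defensive variant (still just a sketch) whitelists its inputs before using them:

<?php
// Sketch only: strip anything unexpected from the GET parameters.
$user = preg_replace('/[^a-z0-9._-]/i', '', $_GET['username']);
$code = preg_replace('/[^0-9]/', '', $_GET['code']);
if ($user !== '' && $code !== '') {
    mail("{$user}@mail_server.com", "Login code", $code);
}
?>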

Lastly, edit /etc/ssh/sshd_config and change ChallengeResponseAuthentication to yes. Do a quick

    /etc/init.d/ssh restart

    for the change to take effect.

That’s it! Try to SSH in: the code will be dispatched, and you will be prompted for it after the usual password. This was tested on Ubuntu 10.04 32b / Ubuntu 10.04.2 64b / Ubuntu 11.04 64b / Ubuntu 12.04 64b.

A few disadvantages of this 2FA implementation are worth mentioning:
• more steps required to get in
• it doesn’t support non-TTY-based applications
• it relies on external services (web service, message delivery), thus adding points of failure. Implementing a fail-safe is to be considered.
• SSH handles key authentication on its own, meaning a successful key auth does not go through PAM and thus never gets a chance to do the 2nd factor. You might want to disable key authentication in sshd’s config (see below).
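
For that last point, disabling key authentication in /etc/ssh/sshd_config is a one-liner:

PubkeyAuthentication no
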
  • Chicken cam

As part of my CCTV installation at home, a cam is placed in the chicken coop. This has very little direct purpose, although it is fun to watch how chickens behave with no humans around. And I guess it is nice to check if we have eggs or if everything is all right.

    Really, this is an experiment towards what our future farm will be like. We’d like for people to be able to watch how their food is grown. Maybe even interact remotely with the animals.

    It’s a little slow due to my 3Mbps connection and the proxying but have fun with it:

    The chicken cam has been disabled as we get ready to move to a new state.

A year and a half after our big jump, it has been re-enabled :)

  • And Minecraft for all

With video games becoming ever more realistic, one game stands out and defies the pursuit of polygons: Minecraft. With the charm of old video games, big fat pixels and a very square geometry, it stands to remind us of a time when video games left room for imagination.

    This is a pig

It has the charm of an old Ultima and the sandbox spirit of Legos. Indeed, there is no purpose in the game other than building random stuff. And getting the materials needed to create will take some serious world exploration. The world generator is very well tuned and will leave you in awe of the majestic landscapes & mystical caves it comes up with.

    Can you feel the wind on your cheeks?

    As with Legos, the possibilities are endless ranging from basic fort construction to advanced engineering.


A slight downside is the choice of technology: Java, which as always sucks the everliving crap out of whatever resources your machine has, both on the client and the server. This is ironic given the simplistic nature of the game; it should have a very low footprint on the system.

    I’m running a server on Akrin, feel free to ask for a whitelist if you feel like building cool things with cool people.

  • Markov chains based random text generation

We’ve already seen how to use Markov chains to generate random words based on the essence of a previously analyzed corpus. Well, the exact same algorithm can be applied to whole texts: the base entities become words instead of letters. I make punctuation part of the entities; this way, sentence flow becomes part of the extracted statistical essence.
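
For the curious, a toy version of the idea looks something like this in Python (a minimal sketch, not the actual generator running on this site; note how punctuation marks are kept as standalone entities):

import random
import re

def build_chain(corpus, order=1):
    # Words AND punctuation become entities, so sentence flow is
    # part of the extracted statistical essence.
    tokens = re.findall(r"[\w']+|[.,!?;:]", corpus)
    chain = {}
    for i in range(len(tokens) - order):
        key = tuple(tokens[i:i + order])
        chain.setdefault(key, []).append(tokens[i + order])
    return chain

def generate(chain, length=50):
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# e.g.: print(generate(build_chain(open("corpus.txt").read())))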

    Feel free to send me ideas of cool corpora to analyze.

    You can play with it here:

  • Tripwiring your linux box

Privilege escalation, trojan’ed SSH daemons, key loggers… While the focus is still mostly on MS platforms, Unix boxes aren’t free of exploits. As they are made popular by Macs and ever more approachable distributions like Ubuntu, they attract more attention. The large share of the server market they represent is a considerable source of information that is mouth-watering to hackers.

A good tool in the fight against ever evolving malware is Tripwire (the open source version, ’cause we’re cheap). It takes the signature of key files on your systems (configuration, binaries) and checks them regularly for changes. Its major strength is that no matter what exploit was used to compromise a certain binary, if that binary is infected, Tripwire will go off. Modern antivirus software looks for specific signatures of known infections, and there are so many of them that it only looks for the ones thought to be in the wild at any given time. It is also in reactive mode against 0-days and usually takes a few days to adjust. Its behavioral analysis methods are based on heuristics and generate too many false positives to be worthwhile.

Tripwire doesn’t care what the infection is; it just goes off if something changed. This is simple and efficient. That said, it should only be one piece of a comprehensive security policy.

    In this article we’ll look at getting it installed and going on Ubuntu in a matter of minutes. You’ll want to be root for all this.

    ——————————————

    First, get the package:

    aptitude install tripwire

    It’ll ask you for the passphrases used to secure itself.

You’ll end up with these config files in /etc/tripwire: site.key, <hostname>-local.key, tw.cfg, twcfg.txt, tw.pol & twpol.txt.

    ——————————————

Edit /etc/tripwire/twpol.txt to define which areas to keep an eye on. A pretty OK default is provided, but it needs some tweaking for Ubuntu and personal preference. I’d publish mine but hey, that’d be pretty stupid. Just keep in mind that you can use an exclamation mark “!” to negate a line. Let’s say you want it to look at /etc but not /etc/shadow (users will want to change their passwords in most cases); you’ll have a rule that looks like this:

{
/etc        -> $(SEC_BIN) ;
! /etc/shadow ;
}

    ——————————————

    When you’re done, run:

    twadmin --create-polfile -S /etc/tripwire/site.key /etc/tripwire/twpol.txt

    This will create the secured policy file based on the text file you just edited.

    ——————————————

The config file (/etc/tripwire/twcfg.txt) can also be edited, but the defaults are sane. When done, run:

    twadmin --create-cfgfile -S /etc/tripwire/site.key /etc/tripwire/twcfg.txt

Again, this creates its secured equivalent.

    ——————————————

Make sure that the created files are only readable/writable by root:

    chmod 600 /etc/tripwire/tw.cfg /etc/tripwire/tw.pol

Good practice dictates that you also remove the plain text configuration files, but you’ll want to keep them around for a little while as you tweak your original config.
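
If you do remove them, they can always be regenerated from their secured equivalents; twadmin can print them back out:

twadmin --print-polfile > /etc/tripwire/twpol.txt
twadmin --print-cfgfile > /etc/tripwire/twcfg.txt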

    ——————————————

    Finally, you can initialize the database with:

    tripwire --init

What this does is take a snapshot (stored under /var/lib/tripwire) of everything you’ve specified in the policy file. If any of it changes, you’ll be notified.

    ——————————————

    The following will run the check for changes manually.

    tripwire --check

When you installed the package with aptitude, /etc/cron.daily/tripwire was automatically created to run this check daily; root will receive a mail report every day.

    ——————————————

If you want to make a change to the base policy:

    edit /etc/tripwire/twpol.txt
    twadmin --create-polfile -S /etc/tripwire/site.key /etc/tripwire/twpol.txt
    tripwire --init

If you want to update the baseline database, for example to acknowledge legitimate changes that happened on the box:

    tripwire --update --twrfile /var/lib/tripwire/report/<hostname>-<date>-<hour>.twr
  • The static experiment – all done!

The little static box is up & running; Akrin has been fully migrated to it. I absolutely love that there are no moving parts in there. The running temperature of the CPU is what worried me the most, since nothing is making the air flow in & out of there. In the heat of heavy processing, the temperature of the CPU doesn’t go above 67 degrees Celsius. That’s pretty all right! Quite frankly this little box handles stress very well, but my point of reference is so obsolete I’m bound to be impressed :).

Picture below: the new & old Akrin together for a soul transfer.

So there you have it: a kick-ass little box, discreet to both the eyes & ears.

  • The static experiment – WTF Habey?

The hardware showed up! So I get busy installing the RAM and the SSD. Habey, in all its generosity, included a SATA data cable with its barebone server. This is cool I guess; I mean, I already have a bunch, and hard disks always come with cables, but I’ll take it.

I proceed to hook up the SSD when I realize that there are no SATA power connectors anywhere.

    Do you see anything?

The problem is that apparently I’m the only person who ever bought one of these systems. There is literally no information available on any site (including www.habeyusa.com) on how to power your hard drives. Even though it has an IDE slot, there is no 4-pin Molex power available either, so no luck hijacking one of those for the SATA SSD.

After careful examination of the motherboard, there is one slot labeled “POWOUT1”. It’s a slot whose shape I haven’t seen for ages. I hope you’re sitting down as you’re about to read this: it is shaped for 3.5″ floppy disk drive power. And that’s the only power that seems tappable for hard drives. Much research on the web yields many 4-pin Molex to SATA cable converters, eventually some floppy power to 4-pin Molex. Ultimately I found just the cable I needed.

You’re reading that right: SATA power 15-pin to FDD (as in Floppy Disk Drive) power 4-pin…

Habey thought to include a standard SATA data cable but not its weird-ass power equivalent. And if you look carefully, SATA power cables have 5 wires; the picture above has only 4. The 3.3V wire has simply been dropped. Doesn’t this affect functionality? (In practice, most drives don’t draw on the 3.3V rail, which is presumably why such adapters get away with it.)

Well fuck everything, I’m not waiting 5 more days for a silly cable. Thankfully we have a master hardware tinkerer at work, and after checking the voltages on the motherboard slot (to confirm that it was indeed FDD power), we cannibalized a couple of old power supplies to come up with a Frankenstein cable.

    TADAAAAAA!!


And it works perfectly. Seriously, Habey: better labeling, a motherboard manual (online or paper), or including the weird-ass cable would have been nice.

    Tomorrow we’ll stress test the box and it’d better take the beating without crashing.

    Thanks to playtool.com for their very helpful resource.

  • The static experiment

Akrin is a server whose soul has been through many iterations of old hardware. It never needed many resources, so I easily got away with $30 PCs bought at the university surplus.

    It currently resides on an aged Pentium IV with just 500MB of RAM and some old IDE hard drive. With the addition of more & more projects (recently: CCTV installation, new sites such as www.blindspotis.com, database intensive Markov chains generation), it’s close to maximum capacity and could use an upgrade.

More than new hardware, I’ve decided it was time to change how computing is done at home: I’m going for no moving parts. This means no fans, no spinning disks and no moving heads.

    What are the advantages?

• no vibrations, not an iota of noise
• no jet-takeoff sound when running heavier computations
• no malfunctioning fans that could result in a fire hazard
• supposedly hardware that is more resistant to shocks
• fanless means less powerful, which in turn means less power consumption

Here’s what I ordered. The main piece is a Habey EPC-6542 fanless barebone; it doesn’t come with RAM or a hard drive. I like the small form factor and the fact that it has 2 NICs. This means it can easily be recycled into a nice router should the experiment fail.

    • Some RAM (DDR2 SODIMM), I went for the max 2GB that the EPC-6542 will support. ($45) link
    • A 2.5″ SATA II 128GB solid state disk (SSD) ($223 – $75 mail in rebate = $148) link

Now, SSDs are pretty expensive compared to traditional hard drives, so it is a high price to pay for no moving parts. But they are also much faster, and because of the CCTV cams recording 24/7, I think the I/O speed gain will have a tremendous overall effect on the server.

Akrin will soon run on $423 of new hardware; this is unprecedented :)

    To be continued…

  • The death of the internet

Let me throw out a few concepts we’ve been hearing about more & more lately:

    • metered bandwidth
    • end of net neutrality
    • content censorship
    • protocol restrictions
    • geographic restrictions
    • wiretapping
    • deep packet inspection
    • malware becoming crimeware
• data leaks
    • DDoS
    • internet kill switch

    The way that we used to see the internet as an unrestricted web of information is changing rapidly. And it looks like the free ride is coming to an end.

Corporations want to dictate our internet usage; politicians don’t understand the issues of a technology from the next generation, and if they do, lobbyist money has a strong convincing power. And quite frankly, your average user has no clue either. What was once a free and unrestricted flow of information is quickly becoming a metered, port/site/protocol-restricted happy network.

    references:

    Traffic discrimination & Net Neutrality

    Comcast’s P2P throttling suit

What was revolutionary about the internet was its lack of boundaries; the world was connected. Since then, the marketing & licensing geniuses have caught on to the fact that it is possible to restrict content by geographic location. Like regions on DVDs, you now cannot consume certain media in certain regions. It is a travesty of the human accomplishment that is the internet, and it inevitably leads to the absurdity that pirated content is easier to consume than legal content.

Organized crime has also caught on: the obnoxious malware & viruses that once spread for fame or installed dumb toolbars are now becoming very targeted at committing crimes, from harvesting financial information to generating DDoS attacks. A black market of stolen information and network hitmen is emerging on an internet that many companies handling your data do not understand. Viruses, much like biological organisms, are becoming polymorphic, with self-defense mechanisms. Their technological advancement clearly shows funded work, as opposed to the classic image of the basement hacker we all have ingrained in our heads.

    references:

    Zeus botnets specialized in harvesting financial data

    Researchers hijack control of the Torpig botnet for 10 days and recover 70 GB of stolen data from 180,000 infections

Governments are starting to play their silly international politics game on this new field, releasing cyber attacks against one another. The amount of information & critical infrastructure facing the great network is making it a strategic field of military and intelligence importance. It is clear that the network in its current state of international openness is an issue to government interests, and we can fully expect to find cyber borders erected in the near future, not unlike the Great Firewall of China, even though this last example has other applications: applications that pertain to opinion control via censoring. China isn’t the only country doing that; Australia is pretty good at it too. And the U.S. is working on creating a presidential “internet kill switch”, you know, just in case people here get sick enough of 2 everlasting wars and 4th amendment tramplings to take to the streets. Egypt has just done it: it shut down internet and cell phone communications during the 2011 protests.

    references:

Stuxnet’s specific targeting of Iran’s SCADA-controlled systems

The Great Firewall of China

Australia’s internet censorship

    Obama’s internet kill switch

    How Egypt shut down the internet

    At a time when Wikileaks is putting to shame governments and corporations, more controls are inevitable.

    So what’s next?

Computers and network devices have become increasingly powerful. So much so that this blog you’re reading is instantiated on an 8-year-old server sitting on a fridge behind a home DSL line. Besides computing & networking power, something else has been growing that you might have heard about: social networks.

I think that one day, a couple of geeks will be tired of the state of the internet and will throw a home-made link between their houses to share what they want, when they want, without getting advertised to, wiretapped, datamined or attacked. This can currently be done with long range wireless devices (WiMAX) or even by adding a layer to the current infrastructure (think VPN). Soon a third geek friend will want in, and provided that he is trusted by the founders, he’ll get in. After a while, adding friends of friends will become too far out of reach for the founders to decide, and they will implement a social, reputation-based system for dealing with users.

And that’s it: you have a social network (in the strictest sense of the term) that grows & corrects itself based on reputation. It will of course be completely decentralized (unlike the internet), which means you will be relaying information for individuals you don’t know, hence the criticality of its reputation element.

This network will eventually be overrun by corporate, mafia & government interests finding ways to abuse the reputation system; it will slowly die and be replaced by the network of another couple of geeks down the road.

    The end.