Why a password manager is a good idea

Do you remember your password for this site?

If you are like me, you use a lot of different websites and services on the net, with a corresponding number of passwords.

There are a few common strategies for handling this situation:

  1. re-use the same password over and over. This is the most dangerous one.
  2. use several passwords, usually with decreasing levels of security. For example, a super secure password for your email, and then simpler, less secure passwords for everything else. Usually a pool of five or six passwords.
  3. keep a trusted document on your computer with your passwords in clear text.

If any of these scenarios looks familiar, it’s time to revamp what you are doing and change approach.

Let me introduce you to Clipperz:

Online password manager with client side encryption

Clipperz is an open source online password manager which knows nothing about your data and sports a client-side encryption system.

What does it mean?

It means that the encryption is done on the client (in your browser, via JavaScript), and only the encrypted data is sent to the server to be stored. So if somebody hacks the servers, they will get some encrypted nonsense which they cannot decode without your passphrase.
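
This is not Clipperz’s actual scheme, but just to make the idea concrete, here is a rough command-line equivalent of client-side encryption: the secret is encrypted locally with a passphrase, and only the resulting ciphertext would ever leave your machine.

# Encrypt a note locally; only the base64 ciphertext would be uploaded
echo "my secret note" | openssl enc -aes-256-cbc -salt -pass pass:'my passphrase' -base64 > ciphertext.txt
# Without the passphrase, the stored copy is just encrypted nonsense
cat ciphertext.txt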

To sign up you need to pick a username and a passphrase. The only catch is that, because Clipperz knows nothing about you, there is no way to recover the passphrase. So if you forget it, it’s gone.

Once you have set up your account and logged in, you can record any type of information that you want to keep secret and secure.

Classic entry for a new Clipperz record

For example, if you have just created an account on a new website, you can record the URL, the username and the password you used. If you want, you can use the password icon on the right to generate a new random password. This is extremely handy, because all your passwords will be random, and if somebody manages to steal the passwords from that website, you do not have to worry: you just generate a new one and change it!

Once you have saved it, that record will be available online from any device: just go to Clipperz again and log in. Additionally, there is a search and tagging system available, and you can also keep a read-only backup of Clipperz on your computer.

It’s been quite a long time since I bit the bullet and started to use a password manager, and I have never regretted it. Moreover, I think Clipperz is extremely good and I am extremely happy with it.

Take it for a ride, one passphrase to rule them all!

Recovering bitcoins from old wallets

To the moon and back?

Bitcoin, and the blockchain specifically, is a pretty cool technology. The price of a bitcoin, as shown in the image above, is still in flux. That’s a euphemism for a bloody roller-coaster. 🙂

Anyway, this post is about something connected, but not entirely the same. This post is about recovering some bitcoins which I had in an old wallet, and which I thought were going to stay there.

Some background info

When I was involved with coinduit I used to keep some bitcoins (fractions of a bitcoin, of course) in a Mycelium wallet on my Android mobile phone. Unfortunately my phone developed a rather strange problem: even when connected to the charger, it was unable to charge. This meant I had a small amount of time to transfer all my bitcoins and take a backup of my existing wallet. I tried to do the 12-word backup with Mycelium, but didn’t manage it. However, I did manage to export the private keys of the three accounts that I had created at that time…

What happened

Whoever has the private key of a bitcoin address owns the address, therefore keeping the private key secure is paramount. The private key is necessary to sign a transaction, which is how bitcoins get transferred from one address to another. Basically, without the private key you can’t move the bitcoins out of a bitcoin address.

So before my mobile ran out of juice, I managed to transfer some coins to a new wallet I had just opened on blockchain.info. For some reason that I do not remember, I didn’t send all of them, and some millibitcoins stayed behind. I think my phone powered off just after the transaction. That was basically the swan song for my phone.

I sent the phone to be repaired, but, as usual, they did a factory reset and everything that was stored on the phone was gone.

Fast forward to today. I got a new phone a while ago, and I decided to see if I could recover the coins.

It was super easy!

I just scanned the three private keys into Mycelium, regaining all the bitcoins that were left there. As I said at the beginning, having the private key makes you the owner of those bitcoins, or at least gives you the power to move them to an address you control. So I transferred the bitcoins from the old address to the new hierarchical deterministic (HD) one that comes with the latest Mycelium.

After that, I logged into blockchain.info and sent the bitcoins from that address to the new one as well. Now all the bitcoins are once again on my device and I am in full control.

This time I managed to back up the seed needed to recreate the HD Mycelium wallet, so next time I have a problem I just have to recreate the addresses using the 12 random words and I’m sorted.

I’m using Clipperz as a password manager to store all these details, so the solution is pretty secure.

So, yeah, I’m pretty pleased with bitcoins and the ability to rescue them. Get some if you haven’t so far.

Packaging Neuronvisio for conda

New Neuronvisio website

Neuronvisio is a piece of software that I wrote quite a long time ago to visualize computational models of neurons in 3D. At the time when I was actively developing it, a few services and tools did not exist yet:

  1. conda was not available (a kickass package manager that can also deal with binary dependencies)
  2. Read the Docs was not available (auto-updated docs on each commit)
  3. GitHub Pages didn’t have nice themes available (it existed, but you had to do all the heavy lifting; I was hosting the docs there, and they were updated manually)

To have a smooth way to release and use the software, I was using Paver as a management library for the package, which served me very well until it broke, making Neuronvisio not installable via pip anymore. Therefore I promised myself that, as soon as I had a bit of time, I was going to restructure the thing and make Neuronvisio installable via conda, automatically pulling in all the dependencies needed to have a proper environment working out of the box. Because that would be nice.

Read the Docs and GitHub Pages

This one was relatively easy. The Neuronvisio docs have always been built using Sphinx, so hosting them on Read the Docs was going to be trivial. The idea was therefore to point neuronvisio.org to neuronvisio.rtfd.org and the job was done.

Not so fast!

So began the classic yak shaving, which you can read about here, or watch in the gif below:

Yak shaving: recursively solving problems, with the classic outcome that your last problem is miles away from where you started.

It turns out that an apex domain cannot point to a subdomain via a CNAME record (foo.com cannot resolve to zap.blah.com), because the DNS protocol does not like it and the internet will burn (or email will get lost, which is pretty much the same problem), so you can only point a subdomain (zip.foo.com) to another subdomain (zap.blah.com).
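
To make the constraint concrete, this is roughly what it means in DNS terms (the docs. subdomain below is just an illustration, not necessarily what I ended up configuring):

# A subdomain can carry a CNAME record pointing at another hostname:
dig +short CNAME docs.neuronvisio.org
# The apex cannot use a CNAME (it already has to carry SOA/NS records),
# so neuronvisio.org has to resolve through plain A records instead:
dig +short A neuronvisio.org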

Therefore my original idea, to use the Sphinx-generated website as the entry point, was not a possibility. I could still point neuronvisio.org to whatever I was hosting on the gh-pages branch of the Neuronvisio repo. It couldn’t be the docs, because I wanted those automatically updated, so I had to design some kind of presentation website for the software. As I said, GitHub Pages now sports some cool themes, so I picked one and just reused some bits from the intro page.

At the end of this, I had a Read the Docs hook which rebuilt the docs on the fly at each commit, with no manual intervention required, and a presentation website written in Markdown using the GitHub Pages infrastructure, everything hosted, responsive, and with the proper domain set up. Note that I hadn’t even started on the package yet. Yeah \o/.

Creating the package

To create the conda package for Neuronvisio I had to write the meta.yaml and the build.sh. This was pretty easy given that Neuronvisio is a Python package and was already using setup.py. The docs are good, and by googling around, with a lot of trial and error, I got the package done in no (well, not too much) time.
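
Once the recipe was in place, the build-and-test loop boiled down to a couple of commands, roughly like this (the conda-recipe directory name here is just an example, and you need the conda-build package installed):

# Build the package from the recipe directory
conda build conda-recipe
# Install the freshly built package from the local channel to test it
conda install --use-local neuronvisio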

Solving the dependencies

Neuronvisio has lots of dependencies, but most of them were already packaged for conda. The only big dependencies I was missing were the NEURON package and the InterViews library. So I created a PR on the community-maintained conda-recipes repository. As you can see from the PR and the commit, this was not easy at all; it was super complicated.

It turned out to be impossible to make a proper package for NEURON that works out of the box. What we’ve got so far is support for Python and InterViews out of the box, but not hoc support. This is due to the way NEURON figures out the prefix of the hoc files at compilation time: because of the relocation done by conda when the package is installed, the prefix ends up differing, and it is not easy to patch.

Anyway, there is a workaround: export the $NEURONHOME environment variable and you are good to go.
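
Something along these lines, where the path is only an example and depends on where conda put the NEURON files in your environment:

# Hypothetical path: point NEURONHOME at the nrn data directory inside your conda env
export NEURONHOME=$HOME/miniconda/envs/neuro/share/nrn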

After all this, a shiny new release of Neuronvisio is available (0.9.1), whose goal is to make the installation a bit easier and get all the dependencies ready to go with one command.

Happy installing.

Running Google AdSense for 4 months, a report

I was always curious to see how running ads on my blog would turn out, how much money I was going to make, and how the amount of traffic received would impact these numbers.

So I decided to turn them on after I moved the blog to my own server. I went for the classic option and installed the Google AdSense plugin. Given that I now pay for the server, I was wondering if I could bring the blog to a self-sufficient state, i.e. making enough money to pay for its own server. (The server is a $5/month DigitalOcean instance, which also runs other little hobby projects, so it’s not that expensive.)

Let’s see the numbers

Visitors on Train of Thoughts on the same period covered by the ads

As you can see from the graphs, this blog gets around 120 sessions per day, mostly new users coming from Google, with a massive drop during the weekend.

Most of this traffic is made up of technical users who are looking for one specific post. They tend to read it and then go about their own business.

In the same period, this is what the Google AdSense income looked like:

Google AdSense gain during the same period

The estimated earnings are the interesting part: running these ads on this blog has added up to a whopping £6.32. Considering that Google only pays out when £60 is reached, I can expect to see the payment in more or less 3 years. W00t?

I was wondering if this is because everybody runs AdBlock and the ads are always blocked. To find out, I added a plugin to keep track of it, and you can see below what it looks like:

40% with AdBlock on! And no one deactivates it!

From the data I’ve got, it seems 40% of visitors have AdBlock on, and so far no one has read my message and decided to whitelist the website. It can be concluded that 60 sessions (the ones without AdBlock running), reached only during peak days, do not bring any kind of decent income.

I had ads at the top of the posts, in the sidebar, and also between the posts. They were very prominent and really annoying, but I thought they were going to pay for the server, so they were a necessary evil. I guess we can conclude that this is not the case.

Different strategy

Given this situation, I’ve decided to cut the ads back severely and leave only one in the sidebar, with colours that blend into the site so that it hopefully does not look too alien.

I do not expect people to click on it, or the revenue to increase, but I am happier about the state of the blog, with less clutter and visual noise, and a more gratifying and pleasant browsing experience. We’ll see how it goes.

Happy reading!

How to transfer files locally

Transferring files is still hard in 2015. And slow.

When I decided to go ahead with my “new computer”, I had the classic problem: transferring all my data from the old computer to the new one.

So I installed an SSH server on my old computer, connected both computers to the same wireless network, and launched an rsync to recursively copy my whole home directory from the old laptop to the new one. Not so fast, cowboy!
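
For reference, the command was something along these lines (hostname and paths are placeholders, not the exact ones I used):

# Recursively copy the whole home directory over SSH, preserving attributes
rsync -avz old-laptop.local:/home/myuser/ /home/myuser/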

Unfortunately this did not work as expected, for a series of reasons:

  • the packets were continuously dropped by the router: it seems the route to the host was unavailable at times, and rsync kept stalling
  • re-launching the command overwrote all the files in my home directory; however, my old computer was running 12.04 LTS while this one is on 14.04, so every time a program upgraded some of its preferences they were overwritten by rsync, and then, as soon as the program was launched again, the files changed once more

So I needed a different approach.

I connected the two computers with a cable, created two wired connections, gave the two computers two different IPs, and used that link to do the transfer (once I had switched off the wireless). Win!!

The details of how to do it are in this Stack Overflow answer, so I won’t repeat them here.
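
Just to give an idea of the gist without repeating the whole answer (interface names and addresses below are made up):

# On the old laptop (assuming the wired interface is eth0)
sudo ip addr add 192.168.2.1/24 dev eth0
# On the new laptop
sudo ip addr add 192.168.2.2/24 dev eth0
# Then pull the data over the cable from the new laptop
rsync -avz 192.168.2.1:/home/myuser/ /home/myuser/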

Go ahead and transfer your files fast!

Running Ubuntu on an old Mac Book Pro

Ubuntu on a Mac, because of reasons

Prologue

My computer has served me well up to now, however it has started to show its age. It is an old Dell XPS 13-inch from 2008. The… ehm, beast has one CPU, virtualized to two, and a whopping 4 GB of RAM (which is not bad). However, the right hinge cracked not too long ago, and it was becoming too slow when trying to play with new toys (Docker, I’m looking at you).

This, together with the fact that the operating temperature was always around 80–90 degrees, had transformed it into a very cumbersome machine to work on. Maybe an external monitor, keyboard and mouse could have improved the situation a little, however I tend to always work on the go, and I do not have a proper desk, mostly working in the kitchen.

Therefore I started to look at the market for a replacement. The problem was that I wanted a unicorn laptop, which does not exist due to the laws of physics.

Let me explain: I wanted the latest badass video card from NVIDIA, the craziest Intel CPU, and a laptop that also runs both quiet and cool.

Now, these sound like pretty insane specs, and they are. There are laptops out there, especially gaming ones, that have super powerful CPUs and video cards, however they run pretty hot and the fans are always on, making them quite noisy and not exactly a pleasure to work with.

On top of this, the battery also takes a beating, due to the extra power used by these hungry components.

Therefore nothing was fitting the bill.

Ubuntu on Mac Book Pro

Given that I was not really falling in love with any computer out there, I started to look for other solutions. It turned out that my girlfriend had an old MacBook Pro (2010) that she was not using any more, because she had upgraded to a MacBook Air a few years ago.

This left a decently powerful computer (4 virtualized CPUs, 8 GB of RAM) available, so I decided to test drive it.

Using OS X was never an option because: a) I’m not a fan, b) I really like Linux and the freedom that comes with it, c) it’s impossible to get it to do what you need it to do.

I guess having used Linux for more than 15 years as my primary system, and loving it for development and data science, has also influenced my view on this.

A few constraints I had before attacking the problem:

  1. the OS X partition must remain and still be usable. This was not a must, but a nice-to-have
  2. the data attached to that OS should be preserved, and having access to it from OS X would be the best solution

We are going for dual boot (oh yeah!!!), without blowing up the whole disk (oh double yeah!!!).

This laptop sports a 500 GB hard drive, and the total amount of space used by OS X plus the data is around 150 GB, so I was able to carve out more or less 350 GB to use for Ubuntu as I pleased.

How I did it:

  • I installed rEFIt to manage the dual boot (yep, it’s been abandoned, but it does the job)
  • I installed Ubuntu 14.04 from a live CD (you need to press the left Alt key, also known as the Option key, to get the machine to boot from the CD)
  • I tried to use the NVIDIA driver, failed badly, gave up and stuck with the open source nouveau driver, which does a pretty decent job.

For the partitioning I followed the same strategy I wrote about here.

Impressions so far

  • The computer is quieter than my previous one, and it’s a lot snappier. The fans do work, once you have installed the macfanctld package (see the sketch after this list).
  • The keyboard is nice, however I’m not used to the placement of the Command and Alt keys, so I may look into re-mapping them.
  • The Fn key and the function keys (F1–F12) are inverted. This means that to press Alt-F4 you actually have to press Alt-Fn-F4. Basically, I’m better off clicking close with the mouse.
  • Right click can be obtained by touching the trackpad with two fingers at the same time, ’cause we’ve got only one button.
  • The layout I picked does not really match the physical keyboard anyway, but I do not care, ’cause I’m using my muscle memory and I’m doing just fine (some of the printed symbols, like the “, are not where the keys say they are, for example).
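
A possible way to address the fan and keyboard points above; treat it as a sketch rather than what I actually settled on, since I am still experimenting:

# Fan control daemon (from the Ubuntu archive or the mactel-support PPA, depending on your release)
sudo apt-get install macfanctld
# Swap Alt and Command (Command shows up as the Super key under Linux)
setxkbmap -option altwin:swap_alt_win
# Make F1-F12 the primary behaviour of the function keys (hold Fn for the media keys)
echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode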

So I’ve gained speed at the cost of a slightly odd keyboard and right click, which I need to relearn (there is no other option for the right click), and maybe re-map some keys. I guess if I had been using this layout all along, these would not be an issue.

I’ll keep it, for now.

Upgrade to git 2.5.0 on Ubuntu

Git, kickass distributed VCS

Running Ubuntu 12.04 on a 2008 laptop (which I should replace, but I haven’t found a decent replacement yet) means that sometimes I’m not getting the latest software by default.
For example, I just figured out that my git version was 1.9.1, while the latest available at the time of writing is 2.5.0, so I decided to upgrade.

Conveniently, there is a PPA that provides the latest and greatest version of git for Ubuntu. Enable it and you will have the latest git running on your system:

sudo add-apt-repository ppa:git-core/ppa
sudo apt-get update
sudo apt-get install git

Check that you have the latest version:

$ git --version
git version 2.5.0

Now the big question: is upgrading really necessary? I do not really know. However, as someone who creates software, I can tell you that running the most up-to-date system and software is, in most cases, a good idea.

Pandas best kept secret

By default, pandas limits the printing width to 80 characters. This is good and suits most use cases. However, sometimes you have either very long names for your dataframe columns, or lots of columns, which results in your dataframe being split across several lines while half of your big screen is not used at all. That’s quite a lot of real estate thrown away for no good reason, and it makes the output a tad more complicated to read.

You can live with it, and all will be fine, or you can change it!
I always knew there was an easy way to change this and avoid having the line split onto a new line. Today I researched it and found it, and I am sharing it here so I remember it!

Just import the module and bump the width to 180 (the default is 80):

import pandas as pd
pd.options.display.width = 180


The result is pretty cool.

Consider this dataframe:

import pandas as pd

df = pd.DataFrame({
    "very_long_column_name_for_example_purposes1": [1, 2, 3],
    "very_long_column_name_for_example_purposes2": [4, 5, 6],
    "very_long_column_name_for_example_purposes3": [1, 2, 3],
})

You can go from this:

pandas dataframe 80 width


To this (click on it to see it in full size):

Pandas dataframe with 180 width

Upgrading dokku to 0.3.22: some gotchas

but then I write about it

I’ve upgraded dokku to the latest master release, to make sure I was running the most recent version.

The reason for the upgrade was that I wanted to install the supervisord plugin, so that when I have to reboot my server due to an upgrade, all my applications come back to life automatically.

After the dokku upgrade, all my containers were down, so I launched the command to rebuild all of them:

dokku ps:rebuildall

Unfortunately this didn’t work as expected.

My web containers (running three apps: Django/Python, Flask/Python, WordPress/PHP) got deployed as expected, but my databases did not come back to life.

The two plugins I am using to run my databases are: dokku-pg-plugin and dokku-md-plugin.

Neither plugin offers a clear way to restart the database containers, but I found a workaround that worked for me. It’s different for each plugin.

For MariaDB you have to pretend to re-create the database, which will use your old database container and just re-attach to it:

dokku mariadb:create <olddbname>

For PostgreSQL, instead, you have to re-link the old database:

dokku postgresql:link <myapp> <mydb>

Each of these commands should trigger an instant redeploy, and your application should be back online.

One thing to know: if you stop a command with Control-C, you may leave your application in a blocked state. If you then run a rebuild or any other command, you may get a message along the lines of “Error: your application is locked”. To get rid of it, go on your server and blow away the /home/dokku/app_name/.build.lock file.
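
Something like this, replacing app_name with your actual application name (and double-checking the path, since as I say below I am quoting it from memory):

# Remove the stale lock file left behind by the interrupted command
sudo rm /home/dokku/app_name/.build.lock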

Watch out: the name of the file and/or the error message could be different, I’m just recalling it from memory.

Switch private repo from github to gitlab

Hello gitlab

Github: the good part

GitHub is awesome for open source software. The collaboration, the audience, and the integrations offered right now (July 2015) are very good.

You want your open source projects to be on GitHub, because of SEO and the ability to have them found. The features offered, like the documentation integration, Pull Requests and so forth, are just too good. I have several projects there, and you can browse them here.

Github: the expensive part

However, if you are also looking to host your private repos there, that’s where GitHub stops being what you are looking for.

The major problem is their pricing structure. The micro plan is $7 for 5 repos, and then it’s $12 for 10. It gets expensive very quickly.

Until today I paid for a micro plan. However, yesterday I started another project, created a repo for it, and then wanted to push it online as private. But it was my sixth repo. Either I was going to open source it, or I had to upgrade my plan from $7 to $12.

All these repos are personal projects that I may not develop any further, which however I don’t want to open source and cannot archive either. The number of collaborators on these repos is 0, or 1 at most. I think if they offered unlimited private repositories with a small number of collaborators, I would have considered sticking with them for my private repos.

Not an option, so I had a look around.

Looking for alternatives: Bitbucket or GitLab?

The big competitor of GitHub is of course Bitbucket. Back in the day Bitbucket supported only Mercurial, but then they also integrated support for git. So you could put your projects there and be happy. Their pricing structure just counts the number of collaborators on a project, so in my case I could have all my repos on the free account.

However, for a while now we have been using a self-hosted GitLab at work, which has served me pretty well so far, and I love the slick integration with GitLab CI.

GitLab is very similar to GitHub and offers similar features: once you know that Pull Requests are called Merge Requests, you’re golden.

The cool thing is that there is a hosted version, where you can have as many private and public repos as you want.

In the end I decided to go for GitLab, due to the integration with GitLab CI, which gives me the ability to run tests for all my private repositories, provided I supply a runner.

Of course all my open source repos will stay on GitHub, and if I open source some of these projects I will just migrate them to GitHub.

As I said, if there were an indie developer price point (unlimited private repos with a small number of collaborators for $7), I would have stayed on GitHub and been happy with that; however, given the circumstances and the automatic CI integration, GitLab is my choice for now.
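
For the record, moving an existing repo over is just a matter of pointing the remote at GitLab and pushing everything; the URL below is a made-up example, use your own namespace and project name:

# Point origin at the new GitLab project and push all branches and tags
git remote set-url origin git@gitlab.com:myuser/myproject.git
git push -u origin --all
git push -u origin --tags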