I like my ASUS C300 Chromebook: it's my go-to laptop. It's cheap (something like 200–250€ a few years ago), light and solid — only plastic here, and no hard drive, which handles transport poorly — and its battery lasts long enough to endure any conference. Yes, even 8am-to-10pm Devoxx days.
I previously owned an HP laptop, five times more expensive than this Chromebook, and I had to change the keyboard every year (keys broke too easily), the screen ribbon twice, and the motherboard, the speaker and the fingerprint sensor (which gave my wrist a shock when typing) once each. Even the case itself broke after 3 years in my backpack. So the Chromebook is definitely a good deal: 5 years now and not even a (deep) scratch!
But earlier this month I discovered this surprising new notification on my Chromebook:
Over the last few weeks, I attended two conferences about the Google cloud. The first one was the Google Cloud Summit, which took place in Paris at the Palais des Congrès on the 19th of October. It started with a keynote by the VP (EMEA) of Google Cloud to welcome everyone and recall some impressive figures. Although Google deploys a new data-center each month, the cloud represents only 5% of the overall workload of companies — yet it accounts for 40% of the Internet traffic! Google has invested $30 billion so far and wants to be ready when the shift happens!
The conference was divided into three tracks, each with its own subjects: Data, DevOps, and Collaboration & Infrastructure. I chose to attend at least one talk on each subject, so I covered various topics: technical ones like Kubernetes and serverless applications with Cloud Functions, machine learning with the Google APIs or with TensorFlow, and even secure applications on ChromeOS and Android. It helped a lot to understand the product catalog and offered a global vision of the Google Cloud Platform. Talks like "Where should I run my code?" summarized it well.
The second event I attended was Google Cloud OnBoard, a training session about Google Cloud, held on the 8th of November at the Salle Wagram, also in Paris. Whereas the Google Cloud Summit audience was about 60% developers (the rest being mainly decision makers), Google Cloud OnBoard was a technical training to (re)discover the Google Cloud Platform and its products. It started with some key concepts, like roles and permissions with IAM, then moved on to key products like App Engine, Compute Engine and Datastore. OnBoard sessions have trained more than 300k people across 50 countries.
Both events were very well planned and organized. Communication was excellent and everything was done to make you enjoy the moment. I mean: no queue to get your badge, the WiFi password printed on the back of it, training notes and drinks set up on every seat of the room, staff everywhere to answer your questions or guide you to your next conference room. Impressive! I really enjoyed those two days, and they helped me a lot to understand the platform and how it can fit my needs. Don't hesitate to create your account and give it a try: there is a free tier on a lot of products and you will be offered $300 of credit for the first year!
A quick blog post about last Thursday at the Microsoft conference center, where the (French) Azure Days #1 took place. As a former insider, I was invited to a full day of conferences about the Microsoft cloud platform, and I was really impressed by how massive it has become: about 80 services, from VMs and hosting to databases and AI. I won't make ads here, but the numbers speak for themselves:
During the day, we learned the basics of the platform: platform-specific wording and concepts, how to provision VMs, deploy applications, set up networking and deal with the Resource Manager. The speakers, 4 architects, were very motivating and knew how to keep people's attention (special mention for Pierre's diversion when a live demonstration temporarily failed). The event will be repeated every 6 to 8 weeks and, good news, it's totally free! Except if, like me, you have to take a day off because your company doesn't think the cloud is important… Anyway, I should be present on the 12th of September for Azure Day #2, and I'm now looking forward to attending the first Google Cloud Summit in Paris this October!
With the latest Raspberry Pi Zero and Raspberry Pi 3, there will be more and more Pis connected to the Internet. As Shodan shows us, there are several thousand Pis running Raspbian with an SSH server enabled. Most of them still have the default pi user (and maybe some of them still have the default password…). And for those who don't know, the pi user is allowed to sudo any command. If you plan to leave your Raspberry Pi connected to the wild Internet, take a few minutes to read this blog post and learn how to create a new, more secure user and remove the old pi one.
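As a minimal sketch of the procedure on a Debian-based Raspbian (the username john is just an example, and the commands must be run as root):

```shell
# Create a replacement user with its own home directory
adduser john
# Grant it sudo rights through the standard Debian sudo group
usermod -aG sudo john
# Only after verifying you can log in AND sudo as john,
# remove the default pi user together with its home directory
deluser --remove-home pi
```

Don't skip the verification step in the middle: if you delete pi before the new user can sudo, you lock yourself out of administration.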
It has been a long time since the previous post, so it's the opportunity to try a new format I call « dev story »: a less verbose kind of post, based more on day-to-day coder life. The story I would like to share is about the port of one of my Firefox add-ons, Scroll Up Folder. I recently made big changes to this project, which I'd like to share: first I had to change its hosting, then I rewrote the whole add-on using a new SDK.
Guess what? I'm left-handed! And I'm sad to see that Google is still not offering a left-handed option to its users. Until now, one solution was to use a custom ROM which allows you to customize the navigation bar. But with the recent Lollipop release, no custom ROM is ready for daily use yet… Moreover, ART, the new runtime, breaks Xposed Framework compatibility. The Xposed Framework was another solution, on a stock rooted ROM, to install a module for navigation bar customization. I recently bought a Nexus 6 (still shipping!) and I don't want to wait for a new solution (Xposed Framework compatibility is not coming soon). Never mind, I'm a developer, so let's do it myself!
The last few weeks have demonstrated how sensitive and valuable personal information is. Companies like eBay, Spotify and AVAST have been hacked and their client databases stolen. Those facts motivated me to host my own Firefox Sync server instead of uploading my data to yet another big cloud company.
Firefox Sync is a solution to store and keep synchronized Firefox data like bookmarks, history, passwords or preferences. Since Firefox 29, a new version of Sync is available (version 1.5). It uses the new Firefox Accounts as its authentication mechanism. The service definition, and the separation between authentication, token and storage servers, allows you to change and plug in new servers on the fly. So you can host your own Sync server without having to worry about auth: it will be provided by the Mozilla servers, which don't store your plain-text passwords or encryption keys. You may check the source code of the authentication server on GitHub, or the Sync protocol, for more details.
The Sync server installation procedure is quite well described by Mozilla. It explains how to get, build, run and test a custom Sync server with the built-in server (a few git and make commands). Once everything works, you can set up your Firefox browser to point at your own server and test with your account and data. For production use, you should bind it to your Apache or Nginx server through a WSGI or Gunicorn module (the built-in server is not intended to be used in a production context).
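Those git and make commands boil down to roughly the following sketch (the exact targets may differ; check the README of the mozilla-services/syncserver repository for the authoritative steps):

```shell
# Fetch the Sync server sources
git clone https://github.com/mozilla-services/syncserver.git
cd syncserver
# Build the server and its dependencies into a local virtualenv
make build
# Run the test suite, then start the built-in development server
make test
make serve
```

Once the built-in server answers, point identity.sync.tokenserver.uri (in about:config) at it to test with your own account.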
In conclusion, I run my own Sync server to store my personal data. The server is lightweight and my data takes less than 10 MB of storage. I enforced security by requiring a client certificate and filtering IP addresses, and I can review every access through the Apache logs. So even if the Firefox Accounts database leaks (and we should consider that it will), an attacker needs to know the location of my server, get a certificate and find a valid IP address before getting access to my Sync server. Given the interest of my data, the risk is very low.
I encourage you to self-host your data as often as possible. Nowadays, it is the real value of people. So take care of it, and thanks to Mozilla for allowing us to do so (hey Google, what about Chrome?).
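That kind of lockdown can be expressed as an Apache 2.4 fragment along these lines (a sketch only: the paths and IP range are placeholders, not my actual setup):

```apache
# Require a client certificate signed by my own CA
SSLVerifyClient require
SSLCACertificateFile /etc/apache2/ssl/client-ca.pem
# Only accept requests from a whitelisted IP range
<Location "/">
    Require ip 203.0.113.0/24
</Location>
```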
Early in the week, I attended the security conferences organized by the AFUP about software security. The main goal of those conferences was to make developers aware of the real dangers of security breaches. The first talk was given by the OWASP organization, a non-profit focused on improving security. Its main key points were:
If your application hasn't been attacked yet, it will be;
If you are aware of the most critical security risks and choose to handle them, you can prevent the biggest part of the coming attacks;
You can handle security risks easily with their documentation and tools, freely available.
The second talk was given by an AFUP member, Christophe Villeneuve, the creator of the elePHPant. The talk focused on how to secure your PHP applications. It took the most common security risks previously described and explained how to prevent them in PHP. He dealt with subjects like escaping database requests, cleaning user input, and risky language-specific features (PHP_SELF, globals, …).
To conclude, the talks were interesting and the networking very pleasant. I would like to thank the speakers for their time and Mozilla for their premises, and I'll end with some pictures of the night; the videos will be online soon.
This week, an Android version of Popcorn Time was released. For those who don't know the project, it's similar to Netflix: you select the movie or series you want to watch and can instantly play it. Unlike Netflix, it's free, based on users' torrent seeds, and illegal in almost all countries. So don't use it and go buy your DVDs instead!
Until now, the existing versions were confined to desktop releases. So be happy, mobile users, your day has come! Except for one thing (or two: which carrier will allow you to download a 1080p movie over P2P?): it's not the official / legacy Popcorn Time team which released the application. What does that mean? An alternative team is releasing the same software under an alternate name, « Time4Popcorn ». But what for?
This week I faced the need to edit commit authors. Due to the new LDAP authentication, I needed to change the way users log on to the server. The side effect is that the manually declared users differ from the LDAP user names. To keep things consistent, I updated the previous commit authors with their new LDAP names. For those who have the same need, I share my script below:
#!/bin/bash
# Subject: Bash script to update commit authors.
# Author: Bruce BUJON (firstname.lastname@example.org)
# Description: This script uses a dictionary (AUTHORS) to replace the commit authors of a Subversion repository.
# Usage: Edit the REPOSITORY_PATH and AUTHORS variables then run the script.

# The repository path
REPOSITORY_PATH=/path/to/repository
# Get the repository head revision
HEAD=$(svn info "$REPOSITORY_PATH" | grep "Revision:" | cut -c11-)
# The authors (keys are original authors, values are replacement authors)
declare -A AUTHORS
AUTHORS=( ["olduser1"]="newuser1" ["olduser2"]="newuser2" )
# Process each revision up to head
for revision in $(seq 1 "$HEAD"); do
    echo -n "Processing revision $revision: "
    # Get the revision author
    author=$(svn propget --revprop svn:author -r "$revision" "$REPOSITORY_PATH")
    # Look up the replacement author in the dictionary
    newauthor=${AUTHORS[$author]}
    # Check if a replacement author is available
    if [[ -n "$newauthor" ]]; then
        # Update the commit author
        output=$(svn propset --revprop svn:author "$newauthor" -r "$revision" "$REPOSITORY_PATH" 2>&1)
        result=$?
        # Check the update status
        if [ $result -ne 0 ]; then
            # An error occurred
            echo "an error occurred! ($output)"
        else
            # Author replaced
            echo "author replaced ($author > $newauthor)."
        fi
    else
        # Author kept
        echo "author kept."
    fi
done
# End of script
echo "$HEAD revisions successfully processed."
Note that you will need the server to allow revision property changes. To do that, ensure the pre-revprop-change hook returns 0, at least for the duration of the maintenance. You don't want your users editing the commit authors and logs to play a joke on you.
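Enabling the hook can be sketched as follows; the repository root is created with mktemp purely for illustration — on a real server, point REPO at your actual repository path:

```shell
# Illustrative repository root (on a real server, use your repository path)
REPO="$(mktemp -d)/repository"
mkdir -p "$REPO/hooks"
# Install a pre-revprop-change hook that accepts every revprop edit;
# remove it (or make it exit 1) once the maintenance is over
cat > "$REPO/hooks/pre-revprop-change" <<'EOF'
#!/bin/sh
exit 0
EOF
# The hook must be executable to be run by the server
chmod +x "$REPO/hooks/pre-revprop-change"
```

Subversion passes the repository path, revision, user, property name and action to the hook; exiting 0 unconditionally accepts them all, which is exactly what we want during the maintenance window and nothing more.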