Safe Browsing in Epiphany

I am pleased to announce that Epiphany users will now benefit from safe browsing support, which can detect and alert users whenever they visit a potentially malicious website. This feature will ship in GNOME 3.28, but those who don’t wish to wait that long can go ahead and build Epiphany from master to benefit from it.

The safe browsing support is enabled by default in Epiphany, but you can always disable it from the preferences dialog by toggling the checkbox under General -> Web Content -> Try to block dangerous websites.

Safe browsing is implemented with the help of Google’s Safe Browsing Update API v4. Here is how it works: the URL’s hash prefix is tested against a local database of unsafe hash prefixes, and if a match is found, the full hashes for that prefix are requested from the Google Safe Browsing server and compared to the URL’s full hash. If the full hashes are equal, the URL is considered unsafe. Of course, all hash prefixes and full hashes are cached for a certain amount of time in order to minimize the number of requests sent to the server. Needless to say, working only with URL hashes brings a big privacy bonus, since Google never knows the actual URLs that clients browse. The full description of the API can be found here.
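To make the flow above concrete, here is a minimal Python sketch of the client-side check. The 4-byte prefix length and the fetch_full_hashes callback are assumptions for illustration only; the real Epiphany implementation is in C and also handles URL canonicalization, caching, and database updates.

```python
import hashlib

def hash_prefix(url, prefix_len=4):
    """SHA-256 the (already canonicalized) URL and keep the first bytes."""
    full = hashlib.sha256(url.encode("utf-8")).digest()
    return full[:prefix_len], full

def check_url(url, local_prefixes, fetch_full_hashes):
    """Return True if the URL is considered unsafe.

    local_prefixes: set of hash prefixes from the local database.
    fetch_full_hashes: callable that asks the server for the full
    hashes matching a prefix (a network request in a real client).
    """
    prefix, full = hash_prefix(url)
    if prefix not in local_prefixes:
        return False  # fast path: no prefix match, definitely safe
    # A prefix collision is possible, so confirm with the full hashes.
    return full in fetch_full_hashes(prefix)
```

Most lookups never leave the fast path, which is why the local prefix database keeps both traffic and latency low.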



This year’s GUADEC came a bit unexpectedly for me. I wasn’t really planning to attend it because of my school and work, but when Iulian suggested that we should go, I didn’t have to think twice and agreed immediately. And I was not disappointed! Travelling to Manchester proved to be a great vacation where I could not only enjoy a few days off but also learn things and meet new and old friends.

Much like last year’s GUADEC, I attended some of the talks during the core days, where I got to learn more about new technologies such as Flatpak, Meson, and BuildStream (I’m really looking forward to seeing how this one turns out in the future), as well as about GNOME’s history and future prospects.

One of this year’s social events was GNOME’s 20th anniversary party, held Saturday night at the Museum of Science and Industry. I have to thank the organizing team for arranging such a great party and taking care of everything. This was definitely the highlight of this year!

As usual, I’ll say a few words about the location we were in – Manchester. I found Manchester a nice and cozy city, packed with everything: universities, museums, parks, and restaurants of all kinds for all tastes. The weather was not the best you can get, with rainy and sunny spells alternating on an hourly basis, but I guess that’s typical for the UK. Overall, I think Manchester is an interesting city where one would never get bored.

Thanks again to the GUADEC team and to GNOME for hosting such an awesome event!



One of the perks of being a GSoC student for GNOME is that you get invited to the annual GNOME Users And Developers European Conference. Therefore, I had the pleasure of travelling to Karlsruhe, Germany together with my fellow summer students and having a really amazing week.

Not only did I get the opportunity to meet my mentor, Michael Catanzaro, and other people from the community, but I also got to learn more about the whole GNOME stack and how to use it to its full power. All the presentations and talks that I attended proved really enlightening!

More than that, Karlsruhe is a beautiful city. I enjoyed all the pubs, restaurants, and parks that I had the pleasure to go to, and I was really impressed by the Karlsruhe Zoo and Karlsruhe Palace, both amazing places to visit.

I can easily say that this was the best part of the summer. Many thanks to GNOME for its sponsorship and I hope I’ll be able to attend GUADEC next year in Manchester too!


GSoC 2016: Final report

Picking up where I left off in my previous post: shortly after GUADEC, I managed to implement the sync logic, which proved a bit tricky but worked out well in the end. Last week I asked my mentor, Michael, to review my code, so for the past few days I have been fixing the things he suggested in his review comments.

Since my code relies heavily on Mozilla’s protocols (and thus may appear a bit confusing to someone who is not already familiar with them), Michael suggested that I write some thorough documentation/comments for the important functions, so that will be my next step for the following days.

Hopefully, all of my work will land in master in the coming weeks, but only after Iulian has finished his work on bookmarks, since a relevant part of my code relies on the new bookmarks code.

Currently, bookmarks are the only items synced between different Epiphany instances, so some future tasks would be to sync the other important items too, such as history, passwords, and tabs, and also to enhance the current code, since there is always room for improvement.

Google Summer of Code has ended now, and I want to thank GNOME for giving me the chance to be part of the community and do some great work, and also thank Michael and Iulian for supporting me and guiding me through the whole summer. I hope that this is only the beginning of my involvement with GNOME, and that I will get the opportunity to work with many of you in the future! 🙂


GSoC 2016: Progress #5

As I said in my previous post, the final part of my project is the implementation of the actual Sync logic. This is done exclusively by sharing data with the Storage Server. Since Mozilla’s Storage Server does not support push notifications, this is going to require a bit of tinkering on my part in order to make Sync work correctly.

What you need to know about the Storage Server is that it is essentially a dumb storage bucket – it only does what you tell it. Therefore, most of the complexity of Sync is the responsibility of the client. This is good for users’ data security, but can be bad for people implementing Sync clients 🙂

Something else you need to know about the Storage Server is how it stores the data. The basic elements are simple objects called Basic Storage Objects (BSOs), which are organized into named collections. A BSO is the generic JSON wrapper around all items passed into and out of the Storage Server, and it is assigned to a collection together with other related BSOs (e.g. the Bookmarks collection, the History collection, etc.).

Among other optional fields, every BSO contains the following mandatory fields:

  • id – an identifying string which must be unique within a collection.
  • payload – a string containing the data of the record.
  • modified – the timestamp at which the BSO was last modified, set by the server.
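As an illustration, a bookmark BSO might look roughly like this. The id and payload values here are made up, and in practice the payload is an encrypted, base64-encoded blob rather than cleartext JSON (more on that below).

```python
import json

# A hypothetical bookmark BSO as it might appear in the Bookmarks
# collection. Note that the payload is itself a JSON *string*, nested
# inside the outer JSON wrapper.
bso = {
    "id": "GyRGYDFWNkyB",        # unique within the collection (made up)
    "modified": 1471514956.43,   # timestamp set by the server
    "payload": json.dumps({"url": "https://www.gnome.org/",
                           "title": "GNOME"}),
}

# Reading the record back means parsing the inner JSON string.
record = json.loads(bso["payload"])
```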

As for talking to the Storage Server, we just send HAWK-signed HTTP requests (GET, POST, DELETE, etc.) to a given collection endpoint or to a given BSO endpoint.
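For the curious, the HAWK signing itself can be sketched roughly as follows. This is a minimal version per the HAWK scheme, ignoring payload hashes and the optional ext data, and all the field values in the test are made up.

```python
import base64
import hashlib
import hmac

def hawk_header(key_id, key, method, path, host, port, ts, nonce):
    """Build a minimal HAWK Authorization header (no payload hash, no ext)."""
    # HAWK signs a fixed-order, newline-separated normalized string.
    normalized = "\n".join([
        "hawk.1.header", str(ts), nonce,
        method.upper(), path, host.lower(), str(port),
        "", "",  # empty payload hash and empty ext
    ]) + "\n"
    mac = base64.b64encode(
        hmac.new(key, normalized.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")
    return ('Hawk id="%s", ts="%s", nonce="%s", mac="%s"'
            % (key_id, ts, nonce, mac))
```

The key id and key here are the storage credentials handed out by the Token Server (see the previous post).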

Since I will only deal with bookmarks sync for the moment, I’ll continue to talk from the bookmarks’ point of view. Maybe now is a good time to mention that my current work is highly dependent on Iulian’s work on the Bookmarks Subsystem Update, so I had to rebase his branch into mine so that I could work with the new bookmarks code.

Before a bookmark is uploaded to the server, there are a few steps that we need to take into consideration:

  1. Serialize. For an object to be serializable, it has to implement JSON-GLib’s serializable interface, so that’s what we did for EphyBookmark too.
  2. Encrypt. To be more specific, this is an AES 256 encryption using the Sync Key retrieved from the FxA Server.
  3. URL-safe encode. Since the BSO’s payload is a string sent over HTTP, we can’t send the raw encrypted bytes, so we have to base64 url-safe encode them.
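A rough Python sketch of this three-step pipeline, with the AES step replaced by a clearly labeled placeholder (a real client encrypts with the Sync Key using AES-256 and also adds an HMAC for integrity):

```python
import base64
import json

def encrypt(data, key):
    # PLACEHOLDER: a real Sync client would AES-256 encrypt `data` with
    # the Sync Key here. XOR is only a symmetric stand-in for the sketch.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_payload(bookmark, key):
    serialized = json.dumps(bookmark).encode("utf-8")    # 1. serialize
    encrypted = encrypt(serialized, key)                 # 2. "encrypt"
    return base64.urlsafe_b64encode(encrypted).decode()  # 3. encode

def read_payload(payload, key):
    encrypted = base64.urlsafe_b64decode(payload)        # decode
    serialized = encrypt(encrypted, key)                 # decrypt (XOR is its own inverse)
    return json.loads(serialized)                        # deserialize
```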

Next, we create the BSO with the given id and payload, send it to the server, and set the modified value as returned by the server. Obviously, when downloading a BSO from the server, the steps go in reverse order: decode -> decrypt -> deserialize.

OK, now that you know how to interact with the Storage Server, back to the Sync logic. I’m not sure if I’ll have time to finish implementing it before GUADEC; maybe I’ll do it there during one of the BoFs, who knows?

However, the actual Sync process should look something along these lines:

  • The user signs in.
  • Retrieve the Sync Key from the FxA Server.
  • Retrieve the storage endpoint and credentials from the Token Server.
  • Merge the local bookmarks with the remote ones from the Storage Server, if any.
  • Every time a bookmark is added or edited, (re)upload it to the server too.
  • Every time a bookmark is deleted, delete it from the server too.
  • Periodically check the server for changes to the Bookmarks collection. If any, mirror them to the local instance (this is going to prove a bit tricky for deletion, since we can no longer track a BSO that has been deleted).
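One possible shape for such a periodic sync pass, in Python terms. The server object and its methods are hypothetical stand-ins, not the real Storage Server API; the point is how deletions can be detected by diffing the remote id list against the ids we previously knew about.

```python
def sync_bookmarks(server, local, known_ids, last_sync):
    """One periodic sync pass against a hypothetical server object.

    server.get_changed(since) -> {id: bookmark} of records modified
    since the given timestamp. `local` is a dict of our bookmarks and
    `known_ids` the set of ids we have seen on the server before.
    """
    # Mirror additions and edits from the server.
    remote = server.get_changed(since=last_sync)
    for bso_id, bookmark in remote.items():
        local[bso_id] = bookmark
        known_ids.add(bso_id)
    # Deletion is the tricky case: a deleted BSO simply no longer
    # appears on the server, so diff the full remote id list against
    # what we knew about.
    remote_ids = set(server.get_ids())
    for gone in known_ids - remote_ids:
        local.pop(gone, None)
    known_ids &= remote_ids
    return server.timestamp()  # becomes last_sync for the next pass
```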

That’s it for the moment, see you at GUADEC!

GSoC 2016: Progress #4

In order to have a working form of sync with the help of the Mozilla servers, there are mainly three steps that need to be taken:

  1. Obtain a sessionToken and a keyFetchToken from the Firefox Accounts Server. These are automatically sent by the server upon sign in. The former allows us to obtain a signed certificate needed to talk to the Token Server, while the latter allows us to retrieve the sync keys needed to encrypt/decrypt synchronized data records.
  2. Obtain the storage endpoint, together with the storage credentials (id + key) from the Token Server. The storage endpoint represents the URL of the Storage Server that is assigned to the user upon the creation of the account. The storage credentials are used to sign all the HAWK requests sent to the Storage Server.
  3. Develop an algorithm based on multiple GET/PUT/POST/DELETE requests to the Storage Server to implement the actual sync logic. This should not only keep the data up to date on both the server and the remote clients, but also resolve any conflicts that may appear between clients due to concurrent requests.

As I mentioned in my previous post, step #1 is complete, so for the last couple of weeks I have focused on steps #2 and #3. I’ll only talk about #2 now and leave #3 as the subject of another post later this week.

OK, so in order to talk to the Token Server, one must possess a so-called signed BrowserID assertion. This is basically a signed certificate that Mozilla requires in order to convince subsequent relying parties that we control the account. To obtain one, we need to do the following:

  • Derive the sessionToken into the tokenID.
  • Generate a random RSA key pair.
  • Use the RSA public key together with the tokenID to sign the HAWK request to the certificate/sign endpoint of the FxA Server.
  • Check if the certificate received from the server is valid (i.e. contains the correct uid and algorithm).
  • From the certificate and the RSA key pair, generate the BrowserID assertion for the URL of the Token Server.
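As a taste of the crypto involved, the very first step – deriving the tokenID from the sessionToken – is an HKDF-SHA256 derivation. A stdlib-only sketch follows; the info string matches Mozilla’s onepw protocol documentation as far as I know, and the zeroed sessionToken is just a placeholder.

```python
import hashlib
import hmac

def hkdf_sha256(ikm, info, length, salt=b""):
    """Minimal HKDF (RFC 5869) extract-and-expand, standard library only."""
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The FxA protocol derives 32-byte values from the sessionToken; the
# first one is the tokenID used to identify/sign subsequent requests.
session_token = bytes(32)  # placeholder for the real 32-byte token
info = b"identity.mozilla.com/picl/v1/sessionToken"
derived = hkdf_sha256(session_token, info, 3 * 32)
token_id, req_hmac_key, request_key = (derived[0:32], derived[32:64],
                                       derived[64:96])
```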

Next, we attach the previously computed BrowserID assertion to the ‘authorization’ header of the request sent to the Token Server. If everything is OK, the server will reply with the endpoint of the Storage Server, together with storage credentials valid for one hour. As stated before, these will later be used to sign all the requests to the Storage Server.

I’ve only described brief parts of the algorithms, but I think this is just enough to get an idea of how the client-server communication works. Stay tuned for the next posts!

GSoC 2016: Progress #3

My last week has been quite busy, but it all paid off in the end, as I managed to overcome the issue I had with the login phase. Thankfully, I was able to look at how the postMessage() API is used to do the login in Firefox for iOS and implement it myself in Epiphany.

To summarize it, this is how it’s done:

  1. Load the FxA iframe with the service=sync parameter in a WebKitWebView.
  2. Inject a few JavaScript lines to listen to FirefoxAccountsCommand events (sent by the FxA Server). This is done with a WebKitUserContentManager and a WebKitUserScript.
  3. In the event listener, use postMessage() to send back to WebKit the data received from the server.
  4. In the C code, register a script message handler with a callback that gets called whenever something is sent through the postMessage() channel. This is done with webkit_user_content_manager_register_script_message_handler().
  5. In the callback you now hold the server’s response to your request. This includes all the tokens you need to retrieve the sync keys.
  6. Profit!

Basically, postMessage() acts like a forwarder between JavaScript and WebKit. Cool!

With this new sign-in method, users can also benefit from the possibility of creating new Firefox accounts. The iframe contains a “Create an account” link that shows a form through which users can create a new account. The account has to be verified before the user can sign in.