Hello world


Like every other software developer, I have written Hello World hundreds of times. This time, however, is the first time I write a Hello World to inaugurate a blog.

I’m currently working as a Deep Learning (DL) / Machine Learning (ML) researcher, so expect the upcoming posts to focus more and more on these topics. I’m planning to write an ML/DL article every day or every week (or at least I’ll try).

I decided to start a blog because I’ve always written things down to fix concepts in my mind; instead of writing them on paper, I’ll write them here. This way everyone can read them: in my opinion, knowledge should be shared as much as possible.

Furthermore, since the blog is built with Jekyll and hosted on GitHub, everyone can contribute!

How to contribute

  1. Fork the repository
  2. Create a new branch
  3. Contribute!
  4. Open a pull request

These steps are OK if you want to fix typos and similar things.

If you want to change the meaning of an article because you believe that I’m wrong, please open an issue and let’s discuss it first.
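
In practice, the four steps above map to a handful of git commands. Here is a minimal sketch of the workflow from the command line; the repository URL, branch name, and file path below are placeholders, so replace them with the real ones.

```bash
# 1. Fork the repository on GitHub (using the "Fork" button), then clone your fork.
#    <your-username> and <blog-repository> are placeholders for the actual names.
git clone https://github.com/<your-username>/<blog-repository>.git
cd <blog-repository>

# 2. Create a new branch for your changes
git checkout -b fix-typo-hello-world

# 3. Contribute: edit the post (Jekyll posts live under _posts/), then commit
git add _posts/<post-to-fix>.md
git commit -m "Fix typo in Hello World post"

# 4. Push the branch to your fork, then open a pull request on GitHub
git push origin fix-typo-hello-world
```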
