HTTP 2.0 – what's new

HTTP 2.0 has been around since 2015, but many developers working with REST APIs haven't looked into it yet. A real shame, because it lets you do a lot more!

Most major browsers added support for it shortly after the specification was released.

Because data transfer has become more flexible, new ways of sending data have been developed. Frontend developers have probably caught on to this, but I think most backend developers are not fully aware of it yet.


HTTP 2.0 takes a new approach with the introduction of a binary framing layer: requests are no longer sent as plain text. Messages are encapsulated in binary format and decomposed into smaller packets.

Multiple streams of data, each consisting of request/response messages split into smaller units called frames, share the same TCP connection: the frames are interleaved in transit and reassembled at the destination.

Using a single TCP connection reduces the resources required for the transmission.


A stream prioritization mechanism has been introduced to tackle the performance issues caused by several streams waiting for the same resource.

As a developer you can take advantage of this by setting weights on your requests based on your rendering and user-interaction needs. For instance, loading a large image might get a low priority.


While previously resources (like images) and HTML pages were sent separately, the server can now send them together, before the client even requests them. This mechanism is called "server push" and addresses the major drawback of resource inlining: the client can decide whether to cache or decline a resource pushed along with the HTML document.

So although the client may not have requested a resource yet, you can send cacheable resources along with the HTML content anyway, and even prioritize them via stream prioritization.

The client can choose to:

  • cache the pushed resources and then reuse them for faster navigation
  • limit the amount of concurrent pushed streams
  • deactivate the resource pushing completely.

With server push you can save a significant amount of loading time. Even seconds.


Since HTTP 1.1, compression algorithms have been used to reduce the size of transmitted HTTP messages (especially CSS and JavaScript files).

However, headers (including cookies) were not compressed before HTTP 2.0 came out, which made them a recurring bottleneck.

For this reason HTTP 2.0 introduces header compression, using HPACK, a compression format designed specifically for this purpose.
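On the backend side you can experiment with all of this from Java 11 onwards, whose built-in HttpClient speaks HTTP/2. A minimal sketch (the URL is just a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Demo {
    public static void main(String[] args) throws Exception {
        // Ask for HTTP/2; the client silently falls back to HTTP/1.1
        // if the server does not support it.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/")) // placeholder URL
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // version() reports the protocol that was actually negotiated
        System.out.println("Negotiated protocol: " + response.version());
    }
}
```

Checking `response.version()` is a quick way to confirm whether the server really negotiated HTTP/2.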


For more information check:

Spring Framework – Hot swapping!!!

I have already worked with Spring on a couple of projects and watched several tutorials online, but only one of them showed how to reload changes without stopping and restarting the application each time.

Well, it's super easy. If you are using Maven, all you have to do is add the following dependency:
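Assuming a Spring Boot project, the dependency in question is spring-boot-devtools, which triggers an automatic restart whenever files on the classpath change:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <scope>runtime</scope>
    <optional>true</optional>
</dependency>
```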


That's it!

You can find more information at


Java 14 Feature: Records

Two weeks ago I finally became an Oracle Certified Java 11 developer, so I can finally enjoy taking a look at the new features and improvements of the latest versions.

The introduction of records in Java 14 is a very interesting feature that you cannot miss.

A plain old Java object with getters, setters, equals and hashCode requires quite a few lines of code:

package de.demo;

public class Person {

	String firstname;
	String lastname;
	int age;

	public Person(String firstname, String lastname, int age) {
		this.firstname = firstname;
		this.lastname = lastname;
		this.age = age;
	}

	public String getFirstname() {
		return firstname;
	}

	public void setFirstname(String firstname) {
		this.firstname = firstname;
	}

	public String getLastname() {
		return lastname;
	}

	public void setLastname(String lastname) {
		this.lastname = lastname;
	}

	public int getAge() {
		return age;
	}

	public void setAge(int age) {
		this.age = age;
	}

	@Override
	public int hashCode() {
		final int prime = 31;
		int result = 1;
		result = prime * result + age;
		result = prime * result + ((firstname == null) ? 0 : firstname.hashCode());
		result = prime * result + ((lastname == null) ? 0 : lastname.hashCode());
		return result;
	}

	@Override
	public boolean equals(Object obj) {
		if (this == obj)
			return true;
		if (obj == null)
			return false;
		if (getClass() != obj.getClass())
			return false;
		Person other = (Person) obj;
		if (age != other.age)
			return false;
		if (firstname == null) {
			if (other.firstname != null)
				return false;
		} else if (!firstname.equals(other.firstname))
			return false;
		if (lastname == null) {
			if (other.lastname != null)
				return false;
		} else if (!lastname.equals(other.lastname))
			return false;
		return true;
	}
}

But with the new record syntax it's as simple as writing:

public record Person (String firstName, String lastName, int age) {}

This one line of code can live in its own source file, just like a class declaration.

You can also add static fields and methods and overload constructors within the braces, but you don't need to implement toString, equals, hashCode, the accessors or the canonical constructor taking all the fields: they are generated for you. (Note that records have no setters; the components are final.)
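For instance, extending the one-liner above (a sketch; in Java 14 records are still a preview feature, so you need to compile with --enable-preview):

```java
public record Person(String firstName, String lastName, int age) {

    // static members are allowed
    static final int ADULT_AGE = 18;

    // compact canonical constructor: validate the components
    public Person {
        if (age < 0) {
            throw new IllegalArgumentException("age must not be negative");
        }
    }

    // extra instance methods are allowed too
    public boolean isAdult() {
        return age >= ADULT_AGE;
    }
}
```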

Then you can use it as if it were a POJO:

public static void main(String[] args) {
	Person person = new Person("laura", "liparulo", 36);
	System.out.println("First name: " + person.firstName());
	System.out.println("Last name: " + person.lastName());
	System.out.println("Age: " + person.age());
	System.out.println(person);
}

This code would print:

First name: laura
Last name: liparulo
Age: 36
Person[firstName=laura, lastName=liparulo, age=36]
As we can see in the javadoc, java.lang.Record is an abstract class that declares equals, hashCode and toString as abstract methods.

If you are using Eclipse, make sure you have the Java 14 support package installed:

Stellar payment network concepts – distributed ledger and consensus protocol

Last week I found the time to dive into the cryptocurrencies world. After reading a bit, I decided to buy some Ethereum and Stellar Lumens.

Stellar Lumens are still very cheap and the project itself is interesting. So while waiting and hoping for the price to go up, let's get started with some Stellar development too.



Stellar is a payment network based on the exchange of its currency Lumens.

Many people have heard about Blockchain, which is a specific way of implementing a distributed ledger by using chains of blocks containing transactions data.

But a ledger is something more generic.

Think of a distributed ledger as a shared database that exists across multiple locations and multiple participants, avoiding a single point of failure and benefiting from a "community" to perform transactions or any other data transfer.

Every entry recorded in a distributed ledger has a unique cryptographic signature.

We can think of blockchain technologies as a subset of distributed ledger technologies.

Distributed ledgers are controlled by the participants and therefore allow more transparency.

The Stellar network maintains and replicates the distributed ledger on every Stellar server node.

The Stellar ledger can be seen as a distributed exchange order book that allows users to store the transactions.

With a distributed ledger sending money becomes closer to sending an email: fast and very cheap!



The Stellar Consensus Protocol (SCP) is an implementation of the Federated Byzantine Agreement (FBA) model. Its goal is to provide a secure consensus mechanism.

The key properties of SCP are:

  • low latency
  • digital signatures and hash families
  • decentralized control
  • flexible trust

A quorum of trusted parties (also called a consortium) is required to validate and secure transactions over the network.

The consensus for every transaction is achieved by verifications and updates among servers.

Users can configure the set of parties they want to trust.

You can read more about the SCP protocol in the official documentation:

A simplified version is also available here:

Data Science with Java – Part 4 : Testing Hypothesis with the inference package

To test whether a certain hypothesis is likely to be true, we can take advantage of the Apache Commons Math inference package.

Going through the tests included in the package is a good opportunity to learn more about statistics and probability theory.

Let's consider the following binomial test about flipping a coin:

import org.apache.commons.math3.stat.inference.AlternativeHypothesis;
import org.apache.commons.math3.stat.inference.BinomialTest;

		BinomialTest binomialTest = new BinomialTest();

		double nullHypothesis = 0.5; // fair coin
		int numberOfSuccesses = 9; // number of heads (suspiciously many)
		// TWO_SIDED represents H0: p = p0, H1: p != p0
		AlternativeHypothesis alternativeHypothesis = AlternativeHypothesis.TWO_SIDED;
		int numberOfTrials = 10;

		// Returns the observed significance level, or p-value, associated with
		// the binomial test
		double significanceLevel = binomialTest.binomialTest(numberOfTrials, numberOfSuccesses, nullHypothesis,
				alternativeHypothesis);

		double alpha = 0.03; // significance level of the test
		// Returns whether the null hypothesis can be rejected at the given
		// significance level: true if the p-value < alpha
		boolean rejected = binomialTest.binomialTest(numberOfTrials, numberOfSuccesses, nullHypothesis,
				alternativeHypothesis, alpha);

		System.out.println("The significance level is " + significanceLevel);
		System.out.println("Can we reject the null hypothesis?" + rejected);


The result that we get is:

The significance level is 0.021484375000000003
Can we reject the null hypothesis?true

The significance level (the p-value) is lower than alpha, which means we can reject the null hypothesis: the coin is very unlikely to be fair.
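As a sanity check, we can compute the two-sided p-value by hand: for a fair coin it is the probability of an outcome at least as extreme as 9 heads in either direction, i.e. P(X ≥ 9) + P(X ≤ 1) for X ~ Binomial(10, 0.5):

```java
public class BinomialPValueByHand {

    // binomial coefficient "n choose k"
    static long choose(int n, int k) {
        long result = 1;
        for (int i = 1; i <= k; i++) {
            result = result * (n - i + 1) / i;
        }
        return result;
    }

    public static void main(String[] args) {
        int n = 10;
        // P(X >= 9) + P(X <= 1): the outcomes 9, 10, 0 and 1 heads
        double pValue = 0;
        for (int k : new int[] {9, 10, 0, 1}) {
            pValue += choose(n, k) * Math.pow(0.5, n);
        }
        System.out.println(pValue); // 22/1024 = 0.021484375, matching the library
    }
}
```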

In the next posts I will write about the ChiSquare and KolmogorovSmirnov tests too. Stay tuned! 🙂

Data science with Java – Part 3: Statistics with Apache Commons Math library

Although some statistical analysis can be performed with plain Java 8 code (thanks to lambda expressions and the Stream API), a lot more can be achieved with fewer lines of code using libraries such as Google Guava or the Apache Commons Mathematics Library.

I am a big fan of the Apache Foundation, so I will set Google Guava aside for now.

The Commons Math library offers a couple of options for each statistical function.

You can use the class DescriptiveStatistics, passing the array of doubles to the constructor (here `out` is a static import of System.out):

DescriptiveStatistics descriptiveStatistics = new DescriptiveStatistics(testData);
out.println("\nThe mean is " + descriptiveStatistics.getMean());
out.println("The standard deviation is " + descriptiveStatistics.getStandardDeviation());
out.println("The median is " + descriptiveStatistics.getPercentile(50));

or use the dedicated classes Mean, Median, etc.:

public static double getMean(double[] testData) {
    Mean mean = new Mean();
    return mean.evaluate(testData);
}
A StandardDeviation can be constructed with the bias-corrected sample formula (Bessel's correction, n − 1 in the denominator) by passing "true" to the constructor, or with the population formula (n in the denominator) by passing "false":

private static double getPopulationStandardDeviation(double[] testData) {
    // population formula: divides by n (biased when used on a sample)
    StandardDeviation sdPopulation = new StandardDeviation(false);
    return sdPopulation.evaluate(testData);
}

private static double getSampleStandardDeviation(double[] testData) {
    // bias-corrected sample formula: divides by n - 1
    StandardDeviation sdSample = new StandardDeviation(true);
    return sdSample.evaluate(testData);
}
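To see exactly what the flag changes, here is the same computation written out by hand (plain Java, no library; the data values are arbitrary):

```java
public class StdDevByHand {

    static double stdDev(double[] data, boolean biasCorrected) {
        double mean = 0;
        for (double x : data) {
            mean += x;
        }
        mean /= data.length;

        double sumOfSquares = 0;
        for (double x : data) {
            sumOfSquares += (x - mean) * (x - mean);
        }

        // sample formula divides by n - 1 (Bessel's correction),
        // population formula divides by n
        int divisor = biasCorrected ? data.length - 1 : data.length;
        return Math.sqrt(sumOfSquares / divisor);
    }

    public static void main(String[] args) {
        double[] data = {2, 4, 4, 4, 5, 5, 7, 9};
        System.out.println("population (n):  " + stdDev(data, false)); // 2.0
        System.out.println("sample (n - 1):  " + stdDev(data, true));  // ~2.138
    }
}
```

The sample estimate is always slightly larger, because dividing by n − 1 compensates for the mean itself being estimated from the data.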


Data Science with Java – Part 2: CSV data into charts

A nice Java library called opencsv allows you to import CSV file content and make charts out of it.

Let's consider, for example, unemployment in Germany since the reunification. We will use a CSV file with four columns: year, number of unemployed people in Germany as a whole, in the west, and in the east.
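The file is expected to look something like this (the numbers below are made-up placeholders; only the four-column format matters):

```
1991,2600000,1600000,1000000
1992,3000000,1700000,1300000
1993,3400000,2200000,1200000
```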


We can represent it with an index chart by using just JavaFX and the opencsv library:

package de.datascience.charts;

import java.io.FileReader;

import com.opencsv.CSVReader;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.chart.CategoryAxis;
import javafx.scene.chart.LineChart;
import javafx.scene.chart.NumberAxis;
import javafx.scene.chart.XYChart;
import javafx.stage.Stage;

public class UnemploymentGermany extends Application {

	@Override
	public void start(Stage stage) throws Exception {
		stage.setTitle("Index Chart Sample");
		final NumberAxis yAxis = new NumberAxis(0, 5000000, 500000); // tick every 500,000
		final CategoryAxis xAxis = new CategoryAxis();

		final LineChart<String, Number> lineChart = new LineChart<>(xAxis, yAxis);
		yAxis.setLabel("People without job");
		lineChart.setTitle("Unemployment in Germany");

		XYChart.Series<String, Number> series = new XYChart.Series<>();
		XYChart.Series<String, Number> seriesWest = new XYChart.Series<>();
		XYChart.Series<String, Number> seriesEast = new XYChart.Series<>();
		series.setName("Germany");
		seriesWest.setName("West Germany");
		seriesEast.setName("East Germany");

		try (CSVReader dataReader = new CSVReader(new FileReader("docs/unemployment_germany.csv"))) {
			String[] nextLine;
			while ((nextLine = dataReader.readNext()) != null) {
				String year = nextLine[0];
				int population = Integer.parseInt(nextLine[1]);
				series.getData().add(new XYChart.Data<>(year, population));
				int populationWest = Integer.parseInt(nextLine[2]);
				seriesWest.getData().add(new XYChart.Data<>(year, populationWest));
				int populationEast = Integer.parseInt(nextLine[3]);
				seriesEast.getData().add(new XYChart.Data<>(year, populationEast));
			}
		}

		lineChart.getData().addAll(series, seriesWest, seriesEast);
		Scene scene = new Scene(lineChart, 500, 400);
		stage.setScene(scene);
		stage.show();
	}

	public static void main(String[] args) {
		launch(args);
	}
}
The output will be the following:

Data Science with Java – Part 1: bar charts with FX

This year some books about using Java for data science have been released and I am very happy about it!!! It doesn't have to be Python at any cost.

Let's dive into this new Java adventure. 🙂

Some basic visualization can be achieved with the JavaFX chart classes found in the javafx.scene.chart package.

The following code creates a bar chart of the shares of expenditures by category in four countries:

package de.datascience.charts;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.chart.BarChart;
import javafx.scene.chart.CategoryAxis;
import javafx.scene.chart.NumberAxis;
import javafx.scene.chart.XYChart;
import javafx.stage.Stage;

public class ExpendituresShares extends Application {

    final static String FOOD = "Food";
    final static String HOUSING = "Housing";
    final static String TRANSPORTATION = "Transportation";
    final static String HEALTHCARE = "Health care";
    final static String CLOTHING = "Clothing";
    final static String USA = "U.S.A.";
    final static String UK = "United Kingdom";
    final static String CANADA = "Canada";
    final static String JAPAN = "Japan";

    final CategoryAxis xAxis = new CategoryAxis();
    final NumberAxis yAxis = new NumberAxis();

    final XYChart.Series<String, Number> usaSeries = new XYChart.Series<>();
    final XYChart.Series<String, Number> canadaSeries = new XYChart.Series<>();
    final XYChart.Series<String, Number> ukSeries = new XYChart.Series<>();
    final XYChart.Series<String, Number> japanSeries = new XYChart.Series<>();

    public void simpleBarChartByCountry(Stage stage) {
        stage.setTitle("Bar Chart");
        final BarChart<String, Number> barChart = new BarChart<>(xAxis, yAxis);
        barChart.setTitle("Shares of expenditures by Country");

        usaSeries.setName(USA);
        canadaSeries.setName(CANADA);
        ukSeries.setName(UK);
        japanSeries.setName(JAPAN);

        addDataItem(usaSeries, FOOD, 14);
        addDataItem(usaSeries, HOUSING, 26);
        addDataItem(usaSeries, TRANSPORTATION, 17);
        addDataItem(usaSeries, HEALTHCARE, 8);
        addDataItem(usaSeries, CLOTHING, 4);

        addDataItem(canadaSeries, FOOD, 15);
        addDataItem(canadaSeries, HOUSING, 21);
        addDataItem(canadaSeries, TRANSPORTATION, 20);
        addDataItem(canadaSeries, HEALTHCARE, 7);
        addDataItem(canadaSeries, CLOTHING, 6);

        addDataItem(ukSeries, FOOD, 20);
        addDataItem(ukSeries, HOUSING, 24);
        addDataItem(ukSeries, TRANSPORTATION, 15);
        addDataItem(ukSeries, HEALTHCARE, 2);
        addDataItem(ukSeries, CLOTHING, 6);

        addDataItem(japanSeries, FOOD, 23);
        addDataItem(japanSeries, HOUSING, 22);
        addDataItem(japanSeries, TRANSPORTATION, 10);
        addDataItem(japanSeries, HEALTHCARE, 4);
        addDataItem(japanSeries, CLOTHING, 4);

        barChart.getData().addAll(usaSeries, canadaSeries, ukSeries, japanSeries);
        Scene scene = new Scene(barChart, 800, 600);
        stage.setScene(scene);
        stage.show();
    }

    public void addDataItem(XYChart.Series<String, Number> series, String x, Number y) {
        series.getData().add(new XYChart.Data<>(x, y));
    }

    @Override
    public void start(Stage stage) {
        simpleBarChartByCountry(stage);
    }

    public static void main(String[] args) {
        launch(args);
    }
}

If you run the main you should see the following window:

Python : Basic statistics with the numpy module

The numpy module features some useful functions for statistics, like mean() and median().

For example, let's consider a 2D array with the age and height of some people and print out some statistics:

#! /usr/bin/env python
import numpy as np

#age, height in meters
person = [[11,1.56],[4, 0.80], [44, 1.88], [23, 1.68], [55, 1.74]]

np_person = np.array(person)


age = np_person[:,0]

height = np_person[:,1]
print("average age: " + str(np.mean(age)))
print("average height: " + str(np.mean(height)))

# the standard deviation, rounded to two decimals
std_height= round(np.std(height),2)

print("standard deviation of the height: "+ str(std_height))

corr = np.corrcoef(np_person[:,0], np_person[:,1])
print("Correlation: " + str(corr))
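The median() function mentioned above works the same way; for the heights in this example it returns the middle element of the sorted values:

```python
#! /usr/bin/env python
import numpy as np

heights = np.array([1.56, 0.80, 1.88, 1.68, 1.74])

# median: the middle element of the sorted values
print("median height: " + str(np.median(heights)))  # 1.68
```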

The code can also be found on GitHub:

Python scripting with Linux: which shebang?

If you want to execute Python scripts on Linux you need to add the shebang line "#! /usr/bin/env python".

It must be added at the top of the file.

The shebang allows you to run the script like any other executable script. Among the many options to run it, assuming the script name is "script", one is:

> ./script

The file must be made executable first:

> sudo chmod +x script

Assuming we want to print a homogeneous array created with the numpy module, the script might look like this:

#! /usr/bin/env python

import numpy as np

array1 = np.array([1,2,3,4])

print(array1)
print(type(array1))

It will print the following lines:

[1 2 3 4]
<type 'numpy.ndarray'>