Golang – using pointers

Go also uses pointers, but it doesn’t require any explicit memory allocation or deallocation code.

It’s very straightforward.

The ampersand & takes the address of a variable (creating a pointer), and the asterisk * dereferences a pointer to get the value it points to.

Simply:

package main

import (
	"fmt"
)

func main() {
	s := "this is a string"

	string_pointer := &s

	fmt.Println(s)

	//dereference the pointer with the asterisk
	fmt.Println("reference by pointer: ", *string_pointer)
	//print the pointer itself to see the address
	fmt.Println("address: ", string_pointer)
}
}
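You can also write through a pointer: assigning to the dereferenced pointer changes the original variable. A minimal sketch:

```go
package main

import "fmt"

func main() {
	s := "this is a string"
	p := &s

	// assigning through the dereferenced pointer
	// modifies the original variable
	*p = "a new string"

	fmt.Println(s) // prints "a new string"
}
```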

Error Handling with Golang

In Go, error handling has to be done explicitly.

For example, you can use the built-in panic function:

package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {

	var sum int

	//skip the program name (os.Args[0])
	for _, a := range os.Args[1:] {

		i, err := strconv.Atoi(a)

		if err != nil {
			panic(fmt.Sprintf("Invalid value : %v", err))
		}

		sum += i

	}

	fmt.Printf("Sum = %v\n", sum)

}

As you can see, if you run the code with non-numeric arguments, the program panics:

go run error_handling.go 1 a c

Then you get:

panic: Invalid value : strconv.Atoi: parsing "a": invalid syntax

goroutine 1 [running]:
main.main()
        C:/Users/lliparul/Desktop/go/3_ERROR_HANDLING/error_handling.go:19 +0x159
exit status 2
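Note that panic aborts the whole program. The more idiomatic Go pattern is to return the error to the caller and let it decide how to react; a sketch using a hypothetical sumArgs helper:

```go
package main

import (
	"fmt"
	"strconv"
)

// sumArgs returns an error instead of panicking,
// leaving the caller in charge of handling it.
func sumArgs(args []string) (int, error) {
	sum := 0
	for _, a := range args {
		i, err := strconv.Atoi(a)
		if err != nil {
			return 0, fmt.Errorf("invalid value %q: %w", a, err)
		}
		sum += i
	}
	return sum, nil
}

func main() {
	sum, err := sumArgs([]string{"1", "2", "3"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("Sum =", sum)
}
```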

Inspired by the course “Getting started with Go Programming Language” by Matthew Spaulding (Packt Publishing)

Introduction to Golang

Go was developed at Google and is meant to be a better systems programming language than C and C++.

It’s compiled and not interpreted.

It’s fast and suitable for concurrency.

Go supports vendoring for dependency management: once a dependency is downloaded, its code is included in the project.

Hello world program

A simple Hello world program (a file called “hello_world.go”) looks like this:

package main

import "fmt"

func main() {
	fmt.Println("Hello, World!")
}

As you can see, the main function in the main package is the entry point of a Go application.

Once Go is installed on your machine, you can simply run:

go run hello_world.go

The fmt package is a fundamental standard library package for formatted I/O (printing, string formatting, etc.).

Assignments, data types, control structures

To assign values, the := and = operators are used (a colon followed by an equals sign declares and assigns in one step). You can specify data types like int explicitly, or omit them and let the compiler infer them.

a := 5  
var b int = 4

Strings always use double quotes.

fmt.Println("a is negative!")

Of course you have conditional statements like if and switch:

	if a < 0 {
		fmt.Println("a is negative!")
	} else if a > b {
		fmt.Println("a is bigger than b")
	}

	switch a {
	case 10:
		fmt.Println("It's 10")
	case 5:
		fmt.Print("It's five")
	}
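switch also supports a default branch for values that match no case; a minimal sketch with a hypothetical describe helper:

```go
package main

import "fmt"

// describe returns a label for a few known values,
// falling back to the default branch otherwise.
func describe(a int) string {
	switch a {
	case 10:
		return "It's 10"
	case 5:
		return "It's five"
	default:
		return "It's something else"
	}
}

func main() {
	fmt.Println(describe(7)) // prints "It's something else"
}
```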

And three types of for loops.

Simple loop:


	for i := 0; i < 4; i++ {
		fmt.Println(i)
	}

Range loop:

	for _, i := range []int{1, 2, 3, 4} {
		fmt.Println(i)
	}

Infinite loop:

	for {
		fmt.Println("this is infinite!")
	}
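Strictly speaking, a for with only a condition also works, behaving like a while loop in other languages:

```go
package main

import "fmt"

func main() {
	// keep doubling until the condition fails,
	// just like a while loop
	n := 1
	for n < 100 {
		n *= 2
	}
	fmt.Println(n)
}
```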

Ansible – Let’s write our first playbook

Ansible is a configuration management tool that has been around for a while.

So far I had never had the chance to try it out, but now I have found some time for it!

Let’s take a quick look together.

Assuming that you have installed Ansible on your Linux machine, let’s create our first playbook.

We can try to install and run an nginx server.

Let’s create a file called “nginx.yml” with the following content:

---
- hosts: 127.0.0.1
  connection: local
  become: true
  tasks:
    - name: install nginx
      apt:
        pkg: nginx
        state: present

    - name: start nginx
      service:
        name: nginx
        state: started
        enabled: true

Save the file and then run:

ansible-playbook nginx.yml --connection=local

This will print the plays and tasks as they are executed, ending with a recap.

So your first playbook has been run!

If you now run “which nginx” you will get the path where nginx has been installed.

You can see that nginx is running by visiting the localhost address in the browser.

You can also check by running the following command in the terminal:

systemctl –type=service –state=running | grep nginx

nginx is now a service running on Linux!

You can stop it anytime, by running:

> sudo service nginx stop

Kubernetes – Storage options – ConfigMap and Secrets

As pods might need the same configuration values or credentials, you can store them in a single location by using a ConfigMap or a Secret.

This mechanism is called “configuration injection”.

You can create a config map the imperative way and specify the source, that can be:

  • from literal
  • from file
  • from env-file

For example:

$ kubectl create configmap cm-db --from-literal=DB_NAME=mydb --from-literal=DBHOST=mydbsite.net

You can generate it from a file, like:

$ kubectl create configmap game-config --from-file=/opt/config-maps/my-map.txt

The file “my-map.txt” might look like:

# Environment files use a new line for each variable.
KIAMOL_CHAPTER=ch04
KIAMOL_SECTION=ch04-4.1
KIAMOL_EXERCISE=try it now

You can as well define a map directly in a yaml file, in the declarative way. For example, to use the file “nginx.conf”:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  nginx.conf: |
    server {
      location / {
        root /data/www;
      }
      location /images/ {
        root /data;
      }
    }

Or simply key-value entries like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: random-generator-config
data:
  PATTERN: "logging"
  EXTRA_OPTIONS: "high-secure,native"
  SEED: "432576345"


You can use the alias “cm” as well. For example, to get the details:

$ kubectl get cm my-cf-map
 

The config map needs to exist before you deploy the pod.

You can add an environment variable from a config map for a specific container, like:

apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    env:
    - name: PATTERN
      valueFrom:
        configMapKeyRef:                   
          name: random-generator-config
          key: PATTERN

Or even add all the entries at once, with an optional prefix, using “envFrom”:

apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    envFrom:
    - configMapRef:
        name: random-generator-config
      prefix: CONFIG_

You can also mount a config map as a volume:

apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    configMap:                            
      name: random-generator-config

This way the volume will expose each ConfigMap entry as a file under the mount path.

SECRETS
Secrets have a similar API to ConfigMaps, but Kubernetes manages them differently: on the nodes they are surfaced to pods from memory (tmpfs) rather than written to disk, and they can be encrypted in transit and at rest if the cluster is configured accordingly.

There are three types of secret:

  • docker-registry
  • TLS
  • generic

Assuming you want to put your username and password in two different text files, called “username.txt” and “password.txt” respectively, you can create a secret like the following:

$ kubectl create secret generic database-creds --from-file=username.txt --from-file=password.txt

$ kubectl create secret generic passwords --from-literal=password=foobar

Notice that you need to include “generic” in the command.

You create tls and docker-registry entries in the same way:

$ kubectl create secret tls demo-tls --key "auth.key" --cert "auth.cer" -n foo

$ kubectl create secret docker-registry gcr-pull-key --docker-server=gcr.io --docker-username=_json_key --docker-password="$(cat gke_key.json)" --docker-email=xyz@slb.com

In the manifest file, you can reference secrets:


apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - image: k8spatterns/random-generator:1.0
    name: random-generator
    env:
    - name: LOG_FILE
      value: /tmp/random.log                   
    - name: PATTERN
      valueFrom:
        configMapKeyRef:                       
          name: random-generator-config        
          key: PATTERN
    - name: SEED
      valueFrom:
        secretKeyRef:                          
          name: random-generator-secret
          key: seed

Just like for the ConfigMap, you can import all the keys as env variables at once, like:

  containers:
  - name: secret-handler
...
    envFrom:
    - secretRef:     
        name: secret1

Besides using the “secretKeyRef” parameter, you can reference a file in a volume and then specify the secret name in the “volumes” section.

For example:

spec:
  containers:
    - name: db
      image: postgres:11.6-alpine
      env:
      - name: POSTGRES_PASSWORD_FILE       # Sets the path to the file
        value: /secrets/postgres_password
      volumeMounts:                        # Mounts a Secret volume
        - name: secret                     # Names the volume
          mountPath: "/secrets"            
  volumes:
    - name: secret
      secret:                             # Volume loaded from a Secret 
        secretName: todo-db-secret-test   # Secret name
        defaultMode: 0400                 # Permissions to set for files
        items:                            # Optionally names the data items 
        - key: POSTGRES_PASSWORD  
          path: postgres_password


Credentials are only encoded in base64, so you can basically read them in clear text by running:

$ echo <secret-string> | base64 -d

This means that it’s better to use a dedicated tool if you want more security!

As secrets can be decoded (i.e. read in clear text!) by whoever has access to the cluster, they cannot be considered a good solution for production. In other words, you should look at a dedicated tool like HashiCorp Vault.

Kubernetes – Imperative Job creation

Let’s create a job the imperative way, using the docker/whalesay image:

kubectl create job whalesay --image=docker/whalesay --dry-run=client -o yaml > job.yaml -- cowsay I am going to ace CKAD!

We are using the “dry-run” option to generate a yaml manifest without actually creating the job.

In the last part of the command we add a command for the container, which will be put directly in the manifest.

Once the file is created, we can add parameters like completions, parallelism and backoffLimit under the spec section.
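With those fields added, the manifest might look like this (the numbers are just example values):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: whalesay
spec:
  completions: 10
  parallelism: 3
  backoffLimit: 6
  template:
    spec:
      containers:
      - command:
        - cowsay
        - I am going to ace CKAD!
        image: docker/whalesay
        name: whalesay
      restartPolicy: Never
```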

Then we need to create the job, by running:

kubectl create -f job.yaml -n <your-namespace>

After a while we can see that the pods have run and completed.

If you inspect the logs of one of the pods, you can see the funny whale comic:

kubectl logs whalesay-7h27f

MORE ABOUT JOBS

Specifying the restartPolicy is mandatory for a Job (it must be either Never or OnFailure).

Notice that:

  • a job is persisted and survives cluster restarts
  • a completed job is kept for tracking purposes

HTTP 2.0 – what’s new

HTTP 2.0 has been around since 2015, but many developers working with REST APIs haven’t learned about it yet. A real shame, because you can do a lot more with it!

Most browsers started supporting it immediately.

As the flexibility of data transfer has been improved, new ways of sending data have become possible. Frontend developers have probably caught on to this, but I think most backend developers are not fully aware of it.

 

In HTTP 2.0 a new approach has come to life through the introduction of a binary framing layer. The requests are no longer sent as plain text. The messages are encapsulated in binary format, decomposed into smaller packets.

Multiple streams of data, consisting of multiple request/response messages that are now split into smaller units (frames), share the same TCP connection, by being interleaved and then reassembled at the destination.

The use of a single TCP connection reduces the resources required for the transmission.

 

A stream prioritization mechanism has been introduced to tackle the performance issues caused by several streams waiting for the same resource.

If you are a developer you can take advantage of it by setting weights in your requests based on your rendering and user-interaction needs. For instance, loading a large image might get a low priority.

 

While previously resources (like images) and HTML pages were sent separately, the server can now send them together before the client requests them. This mechanism is called “server push” and fixes the major drawback of resource inlining: the client can decide whether to cache or decline a resource pushed along with the HTML document. Pushed resources can also be prioritized via stream prioritization.

The client can choose to:

  • cache the pushed resources and then reuse them for faster navigation
  • limit the amount of concurrent pushed streams
  • deactivate the resource pushing completely.

With server push you can save a significant amount of loading time. Even seconds.

 

Since HTTP 1.1, compression algorithms have been used to reduce the size of transmitted HTTP messages (especially CSS and JavaScript files).

However, since headers (including cookies) weren’t compressed before HTTP 2.0 came out, they were always a potential bottleneck.

For this reason, HTTP 2.0 introduces header compression using HPACK, a compression format designed specifically for this purpose.

 

For more information check:

https://www.digitalocean.com/community/tutorials/http-1-1-vs-http-2-what-s-the-difference

https://http2.github.io/http2-spec/

Spring Framework – Hot swapping!!!

I have already worked with Spring on a couple of projects and seen several tutorials online, but only one of them showed me how to reload the changes without stopping and restarting the application each time.

Well, it’s super easy. If you are using Maven, all you have to do is add the following dependency:

<dependency> 
         <groupId>org.springframework.boot</groupId> 
         <artifactId>spring-boot-devtools</artifactId> 
</dependency>

That’s it!

You can find more information at https://docs.spring.io/spring-boot/docs/2.3.3.RELEASE/reference/html/using-boot-devtools.html

 

Java 14 Feature: Records

Two weeks ago I finally became an Oracle Certified Java 11 developer. So I can finally enjoy taking a look at the new features and improvements of the latest versions.

The introduction of records in Java 14 (as a preview feature) is a very interesting thing that you cannot miss.

A plain old Java object with getters, setters, equals, hashCode, etc. requires quite a few lines of code:

package de.demo;

public class Person {

	String firstname;
	String lastname;
	int age;
	
	public Person(String firstname, String lastname, int age) {
		this.firstname = firstname;
		this.lastname = lastname;
		this.age = age;
	}

	public String getFirstname() {
		return firstname;
	}

	public void setFirstname(String firstname) {
		this.firstname = firstname;
	}

	public String getLastname() {
		return lastname;
	}

	public void setLastname(String lastname) {
		this.lastname = lastname;
	}

	public int getAge() {
		return age;
	}

	public void setAge(int age) {
		this.age = age;
	}

	@Override
	public int hashCode() {
		final int prime = 31;
		int result = 1;
		result = prime * result + age;
		result = prime * result + ((firstname == null) ? 0 : firstname.hashCode());
		result = prime * result + ((lastname == null) ? 0 : lastname.hashCode());
		return result;
	}

	@Override
	public boolean equals(Object obj) {
		if (this == obj)
			return true;
		if (obj == null)
			return false;
		if (getClass() != obj.getClass())
			return false;
		Person other = (Person) obj;
		if (age != other.age)
			return false;
		if (firstname == null) {
			if (other.firstname != null)
				return false;
		} else if (!firstname.equals(other.firstname))
			return false;
		if (lastname == null) {
			if (other.lastname != null)
				return false;
		} else if (!lastname.equals(other.lastname))
			return false;
		return true;
	}
	

}

But with the new syntax it’s as simple as writing:

public record Person (String firstName, String lastName, int age) {}

This one line of code can be placed in a Person.java file, just like a class.

You can also add static fields and methods and overload constructors within the braces, but you don’t need to implement toString, equals, hashCode, the accessors or the canonical constructor with all the fields (records are immutable, so there are no setters).

Then you can just use it as if it were a POJO:

public static void main(String[] args) {

	Person person = new Person("laura", "liparulo", 36);

	System.out.println("Age: " + person.age());
	System.out.println("First name: " + person.firstName());
	System.out.println("Last name: " + person.lastName());

	System.out.println(person);
}

This code would print:

Age: 36
First name: laura
Last name: liparulo
Person[firstName=laura, lastName=liparulo, age=36]
—————————————————————
As we can see in the Javadoc (https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/lang/Record.html), the Record class is an abstract class with equals, hashCode and toString as abstract methods.

If you are using Eclipse, make sure you have the Support package for version 14 installed:
https://marketplace.eclipse.org/content/java-14-support-eclipse-2020-03-415

Stellar payment network concepts – distributed ledger and consensus protocol

Last week I found the time to dive into the world of cryptocurrencies. After reading a bit, I decided to buy some Ethereum and Stellar Lumens.

Stellar Lumens are still very cheap and the project itself is interesting. So while waiting and hoping that the price will go up, let´s get started with some Stellar development too.

 

DISTRIBUTED LEDGER

Stellar is a payment network based on the exchange of its currency Lumens.

Many people have heard about blockchain, which is a specific way of implementing a distributed ledger by using chains of blocks containing transaction data.

But a ledger is something more generic.

Think of a distributed ledger as a database spread across multiple locations and multiple participants, to avoid having a single point of failure and to benefit from a “community” to perform transactions or any other data transfer.

Every entry sent through a distributed ledger has a unique cryptographic signature.

We can think of blockchain technologies as a subset of distributed ledger technologies.

Distributed ledgers are controlled by the participants and therefore allow more transparency.

The Stellar network maintains and replicates the distributed ledger on every Stellar server node.

The Stellar ledger can be seen as a distributed exchange order book that stores the users’ transactions.

With a distributed ledger sending money becomes closer to sending an email: fast and very cheap!

 

STELLAR CONSENSUS PROTOCOL

The Stellar Consensus Protocol (SCP) is an implementation of the Federated Byzantine Agreement (FBA) model. The goal is to provide a secure consensus mechanism.

The key properties of SCP are:

  • low latency
  • digital signatures and hash families
  • decentralized control
  • flexible trust

A quorum of trusted parties (also called Consortium) is required to validate and secure the transaction over the network.

The consensus for every transaction is achieved by verifications and updates among servers.

Users can configure the set of parties they want to trust.

You can read more about the SCP protocol in the official documentation: https://www.stellar.org/developers/guides/concepts/scp.html

A simplified version is also available here: http://www.scs.stanford.edu/~dm/blog/simplified-scp.html
