Golang – Slices are Arrays!

Go has both arrays (fixed length) and slices (variable length, built on top of arrays). Most of the time you will work with slices:

package main

import "fmt"

func main() {

	// variable length: this is a slice
	slice := []int{1, 2, 3, 4}

	slice = append(slice, 4)

	fmt.Println(slice)

	// fixed length: this is an array, not a slice
	fixed_slice := [4]int{1, 2, 3, 4}
	// cannot append to an array - compiler error!
	//fixed_slice = append(fixed_slice, 4)

	fmt.Println(fixed_slice)

	//another way to define a slice
	another_slice := make([]int, 10)

	//initially filled with zeros
	fmt.Println(another_slice)

	//add a different number in the first position
	another_slice[0] = 45

	fmt.Println(another_slice)
}
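
One thing worth noting (a small extra sketch, standard Go behaviour rather than part of the original snippet): slicing a fixed-length array gives you a slice that shares the array's storage, which shows how the two types relate.

package main

import "fmt"

func main() {
	// a fixed-length array
	arr := [4]int{1, 2, 3, 4}

	// slicing the array yields a slice backed by the same storage
	s := arr[1:3]

	// len is 2 (elements 2 and 3), cap is 3 (from index 1 to the end of the array)
	fmt.Println(s, len(s), cap(s))

	// writing through the slice modifies the underlying array
	s[0] = 42
	fmt.Println(arr)
}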

Golang – using pointers

Golang also has pointers, but you don't need to write any manual memory allocation code.

It’s very straightforward.

The ampersand & takes the address of a value (creating a pointer), and the asterisk * dereferences the pointer, giving you the value it points to.

Simply:

package main

import (
	"fmt"
)

func main() {
	s := "this is a string"

	string_pointer := &s

	fmt.Println(s)

	// dereference the pointer with the asterisk
	fmt.Println("reference by pointer: ", *string_pointer)
	// printing the pointer itself shows the memory address
	fmt.Println("address: ", string_pointer)
}

Error Handling with Golang

In Golang you have to handle errors explicitly.

For example, you can use the built-in panic function:

package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {

	var sum int

	// skip the first element of os.Args (the program name)
	for _, a := range os.Args[1:] {

		i, err := strconv.Atoi(a)

		if err != nil {
			panic(fmt.Sprintf("Invalid value : %v", err))
		}

		sum += i

	}

	fmt.Printf("Sum = %v\n", sum)

}

As you can see, if you run the code with non-numeric parameters you get an error:

go run error_handling.go 1 a c

Then you get:

panic: Invalid value : strconv.Atoi: parsing "a": invalid syntax

goroutine 1 [running]:
main.main()
        C:/Users/lliparul/Desktop/go/3_ERROR_HANDLING/error_handling.go:19 +0x159
exit status 2
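
Panicking is not the only option. A more common pattern (my own hedged variation on the example above, not from the course) is to return an error value and let the caller decide how to react:

package main

import (
	"fmt"
	"os"
	"strconv"
)

// sumArgs returns an error instead of panicking,
// leaving the decision about how to react to the caller.
func sumArgs(args []string) (int, error) {
	sum := 0
	for _, a := range args {
		i, err := strconv.Atoi(a)
		if err != nil {
			return 0, fmt.Errorf("invalid value: %v", err)
		}
		sum += i
	}
	return sum, nil
}

func main() {
	sum, err := sumArgs(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("Sum = %v\n", sum)
}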

Inspired by the course “Getting started with Go Programming Language” by Matthew Spaulding (Packt Publishing)

Introduction to Golang

Go was developed at Google and is meant to be a better systems programming language than C and C++.

It’s compiled and not interpreted.

It’s fast and suitable for concurrency.

Go supports vendoring for dependency management: once a dependency is downloaded, its code is included (vendored) inside the project.
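
For example, with recent Go versions that use modules, the dependency sources can be copied into a vendor/ directory inside the project (the module path below is just an illustration):

go mod init example.com/hello
go mod vendor

After this, the code of the downloaded dependencies lives under ./vendor and can be committed together with the project.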

Hello world program

A simple Hello world program (a file called “hello_world.go”) looks like this:

package main

import "fmt"

func main() {
	fmt.Println("Hello, World!")
}

As you can see, the main function in the main package is the entry point of a Go application.

Once Go is installed on your machine, you can simply run:

go run hello_world.go

The fmt package is a fundamental standard-library package for formatted I/O (printing strings, etc.).

Assignments, data types, control structures

To assign values, the colon-equals (:=) and equals (=) symbols are used. You can declare data types like int explicitly, but you can also omit them and let the compiler infer them.

a := 5  
var b int = 4

Strings always use double quotes.

fmt.Println("a is negative!")

Of course you have conditional clauses like if and switch:

if a < 0 {
	fmt.Println("a is negative!")
} else if a > b {
	fmt.Println("a is bigger than b")
}

switch a {
case 10:
	fmt.Println("It's 10")
case 5:
	fmt.Print("It's five")
}

And three types of for loops.

Simple loop:


for i := 0; i < 4; i++ {
	fmt.Println(i)
}

Range loop:

for _, i := range []int{1, 2, 3, 4} {
	fmt.Println(i)
}

Infinite loop:

for {
	fmt.Println("this is infinite!")
}
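
Putting the snippets above together into a single runnable program (the values of a and b are just sample values, and the infinite loop is left out so the program terminates):

package main

import "fmt"

func main() {
	a := 5
	var b int = 4

	if a < 0 {
		fmt.Println("a is negative!")
	} else if a > b {
		fmt.Println("a is bigger than b")
	}

	switch a {
	case 10:
		fmt.Println("It's 10")
	case 5:
		fmt.Println("It's five")
	}

	// simple loop
	for i := 0; i < 4; i++ {
		fmt.Println(i)
	}

	// range loop
	for _, i := range []int{1, 2, 3, 4} {
		fmt.Println(i)
	}
}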

Ansible – Setting up Handlers

Using handlers, we can define tasks in Ansible that are executed only when another task reports a change on the machine.

Let’s get back to the nginx playbook used in the previous article and modify the installation task: we add a handler that runs apt-get update (update_cache) on the Linux machine, triggered only when the install task reports a change:


---
- hosts: 127.0.0.1
  connection: local
  become: true
  tasks:
        - name: install nginx
          apt:
                pkg: nginx
                state: present
          notify:
                  - run update
        - name: start nginx
          service:
             name: nginx
             state: started
             enabled: true
  handlers:
        - name: run update
          apt:
                update_cache: yes

This way, if the nginx package is not already present, the install task reports a change and the run update handler is also executed.

You can try deleting the packages first:

sudo apt-get purge nginx nginx-common

By running the playbook again, you will see the “run update” handler being executed too.
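
The invocation is the same one used for the first playbook:

ansible-playbook nginx.yml --connection=local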

Ansible – Let’s write our first playbook

Ansible is a configuration management tool that has been around for a while.

So far I never had the chance to try it out, but now I have found some time for it!

Let’s take a quick look together.

Assuming that you have installed Ansible on your Linux machine, let’s create our first playbook.

We can try to install and run an nginx server.

Let’s create a file called “nginx.yml” with the following content:

---
- hosts: 127.0.0.1
  connection: local
  become: true
  tasks:
        - name: install nginx
          apt:
                pkg: nginx
                state: present

        - name: start nginx
          service:
                name: nginx
                state: started
                enabled: true

Save the file and then run:

ansible-playbook nginx.yml --connection=local

This will print the tasks being executed and a final play recap.

So your first playbook has been run!

If you now run “which nginx” you will get the path where nginx is now installed.

You can see that nginx is running by visiting the localhost address in the browser.

And also by running in terminal the following command:

systemctl --type=service --state=running | grep nginx

nginx is now a service running on Linux!

You can stop it anytime, by running:

> sudo service nginx stop

Kubernetes – kubectl config – admin configuration

The “kubectl config” command allows you to do many things, like setting credentials or changing the context.
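
A few other common subcommands (the context name is taken from the config shown below):

$ kubectl config get-contexts
$ kubectl config current-context
$ kubectl config use-context kubernetes-admin@kubernetes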

For example, you can get a human-readable version of the config file right in your console by running:

$ kubectl config view

You get:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.17.0.21:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

As you can see, there's a CA certificate, although its data is omitted in this view.

You can find the same information in full in the file /etc/kubernetes/admin.conf, whose content looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ESXhOakUzTVRVeU9Gb1hEVE15TURJeE5ERTNNVFV5T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSzZXCmpUSC81bDQ5V3NXaUw0ZnN3T3VVS202NXBSZGZtMk4xMG0wVEY4cElnM2RycSsvVHlqZVA4dDEyc01QdGFkUFQKRWhqTFA4NlpzZmVsbWk4WjZsZEhEWHdlTHZNU3ZWT2ZXL0lMRU9BK281R2c4MnYwUEVUZXFhZXE0KzRMdEdhdgplQ3RmclRHVE1zRkE0c3NxSkxyV3BZRmNHR1Qyd0lpY0t6R3NRVjMrZFk0djd0UTVaN3B6V2Nac3BLUFBBS0lpCjFOQm4vTXJUejZnYkhJNkxlV0k0dCtjODdHcjBOL3krRkFWdzljSGh6SGhjNit4ekRIWXR3WHlyWi9TaVFkdk4KRXAvaGNTT2IzK010WDhQMFRRWnkyT0FKK0FKZk5YNFlKb0lGbXprTzVjVWRtakJyeEF5Yk1vd2ZNUzRmR0RhcApQWUsxM2Rsc0pjdnUzMURtcldzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHV0pTdmxiQ3IyUGNvYTgvaHJGeEVROWluZXkKRmw3ekVSMHo0bW1ocnFkbjU4OTM1UTZQOXNVbjlvUm1Wa212NlFvbWpZYnZuMnhIK1BRVEdRdmRoRWJLNVFaUAoyejc0OTEydHpTOUE3S0thcEpaYWJTZWozcm0xQ2xSckg5Tzc5M3JEYmxyZ1QrcVduRVR4RzZEK1k0QnFVODJiClUzL3MrbVVxQ2Q2allzMk12RURZRFpoYm1ORkJYY1BSUm85Tk9qdlZPdVhEU0RBOVFWMlp5VjhPT3FFY0htRzUKQ2d3YTg0OXMxdmdGNXRQR1ZoU1MxVnR2bHlVVjlPUDVzOVhQZnh3Vkx4Zk0rWUh5c093ajJJWXhMdlozZlpWcwpDaFdSb1E1T1JsdDBtN1FPU3F4TDlZZlUybytXSTViZWMwWVFIRDhNeUZEYnJNZG02YnZQdlh4R3lZaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.17.0.21:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJR2hadStiL3l0U293RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpBeU1UWXhOekUxTWpoYUZ3MHlNekF5TVRZeE56RTFNekJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW5DWjJhTXdRRWhQV0U5ZVoKUWxkZmFGN2JXam5Ld1YzOXJQU2RjMW51cVhWK01Ld3VNZXFOZnFzM0lVb3M3QU5BQ1E5L1FXUFVFbHdQY1E2SQpLWWRYOGdOMHpXT0lXSEY1N3pIam9ENHJsN3M0OTNUN2NpZFRJTVArRXpYV0l4bnp4WURTZFYrUFRoL3Y1TVIvCngyRFpKT1laOUJlTkY5dVg5ZXQ1TUtSOFdmcjMwM1EwR3Y2MUdiUTlXeHB6bDRUK2ZlbUpZbzRSSDBGNUxVNmEKYi9TRUJKUTNkTDAwYnA0bGx0RFIxK2s5U0dsNGpoVFQvbmZHN0ZUZ0pXR0tNRnNBVnV3YndmTnlLejdiNVZDdgpiMmErU3hpRXJtbUVNdFlxbE5VRm1RTXI0OS9XL1RRSVBOM1ZYdEo5U1J2QW8xYStTdDdoU0FSLzRMNEgxdlYzCno1eUJhUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHVmg5SmlVdkRYODJGWWFHTi9DY3JuTURNUG9rdk5uWTBqYwpCUndTNisyWVZoWll0YWtmOHluQ3dBMDU1dmR3dGFRUEc0SEZEd1lwSVFPMU5ERVJVRDNXRnJON2x5dGN5SFJLCjAvTGJrMjZtUjE5WENTOEZmeHFQYStJTzRnNm8rejl3ajZwWUVjZ29nNDZxejA2WWJZcGxwa01ISWN6M3BiQzcKTGY0UlRFTUd1SFFyNTJ0N2NmY1FsZFZtdFp2Tkh4eWhUdnE0aTlYS0E0ekdmeEliNGNBOGowNHdhU3BkTXpaSQpIZW8zbW1ORno4d3VER08wZlpTUHNqeWE0TTZwNmNsWDRZR2c4Y3RrbnFZZXAreENYd0xKRjV3M1dDUWZNTjZ1CmlTVDZvQVgxakhOWGZJTkgvcXdIOEc3OThXd1lVS2xEV05FenoySTZGZTBnQnBoOUNnbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBbkNaMmFNd1FFaFBXRTllWlFsZGZhRjdiV2puS3dWMzlyUFNkYzFudXFYVitNS3d1Ck1lcU5mcXMzSVVvczdBTkFDUTkvUVdQVUVsd1BjUTZJS1lkWDhnTjB6V09JV0hGNTd6SGpvRDRybDdzNDkzVDcKY2lkVElNUCtFelhXSXhuenhZRFNkVitQVGgvdjVNUi94MkRaSk9ZWjlCZU5GOXVYOWV0NU1LUjhXZnIzMDNRMApHdjYxR2JROVd4cHpsNFQrZmVtSllvNFJIMEY1TFU2YWIvU0VCSlEzZEwwMGJwNGxsdERSMStrOVNHbDRqaFRUCi9uZkc3RlRnSldHS01Gc0FWdXdid2ZOeUt6N2I1VkN2YjJhK1N4aUVybW1FTXRZcWxOVUZtUU1yNDkvVy9UUUkKUE4zVlh0SjlTUnZBbzFhK1N0N2hTQVIvNEw0SDF2VjN6NXlCYVFJREFRQUJBb0lCQUJUT3QwYWRzcUtHRENmVwp1ZkJ5TGg5OVRHT0lnZkVrY2pKQzRseEZwcHprTUMyU05CR292TFFkWXBwSUJSWnFGSFZnb0VkNVl3QVRUbWQxClZVV2JhR2V5ZURVS1IxZnB2eUxWbGd4N1hGbTZ0QXVkaGdnZkgxK2xYa3ozQnBMazJXdi9ENHhlY0gwcm1rNzAKdEVaQUxJOVRMakF0bHpxUG8zODhvb2lOMHBnMnVOU1NNb2FYMlRCNTRWVmZwQktNV1Vnc3lLVjN1bjQ0a1p4TAoxbTBDUmYwbjh0cE9ka3JiOFU3OUd4dmxjeFZ0VGwzeHd5YjBpMjBHZjc5b2dRMlJKMmprUXlFZTNtSFpEZzRLCm5EM1A5TWJPblpHYnIwS25HUEdMQUU2aENlQmhjaFQrVkx6aml1YTRIT3hLT3BlTGJoN2Y2WG9hd2F0UC92RnoKbTFZaHV4RUNnWUVBd3pUdndFNFU4VlF2K3FzUzhoeTZ3T2g2UTV5L09hQXYvUWVQM0RMRDcwa3RheXFPNHEvOQo4S3krY0dUcnVRWGFxZ2RXb3B3QytjcUxHTmpqM1grMFlEK09YUnpEWjNwS0drdGU0dExHWFRMMHQydzd4S21lCnAzWFdPWUc3VGhhMlNnV3ZsNFByQVZYQUdGN1dIdmxCMjFnWksrSmpCUzVQbDRkKzY1TlliQ1VDZ1lFQXpNZXoKOERjeDhVSHNjc2piN3BUSGZYR2NaR1lISUVGanZzaEhVMTFFcVVURTdPRXJLa2RIL3VQMldvQnQyVTNHRWVIKwpjbExYT2hObHJYSk1lTlRSZ3pjdWZXR0FZSUI4c0VZZHRCZkU2cXdiSFNJRGdxdUlmVlBTS0FCZ1ZEQlNqQ21hCllBbEtUc1FRNk9BWWRrSEFoOFhHTWxWTHhZRXFDcHRQMUlDWVd2VUNnWUJCOUYzWkxNaVlyV2cwbXA2aEhTSGcKVWV3VmNoMk9HOWg2OXBZZHdTMUFhUW9tZmsrZXVmWGozb25Oc2R1UFBUSWZackZYNmJ4N3AzcU4zNkpGMFlNaApUZmdNNzgzNTVYQ3FKN01ja2ppaGZyaitURng1V3hJVzVKS0lCWlJDa3h2dU1MT2VUdDBWSzJMbkdRdmhsVGd5Ckl6Y0dBOHlQRUpHbDBnc0pEMVM5M1FLQmdDbnJsOEpOQ2tSQklvNDg0QWp2SEFkcUFvSHJBUHB4YUdIZHpmd2EKMDd0ekFIMGRleUNraHZCKzZpZU1zdkFSSThMMWFJa3V2REIvV0FreHBNWDNpd3c1ZDZMakdzeW1nTi8wK28rcgpiZnc4VHpDZmplaHFCUDdpOGRxT2VTbkMycU1td1J0M3kyWmtiTEcvYmtLSEkwcW9DZkt6MDFjVElmQVo2eld0ClBZTWhBb0dBRUZsYUc0VE9MdGV3YlZoM0NXOTdLdm8wVGxhRDRPaldEbzFldEUxcFJtelRrWEdrZGdTZlNYTWkKeXR2MEkvZGJYcG13NVVoc0xXaXJRSVlVM1VjaEhKeml4VTNXVVZwYis4THpYTCtCREsxWjA5RDl5WVpRZDZUbwp3NE05ZDAvY2txcGhJeVJJbU10WElUSlFSQVhsYkdIbnlod0pCY3VKVmY5eGxwNU5wbEk9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
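
If you are on the control-plane node, you can also point kubectl directly at this file, for example via the KUBECONFIG environment variable (standard kubectl behaviour, not specific to this setup):

$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl get nodes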

Docker commands – Getting started

To run one of the official Docker images, you can simply use “docker run”:

$ docker run centos

The first time you run it, the image can't be found locally and must be downloaded.

Once downloaded, the image is saved in the local Docker cache and can be reused later.

If it's an official image, you only need to enter the image name.

But if it's taken from a user's repository, the format is <username>/<image>.

The 100% free software Linux image Trisquel, for example, is taken from the repository of a user called “kpengboy”.

In this case, you can also pull the image explicitly first:

$ docker pull kpengboy/trisquel

To see the list of the images downloaded so far, run:

$ docker images

If you also want to start a bash in the container, use “-it” for the interactive mode:

$ docker run -it kpengboy/trisquel

Then you can work with the bash shell inside the container.

By default, Docker will download the latest version available, but you can additionally specify the version (called a “tag”) you want to work with:

$ docker run redis:4.0

If you want to access a web app or, for example, a database running in a container, you reach it through a port. You can map container ports to host ports by adding them as parameters.

For example, if you want to reach mysql on a host port other than the default 3306, use the “-p” option with the format <host-port>:<container-port>:

$ docker run -p 52000:3306 mysql

To persist your data even when the container is removed, you can mount a host directory as a volume with “-v” (here mapping it onto MySQL's data directory):

$ docker run -v /opt/data/mysql:/var/lib/mysql mysql

You can pass environment variables to your container, which helps you configure the application without modifying the image (the value here is just an example):

$ docker run -e BACKGROUND_COLOR=blue simple-web-app

To see which containers are running:

$ docker ps

To see the stopped containers too:

$ docker ps -a

If you want to get the details of a specific container:

$ docker inspect vibrant_chatelet

You can check the log by running:

$ docker logs vibrant_chatelet

You can start a container in background mode with the “-d” option:

$ docker run -d kpengboy/trisquel

To remove a container (add the -f option if it's still running):

$ docker rm vibrant_chatelet

Or by container ID:

$ docker rm 0d6d64f9053c

To remove an image you need to make sure first that no container based on it is running. You might use “docker stop” and then remove the stopped container, as shown below.
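
For example, stopping and removing the container from the earlier examples:

$ docker stop vibrant_chatelet
$ docker rm vibrant_chatelet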

Then, to remove the image itself, run:

$ docker rmi kpengboy/trisquel

Kubernetes – Exposing Services and Endpoints

To access pods you have three options:

  • services (load balancing)
  • port forwarding (through localhost for development purpose)
  • ingress (network policy)

In this section we will inspect services and port forwarding.

SERVICES

To allow containers to communicate with each other and the outside world, Kubernetes allows you to expose pods as services.

Services are decoupled from deployments (they exist independently), and the only way to associate a service with a deployment is by using labels.

They can be used to access multiple deployments and are automatically “load balanced” by Kubernetes.

There are several types of services:

  • ClusterIP (for internal access)
  • NodePort (to allocate a specific port for external access)
  • LoadBalancer (for public cloud)
  • ExternalName (for DNS-level redirection)
  • services without a selector, for direct IP/port associations (e.g. databases or services in other namespaces)

To expose a deployment you can use the imperative way:

> kubectl expose deploy flask --port 5000
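
You can then verify that the service exists (and see the cluster IP and port it was given) with:

$ kubectl get svc flask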

A cluster IP service manifest might look like the following:

apiVersion: v1
kind: Service
metadata:
  name: jupiter-crew-svc
  namespace: jupiter
...
spec:
  clusterIP: 10.3.245.70
  ports:
  - name: 8080-80
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    id: jupiter-crew
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}




NODEPORT

NodePort and LoadBalancer services are used to expose the service outside the cluster.
In addition to creating the ClusterIP, a NodePort service allocates a port in the range 30000-32767 on every node of the cluster, routing it to the ClusterIP.
For example:

$ kubectl expose deploy flask --port 5000 --type=NodePort

The manifest looks like:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-10-14T18:19:07Z
  labels:
    run: flask
  name: flask
  namespace: default
  resourceVersion: "19788"
  selfLink: /api/v1/namespaces/default/services/flask
  uid: 2afdd3aa-b10c-11e7-b586-080027768e7d
spec:
  clusterIP: 10.0.0.39
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31501
    port: 5000
    protocol: TCP
    targetPort: 5000  
  selector:
    run: flask
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

If you don't specify the node port, a random port in that range will be allocated.
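
With the node port from the manifest above (31501), the service can then be reached from outside the cluster on any node's IP address, for example:

$ curl http://<node-ip>:31501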

Exposed services automatically register with the Kubernetes internal DNS, which makes it easier to access them by name rather than by IP. You can resolve a service name by using nslookup from inside a pod, like:

$ kubectl exec -it busybox2 -- nslookup nginx

HEADLESS SERVICE

If you set clusterIP to None in the spec, you get a headless service.
It is possible to create a service grouping that does not allocate an IP address or forward traffic, for cases where you want to definitively control which specific pods you connect and communicate with. This kind of service is called a headless service, and you request it by explicitly setting clusterIP to None within the service definition.

For example, a headless service might be:

kind: Service
apiVersion: v1
metadata:
  name: flask-service
spec:
  clusterIP: None
  selector:
    app: flask

For this kind of service, DNS entries will be created that point directly to the Pods backing the service, and they are automatically updated as Pods matching the selector come online (or disappear).
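
You can see this by resolving the headless service name from inside a pod, reusing the busybox pod from the nslookup example above; instead of a single cluster IP you get the IPs of the individual Pods:

$ kubectl exec -it busybox2 -- nslookup flask-service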

ENDPOINT

You can expose a remote system as a service internally by creating an endpoint for it.

For example, if you had a remote TCP service running on the internet at port 1976 at the IP address 1.2.3.4, you could define a Service and Endpoint to reference that external-to-kubernetes system:

kind: Service
apiVersion: v1
metadata:
  name: some-remote-service
spec:
  ports:
  - protocol: TCP
    port: 1976
    targetPort: 1976

This would work with the following Endpoints definition:

kind: Endpoints
apiVersion: v1
metadata:
  name: some-remote-service
subsets:
  - addresses:
      - ip: 1.2.3.4
    ports:
      - port: 1976

PORT FORWARDING

We can ask kubectl to set up a proxy that will forward all traffic from a local port we specify to a port on a Pod we name.

$ kubectl port-forward flask-1599974757-b68pw 5000:5000

Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000

This forwards any traffic sent to TCP port 5000 on your local machine to TCP port 5000 on the Pod flask-1599974757-b68pw.
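
While the port-forward command is running, the application can be reached locally, for example (assuming the flask Pod serves HTTP on port 5000):

$ curl http://localhost:5000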
