Spring – search through entities with ExampleMatchers

In the org.springframework.data.domain package you can find useful classes for searching the database quickly (provided you are using a Spring Data JPA repository interface).

The Example and ExampleMatcher classes will do the trick.

For example, if you’re looking for albums in a table:

public List<Album> search(AlbumSearchDTO search) {
    Album probe = new Album();
    if (StringUtils.hasText(search.value())) {
        probe.setTitle(search.value());
        probe.setArtist(search.value());
        probe.setDescription(search.value());
    }
    Example<Album> example = Example.of(probe, //
        ExampleMatcher.matchingAny() //
            .withIgnoreCase() //
            .withStringMatcher(StringMatcher.CONTAINING));
    return repository.findAll(example);
}

You can pass the Example directly to the repository’s findAll method, and it will return the rows that contain the search string anywhere in the matched columns.
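To get a feel for what happens under the hood, the statement that ends up being issued for a search value like 'abba' looks roughly like this (table and column names assumed from the Album entity; the exact SQL depends on your dialect and naming strategy):

SELECT a.*
FROM album a
WHERE lower(a.title) LIKE '%abba%'
   OR lower(a.artist) LIKE '%abba%'
   OR lower(a.description) LIKE '%abba%';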

Check it out!

Spring Data – Optimistic locking with @Version

Spring Data provides pessimistic locking as its default locking mechanism.

This means the database record is locked exclusively until the operation completes. That approach is prone to deadlocks and requires keeping a dedicated database connection for the duration of the operation.

Optimistic locking comes into play if you have a high-volume system and you don’t want to maintain a database connection for the whole session. The client doesn’t hold a lock, and each access to the resource can use a different connection from the connection pool.

Optimistic locking can be achieved with timestamps or versioning.

If you want to use it with Spring, you can use the @Version annotation and add an additional attribute in your entity or base entity class, like:

@MappedSuperclass
public abstract class Person {

    @Version
    private Long version;

    // other attributes, getters and setters

    public Long getVersion() {
        return version;
    }

    // add setter
}

In the database table you also need to specify the version column:

ALTER TABLE person ADD COLUMN version int NOT NULL DEFAULT 0;

If you execute a PUT request (i.e. update the table row), a consistency (and isolation) check tells you whether you are working on the newest version of the record.

If your request carries an older version, you will get an error: Spring throws an ObjectOptimisticLockingFailureException, which usually surfaces as an HTTP 500 Internal Server Error unless you handle it.
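Under the hood the version check is just part of the UPDATE statement Hibernate issues; a rough sketch, assuming a couple of illustrative columns on the person table:

UPDATE person
SET first_name = ?, last_name = ?, version = ?  /* new version = old version + 1 */
WHERE id = ? AND version = ?;                   /* the version originally read */

If another transaction has bumped the version in the meantime, the WHERE clause matches zero rows and the exception above is thrown.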

PL/SQL transactional single-row locking with cursors

Oracle already provides an automatic exclusive locking mechanism for the rows you update. However, you can override it to tune behaviour (for example, if you need consistent data and/or exclusive access for the duration of a whole, more complex transaction).

When your application requires consistent data for the duration of the transaction, not reflecting changes by other transactions, you can achieve transaction-level read consistency by using explicit locking, read-only transactions, serializable transactions, or by overriding default locking.

Row locking at the transaction level can be achieved with the “SELECT FOR UPDATE” statement. The lock is released only after a commit or rollback.

If you want to lock single rows in Oracle PL/SQL you can use cursors, pointing at the rows you want to lock.

The following example shows you how to lock single rows:

CREATE OR REPLACE PROCEDURE LOCK_ORDER_ENTRY
(id_number IN number, system_user IN varchar2)
IS
/* "order_row" is a variable to store the row found by the cursor select statement */
order_row order_queue%rowtype;
cursor c1 is
    SELECT *
    FROM order_queue
    WHERE id=id_number
    FOR UPDATE NOWAIT;
BEGIN
/* first of all you need to open the cursor: this is when the row locks are acquired */
OPEN c1;
/* then you fetch the rows into the variable, one at a time */
   LOOP
      FETCH c1 INTO order_row;
      EXIT WHEN c1%notfound;
      /* USER is a reserved word in Oracle, so the "processed by" column needs another name */
      UPDATE order_queue SET processed=1, processed_by=system_user WHERE CURRENT OF c1;
   END LOOP;
/* then you need to close the cursor */
CLOSE c1;
/* the lock is released by the commit (or a rollback) */
COMMIT;
END LOCK_ORDER_ENTRY;

The “select for update” statement has two possible variants:
– FOR UPDATE NOWAIT, which raises an error immediately if a row is already locked by another session.
– FOR UPDATE SKIP LOCKED, which speeds up execution by skipping the rows that are already locked.

If you need to lock a single row in the database you don’t need a loop.
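A minimal sketch of the single-row case, assuming the same order_queue table as above: a plain SELECT ... INTO ... FOR UPDATE NOWAIT takes the lock (raising ORA-00054 if another session already holds it), and the commit releases it.

DECLARE
  order_row order_queue%ROWTYPE;
BEGIN
  SELECT *
  INTO order_row
  FROM order_queue
  WHERE id = 42
  FOR UPDATE NOWAIT;

  UPDATE order_queue SET processed = 1 WHERE id = 42;

  COMMIT;
END;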

MySQL CHECK constraint alternative: triggers!

Yesterday I discovered the powerful Hibernate check constraint (@org.hibernate.annotations.Check), which can be added directly to your entity class like this:

@Entity
@XmlRootElement
@Check(constraints = "age IS NOT NULL")
public class Person {

    String firstname;
    String lastName;
    Integer age;

    // valid code
}

Unfortunately, as you can read in the official MySQL docs, “the CHECK clause is parsed but ignored by all storage engines”.
In other words, MySQL 5.x doesn’t support the SQL CHECK constraint.
It means that if you are using JPA and Hibernate on MySQL you can’t take advantage of the @Check annotation!

In the project I am working on we could successfully export the schema with the Maven goal “hibernate4:export”: the check constraint was added to the CREATE TABLE statement. So, if your DBMS supports it, you get the job done in a very elegant way.
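For reference, the exported CREATE TABLE looks roughly like this (the id column and the types are assumed, since they are not shown in the entity above):

create table Person (
    id bigint not null,
    firstname varchar(255),
    lastName varchar(255),
    age integer,
    primary key (id),
    check (age IS NOT NULL)
);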

The workaround I found by googling a bit is not as elegant, but it achieves the same result: a trigger, like the following (simple) one:

DELIMITER //
CREATE TRIGGER check_age
BEFORE INSERT ON person
FOR EACH ROW
BEGIN
  IF NEW.age IS NULL THEN
    SIGNAL SQLSTATE '45000'
    SET MESSAGE_TEXT = 'Age cannot be null';
  END IF;
END //
DELIMITER ;
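A quick test shows the trigger at work (column names taken from the entity above). Keep in mind this only guards INSERTs; a similar BEFORE UPDATE trigger is needed to fully emulate a CHECK constraint:

INSERT INTO person (firstname, lastName, age) VALUES ('Ada', 'Lovelace', 36);
/* OK */
INSERT INTO person (firstname, lastName) VALUES ('John', 'Doe');
/* ERROR 1644 (45000): Age cannot be null */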

MySQL has been bought by Oracle. Is this silent ignoring a strategy to make people migrate to the Oracle DBMS? I am afraid it is. Corporation games, my friends!

MySQL Stored Procedure for calculating total file_size in MByte or GByte

I’m currently working on my Master’s degree thesis and I’m designing a demo database for a mammography archive. The purpose is to provide an archive whose clinical cases can be consulted through a user interface on a portal page.
I must create a database for a Liferay portal. Liferay is a very powerful open-source CMS / web application framework written in Java that lets you define the database and the queries in an XML file.
Before developing the portlet I need to study the situation. I’m analyzing and evaluating the query formulations, as I will have to develop GUIs to consult the image (echography) archive and upload new cases.

I’ve started with some SQL scripts.
I’ve made an SQL script to create and populate the mammography database. Then I’ve made another script to add the archive zip file size of each clinical case, and a stored procedure to update the volume table (which groups clinical cases of the same category) with the sum of the cases’ zip file sizes.
You can download the ER diagram by clicking on the image above (powered by MySQL Workbench!).

The script to create and populate the DB is:

/*Mammography demo database */

CREATE DATABASE `mammography`;
/* access database */
USE `mammography`;

CREATE TABLE scanner (
id_scanner INT(2) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
scanner_type VARCHAR(100) NOT NULL
);

CREATE TABLE volume (
id_volume INT(10) NOT NULL AUTO_INCREMENT PRIMARY KEY,
volume_name VARCHAR(100) NOT NULL,
cases INT(10), /* filled by a query */
total_size VARCHAR(10), /*filled by a query calculating the Gb*/
id_scanner INT(2) NOT NULL,
bits INT(10) NOT NULL,
resolution VARCHAR(50) NOT NULL, /* e.g. 42 micron; might be calculated */
overview TEXT,
FOREIGN KEY(id_scanner) REFERENCES scanner (id_scanner)
);

/* 1:N* - one volume - n cases*/
CREATE TABLE case_archive (
id_case INT(10) NOT NULL AUTO_INCREMENT PRIMARY KEY,
case_name VARCHAR(100) NOT NULL,
id_volume INT(10) NOT NULL REFERENCES volume(id_volume) ,
ics_version VARCHAR(50) NOT NULL,
date_case DATE NOT NULL,
age_patient INT(3) NOT NULL,
film VARCHAR(50) ,
film_type VARCHAR(50) ,
density INT(10) NOT NULL,
digitizer_dba INT(10) NOT NULL,
notes TEXT,
zip_folder_link TEXT NOT NULL,
FOREIGN KEY (id_volume) REFERENCES volume(id_volume)
);

/* 1 image type : N images*/
CREATE TABLE image_type (
id_image_type INT(2) NOT NULL AUTO_INCREMENT PRIMARY KEY,
image_type VARCHAR(100)
);

/* 1:N* - one case - n images*/
CREATE TABLE image (
id_image INT(10) NOT NULL AUTO_INCREMENT PRIMARY KEY,
id_case INT(10) NOT NULL,
image_name VARCHAR(150),
image_type INT(2) NOT NULL,
link TEXT NOT NULL,
bits_pixel INT(10) NOT NULL, /*might be null*/
resolution VARCHAR(50) NOT NULL,
left_cc_lines int(10) NOT NULL,
pixels_per_line INT(10) NOT NULL,
/* the following columns might be null in normal cases - demo: only 2 columns */
total_abnormality INT(10),
abnormality INT(2),
FOREIGN KEY (id_case) REFERENCES case_archive (id_case),
FOREIGN KEY(image_type) REFERENCES image_type (id_image_type)
);

/*————————————————————————————*/

/* 3 scanner types */
insert into scanner(scanner_type) values('DBA');
insert into scanner(scanner_type) values('HOWTECK');
insert into scanner(scanner_type) values('LUMISYS');

/* 2 Volumes, 2 Cases X Volume*/
insert into volume (volume_name, id_scanner, bits, resolution, overview) values ('normal_01', 1, 16,'42 microns', 'overview notes ');
insert into volume (volume_name, id_scanner, bits, resolution, overview) values ('cancer_01', 3, 12, '50 microns', 'overview notes');

/* Image type*/
insert into image_type(image_type) values('Left_cc 0');
insert into image_type(image_type) values('Right_cc 1');
insert into image_type(image_type) values(' Left_mcl 2');
insert into image_type(image_type) values(' Right_mcl 3');

/* 2 Cases*/
insert into case_archive (case_name, id_volume, ics_version, date_case, age_patient, density, digitizer_dba, notes, zip_folder_link) values ('A-0002-1', 1, 1.0,'2008-06-13', 63, 2, 21,'n/a','/home/laura/project/case/normal1_case0001.zip' );
insert into case_archive (case_name, id_volume, ics_version, date_case, age_patient, density, digitizer_dba, notes, zip_folder_link) values ('A-0002-1', 1, 1.0,'2007-04-23', 43, 3, 23,'n/a','/home/laura/project/case/normal1_case0002.zip' );
insert into case_archive (case_name, id_volume, ics_version, date_case, age_patient, density, digitizer_dba, notes, zip_folder_link) values ('A-0002-1', 2, 1.0,'2009-07-12', 56, 2, 20,'n/a','/home/laura/project/case/cancer1_case0001.zip' );
insert into case_archive (case_name, id_volume, ics_version, date_case, age_patient, density, digitizer_dba, notes, zip_folder_link) values ('A-0002-1', 2, 1.0,'2007-03-12', 48, 3, 19,'n/a','/home/laura/project/case/cancer1_case0002.zip' );

/* 2 images x case*/
insert into image(id_case, image_name, image_type, link, bits_pixel, resolution, left_cc_lines, pixels_per_line, total_abnormality, abnormality) values ( 1, 'A_0002_1.LEFT_CC.LJPEG',1,'/home/laura/Arbeitsfläche/Project/case/normal1_case0001/A_0002_1.LEFT_CC.LJPEG',16,42,4349, 1979, 234, 1);
insert into image(id_case, image_name, image_type, link, bits_pixel, resolution, left_cc_lines, pixels_per_line, total_abnormality, abnormality) values ( 1, 'A_0002_1.RIGHT_CC.LJPEG',2,'/home/laura/Arbeitsfläche/Project/case/normal1_case0001/A_0002_1.RIGHT_CC.LJPEG',12,32,4229, 1959, 214, 2);
insert into image(id_case, image_name, image_type, link, bits_pixel, resolution, left_cc_lines, pixels_per_line, total_abnormality, abnormality) values ( 2, 'A_0003_1.LEFT_CC.LJPEG',1,'/home/laura/Arbeitsfläche/Project/case/normal1_case0002/A_0003_1.LEFT_CC.LJPEG',24,62,2359, 1629, 267, 1);

insert into image(id_case, image_name, image_type, link, bits_pixel, resolution, left_cc_lines, pixels_per_line, total_abnormality, abnormality) values ( 2, 'A_0003_1.RIGHT_CC.LJPEG',2,'/home/laura/Arbeitsfläche/Project/case/normal1_case0002/A_0003_1.RIGHT_CC.LJPEG',16,42,4349, 1979, 234, 1);
insert into image(id_case, image_name, image_type, link, bits_pixel, resolution, left_cc_lines, pixels_per_line, total_abnormality, abnormality) values ( 3, 'C_0001_1.LEFT_MLO.LJPEG',3,'/home/laura/Arbeitsfläche/Project/case/cancer1_case0001/C_0001_1.LEFT_MLO.LJPEG',24,46,3649, 2979, 734, 5);
insert into image(id_case, image_name, image_type, link, bits_pixel, resolution, left_cc_lines, pixels_per_line, total_abnormality, abnormality) values ( 3, 'C_0001_1.RIGHT_MLO.LJPEG',4,'/home/laura/Arbeitsfläche/Project/case/cancer1_case0001/C_0001_1.RIGHT_MLO.LJPEG',32,47,7349, 1363, 734, 1);

insert into image(id_case, image_name, image_type, link, bits_pixel, resolution, left_cc_lines, pixels_per_line, total_abnormality, abnormality) values ( 4, 'C_0001_2.LEFT_MLO.LJPEG',3,'/home/laura/Arbeitsfläche/Project/case/cancer1_case0002/C_0002_1.LEFT_CC.LJPEG',32,92,3549, 3779, 374, 1);
insert into image(id_case, image_name, image_type, link, bits_pixel, resolution, left_cc_lines, pixels_per_line, total_abnormality, abnormality) values ( 4, 'A_0001_2.LEFT_CC.LJPEG',4,'/home/laura/Arbeitsfläche/Project/case/cancer1_case0002/C_0002_1.RIGHT_CC.LJPEG',16,42,4349, 1979, 234, 1);

————————————————————————-

Then I’ve added a column for the zip file size of each case (total_size is already defined in the volume table above):

ALTER TABLE case_archive ADD COLUMN size_zip_MB FLOAT NOT NULL DEFAULT 0;

And populated it:
UPDATE case_archive SET size_zip_MB=200.1 where id_case=1;
UPDATE case_archive SET size_zip_MB=460.2 where id_case=2;
UPDATE case_archive SET size_zip_MB=2154.5 where id_case=3;
UPDATE case_archive SET size_zip_MB=225.3 where id_case=4;

To calculate the sum of the zip files’ size of the cases for each row of the volume table  I’ve made the following procedure:

DELIMITER //
CREATE PROCEDURE volume_total_size()
BEGIN
DECLARE i INT(10);
DECLARE l INT(10);
SET i=0;
SET l=(SELECT COUNT(*) FROM volume);

WHILE i<l DO
  SET i=i+1;
  UPDATE volume SET total_size=
    (SELECT IF(SUM(case_archive.size_zip_MB)>1024.0,
               CONCAT(TRUNCATE(SUM(case_archive.size_zip_MB)/1024,1),'GB'),
               CONCAT(SUM(case_archive.size_zip_MB),'MB'))
     FROM case_archive
     WHERE id_volume=i)
  WHERE id_volume=i;
END WHILE;

/* show the result */
SELECT * FROM volume;

END //
DELIMITER ;

To invoke the procedure, type in the MySQL console:
mysql> CALL volume_total_size();

You can also copy the command in a new *.sql file and launch the script.
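With the demo data above, a quick check after calling the procedure should give something like the following (660.3 MB for the two normal cases, and 2379.8 MB, i.e. about 2.3 GB, for the two cancer cases; the exact decimals may vary slightly because size_zip_MB is a FLOAT):

mysql> CALL volume_total_size();
mysql> SELECT id_volume, volume_name, total_size FROM volume;
/* expected (roughly):
   1 | normal_01 | 660.3MB
   2 | cancer_01 | 2.3GB
*/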

I’ve tested the whole “project” with MySQL 5.5.

I have decided to share this because I have spent several hours trying to understand how to update the volume table with the sum of the file size.
Notice that in the ‘case_archive’ table the ‘size_zip_MB’ column is a FLOAT holding the size in MB, while in ‘volume’ the ‘total_size’ column is a VARCHAR(10): MySQL 5.5 casts between the types automatically 🙂

Perl script for a Hollywood Sqlite database

Let’s analyze the following Perl scripts: “cinema.pl”, which creates and populates a simple database about movies, and “query_hollywood.pl”, which executes a simple SELECT-FROM-WHERE query on it.

—————————————cinema.pl————————————————————

#!/usr/bin/perl -w

use DBI;
use strict;

# CONFIG VARIABLES
my $platform = "SQLite";
my $database = "hollywood.db";
# user and password are ignored by SQLite (the database is just a file)
my $user = "";
my $pw = "";

# DATA SOURCE NAME
my $dsn = "dbi:$platform:dbname=$database";

# PERL DBI CONNECT
my $dbh = DBI->connect($dsn, $user, $pw) or die "Cannot connect: $DBI::errstr";

# creating the "hollywood" tables
$dbh->do("CREATE TABLE IF NOT EXISTS actors(aid integer primary key, name text)");

$dbh->do("CREATE TABLE IF NOT EXISTS movies(mid integer primary key, title text)");

$dbh->do("CREATE TABLE IF NOT EXISTS actors_movies(id integer primary key, mid integer, aid integer)");

# populating the "actors" table
$dbh->do("INSERT INTO actors(name) VALUES('Philip Seymour Hofman')");
$dbh->do("INSERT INTO actors(name) VALUES('Kate Shindle')");
$dbh->do("INSERT INTO actors(name) VALUES('Kelci Stephenson')");
$dbh->do("INSERT INTO actors(name) VALUES('Al Pacino')");
$dbh->do("INSERT INTO actors(name) VALUES('Gabrielle Anwar')");
$dbh->do("INSERT INTO actors(name) VALUES('Patricia Arquette')");
$dbh->do("INSERT INTO actors(name) VALUES('Gabriel Byrne')");
$dbh->do("INSERT INTO actors(name) VALUES('Max von Sydow')");
$dbh->do("INSERT INTO actors(name) VALUES('Ellen Burstyn')");
$dbh->do("INSERT INTO actors(name) VALUES('Jason Miller')");

# populating the "movies" table

$dbh->do("INSERT INTO movies VALUES(1,'Capote')");
$dbh->do("INSERT INTO movies VALUES(2,'Scent of a woman')");
$dbh->do("INSERT INTO movies VALUES(3,'Stigmata')");
$dbh->do("INSERT INTO movies VALUES(4,'Exorcist')");
$dbh->do("INSERT INTO movies VALUES(5,'Hamsun')");

# populating the "actors_movies" table (columns: id, mid, aid)
$dbh->do("INSERT INTO actors_movies VALUES(1,1,1)");
$dbh->do("INSERT INTO actors_movies VALUES(2,2,1)");
$dbh->do("INSERT INTO actors_movies VALUES(3,3,1)");
$dbh->do("INSERT INTO actors_movies VALUES(4,4,2)");
$dbh->do("INSERT INTO actors_movies VALUES(5,5,2)");
$dbh->do("INSERT INTO actors_movies VALUES(6,6,3)");
$dbh->do("INSERT INTO actors_movies VALUES(7,7,3)");
$dbh->do("INSERT INTO actors_movies VALUES(8,8,4)");
$dbh->do("INSERT INTO actors_movies VALUES(9,9,4)");
$dbh->do("INSERT INTO actors_movies VALUES(10,10,4)");
$dbh->do("INSERT INTO actors_movies VALUES(11,8,5)");

print qq{"Hollywood" database created!\n};

$dbh->disconnect;

—————————————————query_hollywood.pl———————————————

#!/usr/bin/perl -w

use DBI;
use strict;

# CONFIG VARIABLES
my $platform = "SQLite";
my $database = "hollywood.db";
# user and password are ignored by SQLite (the database is just a file)
my $user = "";
my $pw = "";

# DATA SOURCE NAME
my $dsn = "dbi:$platform:dbname=$database";

# PERL DBI CONNECT
my $dbh = DBI->connect($dsn, $user, $pw) or die "Cannot connect: $DBI::errstr";

# THE QUERY
my $query = "SELECT actors.name, movies.title
             FROM actors, movies, actors_movies
             WHERE actors.aid = actors_movies.aid
               AND movies.mid = actors_movies.mid";

# EXECUTE THE QUERY
my $rows = $dbh->selectall_arrayref($query);

printf "%-23s %-23s\n", "Actor", "Movie";
print  "======================  ====================\n";

foreach my $row (@$rows) {
    my ($name, $title) = @$row;

    # print each actor/movie pair, left-aligned in 23-character columns
    printf "%-23s %-23s\n", $name, $title;
}

$dbh->disconnect;

—————————————————————————————–

Make the perl scripts executable like:
$ sudo chmod +x script.pl

And run them like:
$ ./script.pl

The result of the query is:

Actor                   Movie
======================  ====================
Philip Seymour Hofman   Capote
Philip Seymour Hofman   Scent of a woman
Philip Seymour Hofman   Stigmata
Kate Shindle            Exorcist
Kate Shindle            Hamsun

 

Very nice script, isn’t it?
I guess the DBI deserves further attention…

Sqlite database backup: the .dump command

Let’s go on mastering our sqlite3 knowledge.
An SQLite database is really just a file: a backup is as simple as copying that one file.

The .dump command prints out the SQL statements needed to recreate the whole database. (A leaner trace of what you typed in the sqlite3 shell is kept in the hidden file ~/.sqlite_history in your home directory.)

$ sqlite3 test.db “.dump”

The result is:
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE n(id INTEGER PRIMARY KEY, f TEXT, l TEXT);
INSERT INTO "n" VALUES(1,'linus','torvalds');
INSERT INTO "n" VALUES(2,'richard','stallman');
COMMIT;

If you want to back up the database to a new file, you can redirect the output to it (e.g. “dbbackup”):
$ sqlite3 test.db '.dump' > dbbackup

The contents of the backup can be modified.
For example you can filter it and pipe it into another database. Below, table “n” is renamed to “people” with sed (matching only the table name, so the data itself is left untouched), and the result is piped into the “computer_pioneers” database.

$ sqlite3 test.db ".dump" | sed -e 's/TABLE n(/TABLE people(/' -e 's/INTO "n"/INTO "people"/' | sqlite3 computer_pioneers.db

The content is the same:
$ sqlite3 computer_pioneers.db "select * from people;"
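If everything worked, the new database contains the same two rows:

1|linus|torvalds
2|richard|stallman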

First steps in Sqlite and Perl

SQLite is useful for creating a single-file database (.db extension) to store application and configuration data; it is used by Microsoft, Skype, Banshee and many smartphone applications.

A .db file contains a whole database: it is managed by a C software library called SQLite.

This is a great opportunity to learn about SQLite and the Perl scripting language… great, don’t you think?
Let’s kill two birds with one stone!
Install sqlite3 and follow me.
On Ubuntu, Debian, etc.:
$ sudo apt-get install sqlite3
On openSUSE:
$ sudo zypper install sqlite3
On Red Hat, CentOS or Fedora:
$ sudo yum install sqlite

BASH EXAMPLE
Let’s create an example: a database called “test.db”, built from the (Unix) shell with the following command:

:~$ sqlite3 test.db "create table if not exists user(id INTEGER PRIMARY KEY, name TEXT, surname TEXT);"

Let’s fill it:
:~$ sqlite3 test.db "insert into user(name, surname) values('linus','torvalds');"

:~$ sqlite3 test.db "insert into user(name, surname) values('richard','stallman');"

To check it out:
:~$ sqlite3 test.db "select * from user;"

The result is:
1|linus|torvalds
2|richard|stallman

Alternatively you can create a database by entering the sqlite3 environment:
$ sqlite3 test.db
SQLite version 3.0.8
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>

In the sqlite3 environment you can use plain SQL statements to work with your database (in this case test.db).
You can also change a few default settings to make the output of the commands look better, for example the .mode column and .headers settings. They last until you exit the SQLite shell or change them to something else.
sqlite> .mode column
sqlite> .headers on
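With those settings the select from the earlier example becomes much more readable:

sqlite> select * from user;
id          name        surname
----------  ----------  ----------
1           linus       torvalds
2           richard     stallman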

To see all the tables and views type:
sqlite> .tables

To see the databases that are currently open use the .databases command. It will show the main and temp databases and where they are on the system:
sqlite> .databases

To exit type .quit or .exit:
sqlite> .quit

Anyway it’s often handier to work from the shell prompt directly, which also lets you use the commands in bash scripts.
In this example the prompt is in your home directory. Check it with the pwd command if you’re not sure…

PERL EXAMPLE

Make a file called test.pl:
$ touch test.pl

Use an editor (e.g. gedit under Ubuntu and openSUSE), or cat as shown further below, to fill it with the following script:


#!/usr/bin/perl -w

use DBI;
use strict;

my $db = DBI->connect("dbi:SQLite:dbname=test.db", "", "") or die "couldn't connect to db: " . DBI->errstr;

$db->do("CREATE TABLE IF NOT EXISTS user (id INTEGER PRIMARY KEY, name TEXT, surname TEXT)");
$db->do("INSERT INTO user(name, surname) VALUES ('linus', 'torvalds')");
$db->do("INSERT INTO user(name, surname) VALUES ('richard', 'stallman')");

my $all = $db->selectall_arrayref("SELECT * FROM user");

foreach my $row (@$all) {
    my ($id, $name, $surname) = @$row;
    print "$id|$name|$surname\n";
}

$db->disconnect;

Alternatively you can write the file directly from the shell:
$ cat > test.pl
.. perl script content…

Press Ctrl+D (end of input) to save the file and exit.
Now check the file content:
$ cat test.pl

Make the perl script executable:
$ sudo chmod +x test.pl
Then run the perl script simply:
$ ./test.pl
The result is:
1|linus|torvalds
2|richard|stallman

In the same folder where the script lies, you can find a file called “test.db”. That’s the Sqlite database. Just one file.

Yeah! So we’ve learned some Perl and SQLite, right?
