Archive for the ‘Programming’ Category

PL/SQL transactional single-row locking with cursors

Oracle already provides an automatic exclusive locking mechanism for the rows you want to update. However, you can override it, for example when you need consistent data and/or exclusive access for the duration of a whole, more complex transaction.

When your application requires consistent data for the duration of the transaction, not reflecting changes by other transactions, you can achieve transaction-level read consistency by using explicit locking, read-only transactions, serializable transactions, or by overriding default locking.

Row locking at transaction level can be achieved with the SELECT ... FOR UPDATE statement. The lock is released only after a commit or rollback.

If you want to lock single rows in Oracle PL/SQL you can use cursors, pointing at the rows you want to lock.

The following example shows how to lock single rows:

CREATE OR REPLACE PROCEDURE LOCK_ORDER_ENTRY 
(id_number IN number, system_user IN varchar2)
IS
/* "order_row" is a variable to store the row found by the cursor select statement */
order_row order_queue%rowtype;
cursor c1 is
    SELECT * 
    FROM order_queue 
    WHERE id=id_number
    FOR UPDATE NOWAIT;
BEGIN
/* opening the cursor acquires the row locks; NOWAIT raises ORA-00054 if another session already holds one */
OPEN c1;
/* then you fetch the content into the variable */
   LOOP
      FETCH c1 INTO order_row;
      /* without this exit condition the loop would never end */
      EXIT WHEN c1%notfound;
      /* USER is a reserved word in Oracle, so the column is named locked_by here */
      UPDATE order_queue SET processed=1, locked_by=system_user WHERE CURRENT OF c1;
   END LOOP;
/* then you close the cursor */
CLOSE c1;
/* the commit releases the locks, so it must come after the last fetch */
COMMIT;
END LOCK_ORDER_ENTRY;

The SELECT ... FOR UPDATE statement has two useful variants:
– FOR UPDATE NOWAIT, which raises an error (ORA-00054) if a row is already locked by another session.
– FOR UPDATE SKIP LOCKED, which speeds up execution by skipping the rows that are already locked.

If you only need to lock a single row in the database you don't need a loop.
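As a sketch of the SKIP LOCKED variant against the same order_queue table (the processed = 0 filter is an assumption about how unprocessed rows are marked), each concurrent worker session locks a disjoint set of rows:

```sql
-- rows already locked by other sessions are silently skipped,
-- so several workers can drain the queue in parallel
SELECT *
FROM order_queue
WHERE processed = 0
FOR UPDATE SKIP LOCKED;
```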

Install Apache web server on Windows

To install apache on Windows, you can download the binary from:
http://www.apachehaus.com/cgi-bin/download.plx?dli=gUEJEMjNVWx4EVV9yUsZVTJVlUGRVYSFlUuB3T

Then you need to edit the httpd.conf file to change the path to the server root (SRVROOT). It should be something like:
Define SRVROOT "C:\Users\laura\httpd-2.4.12-x86-r2\Apache24"

If you are running Skype (which listens on port 80), there will be a conflict. In this case you need either to quit Skype or change the default Apache listening port in httpd.conf.
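For example, you can move Apache to another port by changing the Listen directive in httpd.conf (8080 here is just an arbitrary free port):

```apacheconf
# listen on 8080 instead of the default 80 to avoid the Skype conflict
Listen 8080
```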

If you need to enable Apache modules (if not yet enabled) that you might need for your activities, on Debian/Ubuntu you would use the "a2enmod" command, which doesn't exist on Windows. There are several Windows ports of it that you can find by googling, like:
http://aninternetpresence.net/program/
https://github.com/enderandpeter/win-a2enmod

Wildfly Standalone clustering

To set up a cluster of standalone servers you have two possibilities: nodes running on different machines (horizontal scaling) or nodes running on the same machine (vertical scaling).

To make a wildfly standalone cluster with 2 nodes on the same machine you can follow the steps below:

1) copy the standalone folder twice and rename the copies to "standalone1" and "standalone2"

2) create 2 scripts in the /bin folder, to avoid typing all the parameters on the command line each time.

script1.sh

#!/bin/sh
./standalone.sh -Djboss.server.base.dir=$JBOSS_HOME/standalone1 -Djboss.node.name=server1 -c standalone-full-ha.xml

script2.sh

#!/bin/sh
./standalone.sh -Djboss.server.base.dir=$JBOSS_HOME/standalone2 -Djboss.node.name=server2 -c standalone-full-ha.xml -Djboss.socket.binding.port-offset=100

To bind the public interface to a specific address, add the IP as a parameter like the following:
-b 192.168.1.1
For the management console it's something like:
-bmanagement=192.168.1.1

To bind to all available IPs set them to "0.0.0.0" like:
-b 0.0.0.0 -bmanagement=0.0.0.0

The high availability configuration has default multicast addresses set to 230.0.0.4.

They can also be changed in the socket-binding section in the standalone xml file.

Alternatively you can pass it as a parameter like: -u=230.0.1.2
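For reference, the multicast address sits in the socket-binding section of standalone-full-ha.xml; the jgroups-udp binding looks roughly like this (ports may differ between Wildfly versions):

```xml
<socket-binding name="jgroups-udp" port="55200"
    multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
```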

The port offset must be taken into account: if you need to run the CLI against the second instance, specify the offset management port:
./jboss-cli.sh --controller=localhost:10090 --connect

3) Set the HornetQ clustering username and password.

The cluster nodes must have the same user and password, otherwise you will get:

ERROR [org.hornetq.core.server] (default I/O-1) HQ224018: Failed to create session: HornetQClusterSecurityException[errorType=CLUSTER_SECURITY_EXCEPTION message=HQ119099: Unable to authenticate cluster user: HORNETQ.CLUSTER.ADMIN.USER]

To fix this, add the cluster username and password in the messaging:2.0 subsystem of every standalone xml configuration file, under the hornetq-server tag:

<clustered>true</clustered>
<cluster-user>clusteruser</cluster-user>
<cluster-password>cluster-secret</cluster-password>

Otherwise you can set <clustered> to false if you don't need to cluster the messaging.

4) run the scripts in two different terminals (start with server 1).

5) deploy the app.
The clustering service will be initiated only if a cluster-enabled application is deployed.
In the web.xml of the application you need to add the distributable tag:

<distributable/>

You need to deploy the app in every server instance.

You can try it with the app by Arun Gupta called "clustering", which you can find at https://github.com/arun-gupta/wildfly-samples.
To make it a little simpler you can create a simple Maven project and copy the content into the webapp folder (https://github.com/arun-gupta/wildfly-samples/tree/master/clustering/http/src/main/webapp).
Then you can edit the pom.xml like the following:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.demo</groupId>
  <artifactId>demo-clustered-app</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>war</packaging>
  <name>Demo_Clustered_App</name>

  <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>7.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Run mvn package. In the target folder you will find the .war file, which you can manually deploy through each management console.
At the following URLs you will see that the node server names differ but the session data is shared:
http://localhost:8080/demo-clustered-app-0.0.1-SNAPSHOT/index.jsp
http://localhost:8180/demo-clustered-app-0.0.1-SNAPSHOT/index.jsp

Once you get your app deployed on both instances, in the log of server one you will see something like:

INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,shared=udp) ISPN000094: Received new cluster view: [server1/web|1] (2) [server1/web, server2/web]

Defining a port offset is not the only way to set up a cluster on the same machine.
You can also define multiple IP addresses on the same machine (multihoming).

Ubuntu 14.04 : launchpad.net NO_PUBKEY issue when running apt-get update

Launching sudo apt-get update you might get the NO_PUBKEY issue. Something like:

W: GPG error: http://ppa.launchpad.net trusty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY EFD5FA852F20733F

Well, it's very simple. Just execute the following (specifying the missing public key):

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EFD5FA852F20733F

And then run sudo apt-get update again. The error will disappear.

Hibernate inheritance: Discriminator Column vs Formula

Java allows you to take advantage of the domain model and think in terms of objects to elaborate technical solutions to real-world problems like transferring money, buying products online and so on. These domain objects are expected to be stored in and retrieved from a database, but storing domain objects in a relational database is not so straightforward: this is the object-relational impedance mismatch, because the objects in our application are not tables in a row-column format.
Hibernate handles the object-relational mismatch through inheritance strategies. The default one is the single table strategy. To achieve it you need to use discriminators.
You might add a discriminator column, but most times it's not a good choice, especially for legacy systems, because it takes an extra column (called "DTYPE" by default). So you might consider better solutions, like using the discriminator formula instead.

DiscriminatorFormula is an alternative to DiscriminatorColumn.
With DiscriminatorColumn Hibernate creates an additional column (called "DTYPE" by default) which contains the discriminator value.
You put the annotation on the superclass (which maps the real table by default):

    @Entity
    @Table(name = "features")
    @DiscriminatorColumn
    public class Features {
        //valid code
    }

DiscriminatorFormula allows you to check the database row content and “select” a subclass by the discriminator value. No additional (“dtype”) column is created.
You annotate the superclass with the formula, like:

    @Entity
    @Table(name = "features")
    @DiscriminatorFormula(
        "CASE WHEN num_value IS NOT NULL THEN 'NUMERIC' " +
        "WHEN txt_value IS NOT NULL THEN 'TEXT' END")
    public class Features {
        //valid code
    }

In the DiscriminatorFormula you just put some pure SQL to do what you need.

You can choose either of these two options; the subclasses are exactly the same in both cases.
The subclasses are annotated Java objects (with @Entity) but map no real table of their own; they are just useful to map columns in the table represented by the superclass.
In the subclasses you specify the discriminator value, for example:

    @Entity
    @DiscriminatorValue("NUMERIC")
    public class NumericFeatures extends Features {

        private Double numValue;

        public Double getNumValue() {
            return numValue;
        }

        public void setNumValue(Double numValue) {
            this.numValue = numValue;
        }
        //valid code
    }
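For completeness, the text counterpart is symmetrical; here is a sketch, assuming the txtValue property should map to the txt_value column (made explicit via @Column):

```java
@Entity
@DiscriminatorValue("TEXT")
public class TextFeatures extends Features {

    // explicit mapping to the txt_value column of the features table
    @Column(name = "txt_value")
    private String txtValue;

    public String getTxtValue() {
        return txtValue;
    }

    public void setTxtValue(String txtValue) {
        this.txtValue = txtValue;
    }
}
```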

In the table called "features" you have both the "num_value" and "txt_value" columns, containing the corresponding values.
With DiscriminatorColumn you would additionally have either the "NUMERIC" or the "TEXT" value in the DTYPE column, alongside the "num_value" and "txt_value" columns.

If you don't specify the inheritance strategy, the default is "SINGLE_TABLE". The following annotation can be omitted if it's your chosen strategy:

     @Inheritance(strategy = InheritanceType.SINGLE_TABLE)

With or without this annotation you get one table called "features".

MySQL CHECK constraint alternative: triggers!

Yesterday I discovered the powerful Hibernate check constraint (@org.hibernate.annotations.Check), which can be added directly in your entity class like this:

@Entity
@XmlRootElement
@Check(constraints = "age IS NOT NULL")
public class Person{

String firstname;
String lastName;
Integer age;

// valid code
}

Unfortunately, MySQL 5.x doesn't enforce the SQL CHECK constraint: as you can read in the official MySQL docs, "the CHECK clause is parsed but ignored by all storage engines".
It means that if you are using JPA and Hibernate you can't take advantage of the @Check annotation!

In the project I am working on we could successfully export the schema with the maven command “hibernate4:export”. The check constraint was added in the create table statement. So if your DBMS supports it you get the job done in a very elegant way.

The way out I could find by googling a bit is not so elegant, but it allowed me to achieve the same result. I have just written a trigger, like the (simple) following:

DELIMITER //
CREATE TRIGGER check_age
BEFORE INSERT ON person
FOR EACH ROW
BEGIN
IF NEW.age IS NULL
THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'Age cannot be null';
END IF;
END//
DELIMITER ;
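You can verify the trigger with an insert that omits the age (the table and column names here are assumed from the entity above):

```sql
-- this insert is rejected by the trigger
INSERT INTO person (firstname, lastName) VALUES ('John', 'Doe');
-- fails with SQLSTATE '45000': Age cannot be null
```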

MySQL has been bought by Oracle. Is this silent ignoring a strategy to make people migrate to the Oracle DBMS? I am afraid it is. Corporation games, my friends!

AngularJS unit tests: http requests, mocks and spies…

Lately I have been unit testing a lot of AngularJS scripts and I have started working intensively with concepts like spies and mocks.
If the module you are testing depends on other modules, you might test your controller functions with mocks and spies standing in for the functions and services taken from the other modules.
But what if your controller function calls a service function that makes an HTTP request?
AngularJS provides $httpBackend to mock HTTP services, but if you provide your own mocks you won't be able to test the response easily.
Instead of providing a mock you can directly load the real module in each test like this:

beforeEach(function () {
    module('upload');
});

Then you can directly use the $httpBackend mock.
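With the real module loaded, a spec can flush canned responses through $httpBackend; here is a sketch assuming a hypothetical /upload endpoint and a plain $http call:

```javascript
it('posts the file and exposes the server reply', inject(function ($httpBackend, $http) {
    // the URL and payload are hypothetical examples
    $httpBackend.expectPOST('/upload').respond(200, { status: 'ok' });

    var result;
    $http.post('/upload', { name: 'file.txt' }).then(function (response) {
        result = response.data.status;
    });

    // flush() delivers the fake response synchronously in the test
    $httpBackend.flush();
    expect(result).toBe('ok');
}));
```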

AngularJS: “Error: Cannot find module ‘esprima’” with npm and grunt

You might get the following error when running npm or grunt commands:

Error: Cannot find module 'esprima'
    at Function.Module._resolveFilename (module.js:338:15)
    at Function.Module._load (module.js:280:25)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (C:\Users\liparulol\git\node_modules\grunt\node_modules\js-yaml\lib\js-yaml\type\js\function.js:4:15
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)

A solution might be running "npm cache clean" and then "npm install". Then you can run the other grunt tasks again.

APACHE POI Excel: Alternate green and white rows

 public void alternateGreenAndWhiteRows(XSSFSheet sheet) {

        SheetConditionalFormatting sheetCF = sheet.getSheetConditionalFormatting();
        ConditionalFormattingRule rule1 = sheetCF.createConditionalFormattingRule("MOD(ROW(),2)");
        PatternFormatting fill1 = rule1.createPatternFormatting();
        fill1.setFillBackgroundColor(IndexedColors.LIGHT_GREEN.index);
        fill1.setFillPattern(PatternFormatting.SOLID_FOREGROUND);

        CellRangeAddress[] regions = {
                CellRangeAddress.valueOf("A1:Z100")
        };

        sheetCF.addConditionalFormatting(regions, rule1);

    }
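A hypothetical call site could look like this (the file names are placeholders and the method above is assumed to be in scope):

```java
// load an existing workbook, apply the alternating banding, and save a copy
try (FileInputStream in = new FileInputStream("report.xlsx");
     XSSFWorkbook workbook = new XSSFWorkbook(in)) {
    alternateGreenAndWhiteRows(workbook.getSheetAt(0));
    try (FileOutputStream out = new FileOutputStream("report_banded.xlsx")) {
        workbook.write(out);
    }
}
```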

Primefaces 4.0 DataExporter: How to add columns to excel report only

With the rendered attribute set to false (rendered="false") it doesn't work at the moment (it has been reported as a bug lately: https://github.com/primefaces-extensions/primefaces-extensions.github.com/issues/209), but you can achieve the same effect with CSS like this:

<p:column headerText="#{msg['book.coverType']}" style="display:none;">
         <h:outputText value="#{book.coverType}"/>
</p:column>

Another solution is to add a post process method in your managed bean and add the column from there:

public void postProcessXLS(Object document) {
    log.debug("post processing Excel");
    HSSFWorkbook workbook = (HSSFWorkbook) document;

    addCoverType(workbook, books);

}

You can write the method like:

public void addCoverType(HSSFWorkbook workbook, List<Book> books) {

    HSSFSheet sheet = workbook.getSheetAt(0);
    HSSFCell cell = null;

    //row 0 is the header (not automatically added by primefaces)
    //add a fifth cell to each data row
    for (int i = 1; i < sheet.getLastRowNum() + 1; i++) {
        sheet.getRow(i).createCell(4);
        cell = sheet.getRow(i).getCell(4);
        cell.setCellValue(books.get(i - 1).getCoverType());
    }
    log.debug("cover type added");
}

If you make the columns sortable, the sort order will be automatically respected in the export as well! 😀
