Archive for May, 2015

Wildfly Apache mod_cluster: Error MODCLUSTER000042

I was working on the Apache mod_cluster configuration for Wildfly load balancing when I ran into the following error:

[Server:server-four] 16:55:00,093 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to localhost/127.0.0.1:6666, configuration will be reset: MEM: Can't read node

The error occurs when the servers are “idle”, that is, when no request has come from the browser for longer than the keep-alive timeout setting.
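A mitigation that is often suggested (an assumption on my part, adjust it to your setup) is to relax the keep-alive settings on the Apache side in httpd.conf, so that idle connections survive longer between requests:

# keep idle connections open longer, so mod_cluster's periodic
# STATUS messages do not hit a connection Apache already closed
KeepAlive On
KeepAliveTimeout 60
# 0 means an unlimited number of requests per connection
MaxKeepAliveRequests 0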

PL/SQL Transactional single-row locking with cursors

Oracle already provides an automatic exclusive locking mechanism for the rows you want to update. However, you can override it when you need more control (for example, if you need consistent data and/or exclusive access for the duration of a whole, more complex transaction).

When your application requires consistent data for the duration of the transaction, not reflecting changes by other transactions, you can achieve transaction-level read consistency by using explicit locking, read-only transactions, serializable transactions, or by overriding default locking.

Row locking at the transaction level can be achieved with the “SELECT … FOR UPDATE” statement. The lock is released only by a commit or rollback.

If you want to lock single rows in Oracle PL/SQL you can use cursors, pointing at the rows you want to lock.

The following example shows you how to lock and update single rows:

CREATE OR REPLACE PROCEDURE LOCK_ORDER_ENTRY 
(id_number IN number, system_user IN varchar2)
IS
/* "order_row" is a variable to store the rows found by the cursor select statement */
order_row order_queue%rowtype;
cursor c1 is
    SELECT * 
    FROM order_queue 
    WHERE id = id_number
    FOR UPDATE NOWAIT;
BEGIN
/* opening the cursor acquires the row locks; NOWAIT makes it fail immediately if another session already holds them */
OPEN c1;
/* fetch each locked row into the variable */
   LOOP
      FETCH c1 INTO order_row;
      /* without this exit condition the loop would never terminate */
      EXIT WHEN c1%notfound;
      /* "processed_by" replaces the original column name "user", which is a reserved word in Oracle */
      UPDATE order_queue SET processed = 1, processed_by = system_user WHERE CURRENT OF c1;
   END LOOP;
CLOSE c1;
/* the locks are released by the commit */
COMMIT;
END LOCK_ORDER_ENTRY;
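A call looks like this (the order id and user name are hypothetical):

BEGIN
   /* locks, updates and commits the matching entries of the order_queue table */
   LOCK_ORDER_ENTRY(42, 'laura');
END;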

The “select for update” statement has two possible variants:
– FOR UPDATE NOWAIT, which raises an error immediately if a row is already locked by another session.
– FOR UPDATE SKIP LOCKED, which speeds up execution by skipping the rows that are already locked.
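SKIP LOCKED is handy for queue consumers, where competing sessions should each grab different rows. A minimal sketch against the same hypothetical order_queue table:

DECLARE
   cursor c_next is
      SELECT *
      FROM order_queue
      WHERE processed = 0
      FOR UPDATE SKIP LOCKED;
   order_row order_queue%rowtype;
BEGIN
   OPEN c_next;
   /* rows locked by other sessions are silently skipped */
   FETCH c_next INTO order_row;
   CLOSE c_next;
END;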

If you only need to lock a single row in the database, you don't need a loop at all.
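A SELECT … INTO with FOR UPDATE is enough (again a sketch against the hypothetical order_queue table):

DECLARE
   order_row order_queue%rowtype;
BEGIN
   /* locks the single matching row; NOWAIT raises an error if it is already locked */
   SELECT * INTO order_row
   FROM order_queue
   WHERE id = 42
   FOR UPDATE NOWAIT;

   UPDATE order_queue SET processed = 1 WHERE id = 42;
   /* the commit releases the lock */
   COMMIT;
END;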

Install Apache web server on Windows

To install Apache on Windows, you can download the binary from:
http://www.apachehaus.com/cgi-bin/download.plx?dli=gUEJEMjNVWx4EVV9yUsZVTJVlUGRVYSFlUuB3T

Then you need to edit the httpd.conf file to change the path to the server root (SRVROOT). It should look something like:
Define SRVROOT "C:\Users\laura\httpd-2.4.12-x86-r2\Apache24"
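After that you can start the server from the Apache24\bin folder, or register it as a Windows service (the -k options are standard for Apache on Windows; the service name is your choice):

httpd.exe -k install -n "Apache2.4"
httpd.exe -k start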

If you are running Skype (which listens on port 80), there will be a conflict. In this case you need either to quit Skype or to change the default Apache listening port in httpd.conf.
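For example, to move Apache to port 8080 (any free port works), change the Listen directive in httpd.conf:

Listen 8080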

If you need to enable Apache modules that are not yet enabled, you would normally use the “a2enmod” command, which is not included in the Windows distribution.

There are several ports of it for Windows that you can find by googling, like:
http://aninternetpresence.net/program/
https://github.com/enderandpeter/win-a2enmod
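Alternatively, without any extra tooling, you can enable a module by uncommenting its LoadModule line directly in httpd.conf, for example:

LoadModule rewrite_module modules/mod_rewrite.so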

Wildfly Standalone clustering

To set up a cluster of standalone servers you have two possibilities: nodes running on different machines (horizontal scaling) or nodes running on the same machine (vertical scaling).

To make a wildfly standalone cluster with 2 nodes on the same machine you can follow the steps below:

1) copy the standalone folder twice and rename the copies to “standalone1” and “standalone2”

2) create 2 scripts in the /bin folder, to avoid typing all the parameters on the command line each time.

script1.sh

#!/bin/sh
./standalone.sh -Djboss.server.base.dir=$JBOSS_HOME/standalone1 -Djboss.node.name=server1 -c standalone-full-ha.xml

script2.sh

#!/bin/sh
./standalone.sh -Djboss.server.base.dir=$JBOSS_HOME/standalone2 -Djboss.node.name=server2 -c standalone-full-ha.xml -Djboss.socket.binding.port-offset=100

To bind the public interface to a specific address, add the IP as a parameter like the following:
-b 192.168.1.1
For the management console it's something like:
-bmanagement=192.168.1.1

To bind to all available IPs, set both to “0.0.0.0”, like:
-b 0.0.0.0 -bmanagement=0.0.0.0

The high availability configuration has the default multicast address set to 230.0.0.4.

It can also be changed in the socket-binding section of the standalone XML file.

Alternatively you can pass it as a parameter, like: -u=230.0.1.2

The port offset must also be taken into account: if you need to run the CLI against the second instance, specify the offset management port:
./jboss-cli.sh --controller=localhost:10090 --connect

3) Set the HornetQ clustering user name and password.

The cluster nodes must have the same user and password, otherwise you will get:

ERROR [org.hornetq.core.server] (default I/O-1) HQ224018: Failed to create session: HornetQClusterSecurityException[errorType=CLUSTER_SECURITY_EXCEPTION message=HQ119099: Unable to authenticate cluster user: HORNETQ.CLUSTER.ADMIN.USER]

To fix this, add the cluster user name and password in the messaging:2.0 subsystem of the standalone XML configuration file you use, under the hornetq-server tag:

<clustered>true</clustered>
<cluster-user>clusteruser</cluster-user>
<cluster-password>cluster-secret</cluster-password>

Otherwise, you can set clustered to false if you don't need to cluster the messaging.
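The same attributes can also be set through the CLI (a sketch, assuming the hornetq-server is named "default", as in the shipped standalone-full-ha.xml):

./jboss-cli.sh --connect --command="/subsystem=messaging/hornetq-server=default:write-attribute(name=cluster-user,value=clusteruser)"
./jboss-cli.sh --connect --command="/subsystem=messaging/hornetq-server=default:write-attribute(name=cluster-password,value=cluster-secret)"
./jboss-cli.sh --connect --command=":reload"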

4) run the scripts in two different terminals (start with server 1).

5) deploy the app.
The clustering service will be initiated only if a cluster-enabled application is deployed.
In the web.xml of the application you need to add the distributable tag:

<distributable/>

You need to deploy the app in every server instance.

You can try it with the app by Arun Gupta called “clustering”, which you can find at https://github.com/arun-gupta/wildfly-samples.
To make it a little simpler, you can create a simple Maven project and copy the content into the webapp folder (https://github.com/arun-gupta/wildfly-samples/tree/master/clustering/http/src/main/webapp).
Then you can edit the pom.xml like the following:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>de.demo</groupId>
  <artifactId>demo-clustered-app</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>war</packaging>
  <name>Demo_Clustered_App</name>

  <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>7.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Run mvn package. In the target folder you will find the .war file, which you can manually deploy through each instance's management console.
At the following URLs you will see that the node server names differ, but the session data is shared:
http://localhost:8080/demo-clustered-app-0.0.1-SNAPSHOT/index.jsp
http://localhost:8180/demo-clustered-app-0.0.1-SNAPSHOT/index.jsp
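If you prefer the command line to the management console, the CLI deploy command works as well (note the port offset for the second instance):

./jboss-cli.sh --connect --command="deploy target/demo-clustered-app-0.0.1-SNAPSHOT.war"
./jboss-cli.sh --controller=localhost:10090 --connect --command="deploy target/demo-clustered-app-0.0.1-SNAPSHOT.war"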

Once you get your app deployed on both instances, in the log of server one you will see something like:

INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,shared=udp) ISPN000094: Received new cluster view: [server1/web|1] (2) [server1/web, server2/web]

Defining a port offset is not the only way to set up a cluster on the same machine.
You can also define multiple IP addresses on the same machine (multihoming).
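A sketch of that approach (assuming a Linux host; the loopback addresses are only examples):

# give the second node its own address instead of a port offset
sudo ip addr add 127.0.0.2/8 dev lo
./standalone.sh -Djboss.server.base.dir=$JBOSS_HOME/standalone2 -Djboss.node.name=server2 -b 127.0.0.2 -bmanagement=127.0.0.2 -c standalone-full-ha.xml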
