Tuesday, May 19, 2015

Using Spring Integration for creating scalable distributed services backed by JMS (part 1)

Spring Integration provides an implementation of the Enterprise Integration Patterns (as described in the book "Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions" by Gregor Hohpe and Bobby Woolf). While these patterns are mostly designed to integrate separate heterogeneous systems, Spring Integration allows you to use these patterns inside of your application. One useful mechanism provided by Spring Integration is an implementation of the "Request-Reply" pattern. In this pattern a command message is sent with a "ReplyTo" address. Using the "ReplyTo" address, the replier entity knows where to send the response.  This pattern is illustrated in the following diagram:


The plumbing to achieve this is somewhat complicated and tedious; however, Spring Integration takes care of all of the boilerplate code with just a few lines of XML. This provides a very powerful mechanism: you can define your interface and use it without any knowledge of the mechanism used to communicate with the actual implementation. Furthermore, you can change this mechanism as needed, just by changing your configuration.
In this post we're going to wire a service using a direct connection. The direct connection occurs inside the JVM, in the same thread, resulting in very little overhead. We're then going to configure that same service using a JMS queue. With this transport mechanism, the Requestor and the Replier are decoupled, and can live in different JVMs, potentially hosted on different machines. This also allows you to scale by adding more Replier machines to increase capacity.
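Before diving into Spring, the Request-Reply idea can be sketched in plain Java with in-memory queues (no Spring involved). Each request carries its own private reply queue, which plays the role of the "ReplyTo" address; all names here are made up for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RequestReplyDemo {

    // A request message carrying its own "ReplyTo" address (a private reply queue).
    static class Request {
        final String payload;
        final BlockingQueue<String> replyTo = new ArrayBlockingQueue<>(1);
        Request(String payload) { this.payload = payload; }
    }

    // Shared request channel, read by the replier.
    static final BlockingQueue<Request> requestChannel = new ArrayBlockingQueue<>(10);

    // Requestor side: send a request and block until the reply arrives
    // on the request's own reply queue.
    static String roundTrip(String payload) throws InterruptedException {
        Request req = new Request(payload);
        requestChannel.put(req);
        return req.replyTo.take();
    }

    public static void main(String[] args) throws InterruptedException {
        // Replier side: take a request, process it, answer on its ReplyTo queue.
        Thread replier = new Thread(() -> {
            try {
                Request req = requestChannel.take();
                req.replyTo.put("confirmed:" + req.payload);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        replier.start();

        System.out.println(roundTrip("order-100")); // prints "confirmed:order-100"
        replier.join();
    }
}
```

The replier never needs to know who asked; it just answers to whatever address the request carries. Spring Integration automates exactly this plumbing.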

Using Spring Integration to wire a service

In this first part, we're going to create an interface for a service. The implementation of this interface is not wired directly, but rather provided by Spring Integration, using the configuration in the Spring XML. The interface is very basic, as you can see below. When using Spring Integration it's best to design simple interfaces, with a few coarse-grained methods.

package com.javaprocess.examples.integration.interfaces;

import com.javaprocess.examples.integration.pojos.Order;
import com.javaprocess.examples.integration.pojos.OrderConfirmation;

public interface IOrderService {
    OrderConfirmation placeOrder(Order order);
}
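The Order and OrderConfirmation POJOs are not listed in the post; based on how they are used, a minimal sketch might look like the following (note they implement Serializable, which they will need once they travel over JMS in the second part):

```java
import java.io.Serializable;

// Minimal sketch of the example's POJOs, inferred from how they are used.
public class Order implements Serializable {
    private int id;
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
}

class OrderConfirmation implements Serializable {
    private long threadHandler; // id of the thread that processed the order
    public long getThreadHandler() { return threadHandler; }
    public void setThreadHandler(long threadHandler) { this.threadHandler = threadHandler; }
}
```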
This interface is going to be backed by the following class:
package com.javaprocess.examples.integration.impl;

import com.javaprocess.examples.integration.pojos.Order;
import com.javaprocess.examples.integration.pojos.OrderConfirmation;

public class OrderServiceHandler {


        public OrderConfirmation processOrder(Order order) {
            long threadId = Thread.currentThread().getId();
            System.out.println("Got order with id " + order.getId() + " on thread: " + threadId);
            OrderConfirmation ret = new OrderConfirmation();

            ret.setThreadHandler(threadId);
            return ret;
        }
}
You might notice that OrderServiceHandler does not implement IOrderService, and in fact, the method names do not even match. So how can we achieve such a level of decoupling? This is where Spring Integration comes into the picture:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xsi:schemaLocation="
   http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans.xsd
   http://www.springframework.org/schema/integration
   http://www.springframework.org/schema/integration/spring-integration.xsd
  ">
    
    <bean id="orderServiceHandler" class="com.javaprocess.examples.integration.impl.OrderServiceHandler"/> <!-- 1 -->

    <int:channel id="requestChannel"/> <!-- 2 -->

    <int:service-activator input-channel="requestChannel"
    ref="orderServiceHandler" method="processOrder" /> <!-- 3 -->

    <int:gateway id="orderService"
                 service-interface="com.javaprocess.examples.integration.interfaces.IOrderService"  <!-- 4 -->
                 default-request-channel="requestChannel"/> <!-- 5 -->

    <bean id="app" class="com.javaprocess.examples.integration.impl.App">
        <property name="orderService" ref="orderService"/> <!-- 6 -->
    </bean>
    
</beans>
Let's look at the relevant parts of this Spring XML configuration (spring-int.xml):

  1. We instantiate an OrderServiceHandler bean, which will execute our business logic. How you instantiate it is up to you; we declare it explicitly in this example for clarity.
  2. We declare a Spring Integration channel named "requestChannel". This channel carries request messages from the gateway to the service activator.
  3. The Service Activator is an endpoint that connects a channel to a Spring bean. In this case we're connecting the "requestChannel" channel to our business logic bean. In this XML block we also declare that the "processOrder" method will be invoked for this Service Activator.
  4. The Gateway is another endpoint, designed to expose a simple interface while hiding all of the Spring Integration complexity. In this case we expose the gateway as a bean named "orderService" implementing the IOrderService interface. This is the basic syntax for an interface with a single method; the XML gets more complicated with additional methods, as every method must be wired independently.
  5. The gateway uses "requestChannel" to send out requests.
  6. The "app" bean is our test application.
We explicitly specified a channel on which to send requests ("requestChannel"), but how does the replier know where to send the reply? By default the gateway sends the reply back to the requestor using a temporary anonymous channel, so Spring Integration gives us that functionality for free. If you have special needs, you can specify the reply channel explicitly. We'll explore this in future postings.
The code for the App class is shown below. It creates five threads and processes an order in each one, using the IOrderService bean provided by Spring Integration (remember, there is no implementation of IOrderService anywhere in our source code):
package com.javaprocess.examples.integration.impl;

import com.javaprocess.examples.integration.pojos.Order;
import com.javaprocess.examples.integration.pojos.OrderConfirmation;
import com.javaprocess.examples.integration.interfaces.IOrderService;

import java.util.ArrayList;
import java.util.List;

public class App {

    public class AppWorker implements Runnable {
        public void run() {
            long threadId=Thread.currentThread().getId();
            System.out.println("Requesting order processing on thread: " + threadId);
            Order order=new Order();
            order.setId(100);
            OrderConfirmation ret= orderService.placeOrder(order);
            System.out.println("Order was requested by " + threadId + " and processed by thread: " + ret.getThreadHandler());
        }
    }

    private IOrderService orderService;

    public void setOrderService(IOrderService orderService) {
        this.orderService = orderService;
    }

    public void run() {
        List<Thread> list = new ArrayList<Thread>();
        for (int i = 0; i < 5; i++) {
            Thread thread = new Thread(new AppWorker());
            thread.start();
            list.add(thread);
        }
        for (Thread thread : list) {
            try {
                thread.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
The example is provided as a Maven application, and can be compiled with the following command:
mvn compile
Once compiled, you can see it in action:
mvn exec:java -Dexec.mainClass="com.javaprocess.examples.integration.main.Main" -Dexec.args="direct"
The argument "direct" will cause the main method to use the Spring configuration we were examining before. The output will look something like this:
Running in client mode
Requesting order processing on thread: 13
Requesting order processing on thread: 14
Requesting order processing on thread: 15
Requesting order processing on thread: 16
Requesting order processing on thread: 17
Got order with id 100 on thread: 16
Got order with id 100 on thread: 15
Got order with id 100 on thread: 17
Got order with id 100 on thread: 13
Got order with id 100 on thread: 14
Order was requested by 16 and processed by thread: 16
Order was requested by 15 and processed by thread: 15
Order was requested by 14 and processed by thread: 14
Order was requested by 17 and processed by thread: 17
Order was requested by 13 and processed by thread: 13
So what is happening in this example?

  • Spring Integration has created a dynamic proxy that implements IOrderService. This proxy is exposed as a Spring bean named "orderService", and it is this bean that is used inside App.
  • The "placeOrder" method in "orderService" is connected, through the request channel (plus an anonymous reply channel), to the "processOrder" method of the "orderServiceHandler" bean.
  • The processing of the order occurs in the same thread in which it is requested.
This can be seen in the following diagram:
Service connected directly using Spring Integration
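Under the hood, the gateway is a java.lang.reflect.Proxy. The idea can be demonstrated without Spring: route every call on an interface to a differently named method on a plain bean. This is a simplified sketch, not Spring's actual code, and the interface, bean, and method names are invented for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Stand-in for IOrderService.
interface GreetingService {
    String greet(String name);
}

// Stand-in for OrderServiceHandler; note the different method name.
class GreetingHandlerBean {
    public String processGreeting(String name) { return "hello " + name; }
}

public class ProxyDemo {
    @SuppressWarnings("unchecked")
    static <T> T gateway(Class<T> iface, Object target, String methodName) {
        InvocationHandler h = (proxy, method, args) -> {
            // Route the interface call to the configured method on the target bean,
            // the way the <int:gateway>/<int:service-activator> pair does.
            Method m = target.getClass().getMethod(methodName, method.getParameterTypes());
            return m.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, h);
    }

    public static void main(String[] args) {
        GreetingService svc = gateway(GreetingService.class, new GreetingHandlerBean(), "processGreeting");
        System.out.println(svc.greet("world")); // prints "hello world"
    }
}
```

The caller only ever sees GreetingService; the bean behind it can be swapped, renamed, or moved behind a queue without the caller noticing — which is exactly what the next section does.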


Up to this point, Spring Integration seems to add very little value and a lot of extra XML configuration. The same result could have been achieved by having OrderServiceHandler implement IOrderService and wiring the bean directly (even easier if using annotations). However, the real power of Spring Integration will become evident in the next section.

Using JMS with Spring Integration

In this next section we're going to have two separate Spring configurations: one for the Requestor, and one for the Replier. They live in two different files, and we can run them in the same JVM or in separate JVMs.
First we will examine the file for the requestor (spring-int-client.xml):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-jms="http://www.springframework.org/schema/integration/jms"
       xsi:schemaLocation="
   http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans.xsd
   http://www.springframework.org/schema/integration
   http://www.springframework.org/schema/integration/spring-integration.xsd
   http://www.springframework.org/schema/integration/jms
   http://www.springframework.org/schema/integration/jms/spring-integration-jms.xsd
  ">

    <int:channel id="outChannel"></int:channel> <!-- 1 -->

    <int:gateway id="orderService"
                 service-interface="com.javaprocess.examples.integration.interfaces.IOrderService"
                 default-request-channel="outChannel"/> <!-- 2 -->

    <int-jms:outbound-gateway request-channel="outChannel"
                              request-destination="amq.outbound" extract-request-payload="true"/> <!-- 3 -->

    <bean id="app" class="com.javaprocess.examples.integration.impl.App">
        <property name="orderService" ref="orderService"/> <!-- 4 -->
    </bean>

</beans>

  1. The outgoing channel is declared. This channel is used by the gateway to send its requests.
  2. The gateway is set up in a similar fashion as in the first example. You might have noticed that it lacks a "default-reply-channel" setting; this is deliberate, and will be explained shortly.
  3. The JMS outbound gateway (notice we're using the int-jms namespace). This is the interface with the JMS broker: it will read messages from "outChannel" and send them to the "amq.outbound" JMS queue.
  4. The app is the same as used in the first example. This shows how you can wire the service differently without changing any code.
The counterpart to the requestor is the replier (spring-int-server.xml), shown below:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-jms="http://www.springframework.org/schema/integration/jms"
       xmlns:amq="http://activemq.apache.org/schema/core"
       xsi:schemaLocation="
   http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans.xsd
   http://www.springframework.org/schema/integration
   http://www.springframework.org/schema/integration/spring-integration.xsd
   http://www.springframework.org/schema/integration/jms
   http://www.springframework.org/schema/integration/jms/spring-integration-jms.xsd
   http://activemq.apache.org/schema/core
   http://activemq.apache.org/schema/core/activemq-core.xsd
  ">

    <amq:broker id="activeMQBroker" useJmx="false" persistent="false">
        <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:61616" />
        </amq:transportConnectors>
    </amq:broker> <!-- 1 -->

    <bean id="orderServiceHandler" class="com.javaprocess.examples.integration.impl.OrderServiceHandler"/> <!-- 2 -->

    <int:channel id="inChannel" /> <!-- 3 -->

    <int-jms:inbound-gateway request-channel="inChannel" request-destination="amq.outbound"
                             concurrent-consumers="10"/> <!-- 4 -->

    <int:service-activator input-channel="inChannel"
                           ref="orderServiceHandler" method="processOrder"/> <!-- 5 -->

</beans>
  1. A broker is configured to run in-process, in the JVM hosting the replier (server) component. This is suitable for a proof of concept such as this; in production you will want to run a standalone, properly tuned broker.
  2. The same OrderServiceHandler bean used in the first example provides the business logic.
  3. This channel will be used to send messages received from JMS to the service activator.
  4. A JMS inbound gateway will read messages from the queue. Of special interest here is the number of concurrent consumers, which can be tweaked to set the number of threads that will process incoming messages.
  5. The service activator is configured to listen on "inChannel". Just as we did with the gateway, we have omitted the output-channel attribute.
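The effect of concurrent-consumers can be illustrated without JMS: a fixed pool of worker threads drains a shared queue, and each message is handled by whichever consumer happens to be free. This is a plain-Java sketch of the idea, not Spring Integration code:

```java
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class ConcurrentConsumersDemo {

    // Drain 'messages' items from a shared queue with 'consumers' worker threads,
    // and report how many distinct threads ended up doing the work.
    static int process(int messages, int consumers) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < messages; i++) queue.put(i);

        Set<Long> workerIds = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        CountDownLatch done = new CountDownLatch(messages);
        for (int i = 0; i < messages; i++) {
            pool.submit(() -> {
                try {
                    queue.take();                                // receive one message
                    workerIds.add(Thread.currentThread().getId());
                    done.countDown();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        done.await();
        pool.shutdown();
        return workerIds.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(process(100, 10) + " threads consumed the queue");
    }
}
```

Raising the pool size (like raising concurrent-consumers) increases how many messages can be processed in parallel, at the cost of more threads on the replier.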

You can run both the requestor and the replier in the same JVM, by using this command:
mvn exec:java -Dexec.mainClass="com.javaprocess.examples.integration.main.Main" -Dexec.args="client server"
Running in this fashion will produce output similar to this:
Running in client mode
Requesting order processing on thread: 33
Requesting order processing on thread: 34
Requesting order processing on thread: 35
Requesting order processing on thread: 36
Requesting order processing on thread: 37
Got order with id 100 on thread: 19
Got order with id 100 on thread: 24
Got order with id 100 on thread: 26
Got order with id 100 on thread: 18
Got order with id 100 on thread: 25
Order was requested by 37 and processed by thread: 26
Order was requested by 36 and processed by thread: 18
Order was requested by 34 and processed by thread: 25
Order was requested by 33 and processed by thread: 19
Order was requested by 35 and processed by thread: 24
Closing context
Notice how the thread that made the request is different from the thread that replied to it.
We can take this a step further and run it in two different JVMs. First start the server; this is necessary since it also starts the JMS broker that the client will connect to. Invoked in this fashion, the server will run until manually terminated, fulfilling any requests put on the queue.
mvn exec:java -Dexec.mainClass="com.javaprocess.examples.integration.main.Main" -Dexec.args="server"
Once the server has started, open a new terminal or command prompt window, and start the client:
mvn exec:java -Dexec.mainClass="com.javaprocess.examples.integration.main.Main" -Dexec.args="client"
You will see the client making requests:
Running in client mode
Requesting order processing on thread: 13
Requesting order processing on thread: 14
Requesting order processing on thread: 15
Requesting order processing on thread: 16
Requesting order processing on thread: 17
Order was requested by 15 and processed by thread: 21
Order was requested by 16 and processed by thread: 24
Order was requested by 14 and processed by thread: 20
Order was requested by 13 and processed by thread: 17
Order was requested by 17 and processed by thread: 19
Closing context
You will also see the server fulfilling these requests:
Running in server mode
Got order with id 100 on thread: 19
Got order with id 100 on thread: 17
Got order with id 100 on thread: 20
Got order with id 100 on thread: 24
Got order with id 100 on thread: 21
If you start more than one client at a time, you will see the server replying to all the incoming requests.

So how is this working?

We defined a JMS queue as the destination for requests, but how does the replier know which channel to use for the response? The outbound gateway creates a temporary reply destination, and sets the "ReplyTo" JMS header of each request to point to it; the inbound gateway then sends the reply there. This can be conceptualized in the following diagram:
JMS backed service

Conclusion

So why would you use something like this? We'll point out some general advantages, but as usual, remember that the best solution depends on your particular needs.

  • This approach provides deep decoupling, allowing you to have a pool of servers in the back servicing requests. These servers can even be added on demand.
  • The scalability of messaging-based service buses tends to be better than that of other approaches such as load balancers, especially under heavy load. With a load balancer, a single request that takes too long could cause a server to drop out of the pool. With messaging, the available servers simply continue to service requests as fast as possible.
  • Spring Integration provides a great deal of extra functionality out of the box, such as the "Claim Check" pattern. We'll examine some of these possibilities in future posts. Other remoting frameworks (such as plain Spring remoting) do not provide this wealth of extra functionality.
  • This approach provides a great deal of flexibility. You can even wire different methods of a single interface using different transports (JMS, direct connection, REST web services).
  • The configuration can be easily changed by modifying the XML files. These files can even live outside of the application, which allows precise tuning without changing the application code, recompiling, or repackaging. In this respect, I favor XML configuration over Java configuration.

Source code

You can find the source code for this example on GitHub.
To check out the code, clone the following repository and switch to the part1 branch:
git clone https://github.com/aolarte/spring-integration-samples.git
git checkout branches/part1

Thursday, May 7, 2015

Retrieving a remote SSL cert and importing it in the JRE's keystore

In the daily work of a developer, it's often necessary to interact with external servers, for example to access web services. Many times these are test or development servers, some of which might be secured with a self-signed SSL certificate. Since these certificates are not trusted by default by the JRE, the best option is to import the certificate into your JRE's keystore. While the information needed to achieve this is definitely out there, I always find myself having to look at quite a few pages to pull all of the pieces together. This quick post is an attempt at a full, step-by-step guide to getting your certificate installed. While there are ways of retrieving a certificate using a web browser, that is sometimes not possible, for example when working on a remote server that lacks a GUI. The approach shown here only requires a terminal.

First, retrieve the server's certificate and convert it into a format suitable for importing into the JRE's keystore. Just replace the hostname and port you want to connect to:
echo | openssl s_client -connect hostname:port 2>/dev/null | openssl x509 > test_self_signed.cer
This will create a file that looks roughly like this:

-----BEGIN CERTIFICATE-----
MIIDezCCAmOgAwIBAgIEaejl6zANBgkqhkiG9w0BAQsFADBuMRAwDgYDVQQGEwdV
bmtub3duMRAwDgYDVQQIEwdVbmtub3duMRAwDgYDVQQHEwdVbmtub3duMRAwDgYD
... quite a few more lines here ...
/FHwsiau3ntmBn358GhaD4exNPkf346eDcYnHii/nvfEJivC5vEDnQsFBcxSWEU6
P/GspsPqjjuEwxh6HDGOYBNg7a7jwk66uwPJ/QoRNg==
-----END CERTIFICATE-----

Second, import the file into the JRE's keystore. If you are using a JDK, the JRE lives in a directory called jre inside your JDK. Make sure that you have the proper permissions; if the JDK is installed system-wide, you might require root access for the next step. Also ensure that you are importing the cert into the JDK version that you're actually using, since you might have several versions installed on your computer.
keytool -import -alias test_self_signed -file test_self_signed.cer -keystore lib/security/cacerts -storepass changeit
Finally, you can list the certs in the keystore to ensure your cert was imported:
keytool -list -keystore lib/security/cacerts -storepass changeit
Or you can search for only your particular cert by passing the alias name:
keytool -list -alias test_self_signed -keystore lib/security/cacerts -storepass changeit
You will see an output similar to this:
test_self_signed, May 5, 2015, trustedCertEntry,
Certificate fingerprint (SHA1): 9C:A6:FC:74:05:02:B1:11:F9:02:BB:3C:14:DA:7A:5B:84:F2:F0:A8
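The same check can be done programmatically. Here is a small sketch that loads the JVM's default trust store and looks for the alias; it assumes the stock keystore layout, and passes a null password to skip the integrity check (use "changeit".toCharArray() to verify it instead):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyStore;

public class ListCacerts {

    // Load the running JVM's default trust store. On JDK 8 'java.home' points at
    // the jre directory; on JDK 9+ there is no jre directory, but the relative
    // path lib/security/cacerts is the same in both cases.
    static KeyStore loadSystemTrustStore() throws Exception {
        Path cacerts = Paths.get(System.getProperty("java.home"), "lib", "security", "cacerts");
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (InputStream in = Files.newInputStream(cacerts)) {
            ks.load(in, null); // null password: read entries without verifying integrity
        }
        return ks;
    }

    public static void main(String[] args) throws Exception {
        KeyStore ks = loadSystemTrustStore();
        System.out.println(ks.size() + " trusted certificates");
        System.out.println("test_self_signed present: " + ks.containsAlias("test_self_signed"));
    }
}
```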
The commands were put together from examples I found here and here.

Saturday, March 14, 2015

My Study notes for Oracle WebLogic Server 12c: Advanced Administrator II (1z0-134)

I recently took the "Oracle WebLogic Server 12c: Advanced Administrator II" exam. I actually took the beta version (1z1-134), which is the exam that will eventually become 1z0-134. I wanted to share some of my experience; you will also find my notes down below. Since I took the beta exam, some of the questions I encountered will not be on the final production test, so take my advice with that caveat.
I assume that if you're reading this, you have already achieved the WebLogic OCA level certification, and therefore have the basic knowledge.

  • Practice (a lot of) scripting. For this test, practicing on a real WebLogic instance is a must. Practice doing routine (and not so routine) tasks using WLST: things like monitoring running services, online and offline editing, deploying applications, and creating some of the more esoteric WebLogic components. There are plenty of questions regarding scripting.
  • Learn your JMS. Besides knowing how to set up the different JMS components in WebLogic, review some of the JMS internals, like persistence, acknowledgments, durable subscribers, etc. Review how these different operation methods are reflected in JMS headers. There's a significant focus on JMS. I found this post useful regarding JMS tuning.
  • The NodeManager and service migration. There are plenty of questions on different scenarios and the interaction between the NodeManager and migration. I feel that this section requires plenty of work, since most people will only be exposed to a few of these scenarios in their day-to-day work.
    • Service vs. whole server migration
    • Script vs. Java NodeManager
    • Consensus vs. Database leasing
  • WLDF. For me, WLDF (the WebLogic Diagnostic Framework) was a fairly weak area. Make sure you play around with all of the pieces of the framework.
  • Coherence. Coherence was another weak area for me, having never had practical experience with it. However, the questions related to Coherence were fairly basic. If you need to get started, this post helped me a lot.
  • Linux. I was surprised to find some basic Linux questions. I would say they were fairly straightforward, and anyone who has done some day-to-day troubleshooting will have no problem with them. If you have only worked under Windows, some research will come in handy.
Below you will find my notes. Most of my notes come from the WebLogic documentation found here.

You can access my notes directly by using this link.

Now the wait is on for the beta period to finish, and to see how well I did. Regardless, this proved to be a very insightful experience, in which I was able to fill quite a few weak areas in my Weblogic knowledge.

Sunday, March 1, 2015

Stepping through the internal code of your JavaEE application server in IntelliJ IDEA

While trying to get container-based authentication working with a JDBC realm, I needed to step through the internals of GlassFish to get a better idea of why my settings were not working. It seems that by default IntelliJ IDEA does not provide access to the class path of the application server.


To be able to step through the code, I had to follow a few steps:


1 - Increase the logging in GlassFish. This is optional, but it gave me an idea of where to start looking for the issue.
2 - Download the GlassFish source code. This is not an issue for an open source app server. If you don't have access to the source of your particular proprietary app server, IntelliJ IDEA will still allow you to step through it, and will try to decompile the code, with varying levels of success.
3 - Add the jar that contains the actual compiled byte code. I found two ways of doing this for a Maven-based project: one using the provided scope, and another using the system scope. This step is critical, and at least for me, counterintuitive.

Add the jar to your pom.xml using the provided scope:
<dependency>
    <groupId>org.glassfish.main.extras</groupId>
    <artifactId>glassfish-embedded-all</artifactId>
    <version>4.1</version>
    <scope>provided</scope>
</dependency>

This is the easiest solution, but you might not be able to find the required jar in any Maven repo. If you can't find the right jar, but can find any jar that contains the class, you can use it. For this to work you will have to attach the right source code, as downloaded in step #2. Otherwise, IntelliJ will decompile the class and give you a nonsensical Java file that does not match what you are running.


Add the jar to your pom.xml using the system scope:
<dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>glassfish</artifactId>
    <version>4.1</version>
    <systemPath>h:/servers/glassfish/modules/security-ee.jar</systemPath>
    <scope>system</scope>
</dependency>

If you don't have source code, I recommend this approach.  This will give you a lot of information, even if you lack access to the source code.

4 - Add the source to your new library. The easiest way is to open the class that you want to examine or add the breakpoint to. Once open, you will see the source as decompiled by IntelliJ IDEA. If possible, add the sources you downloaded in step #2 to make your life a bit easier: select "Choose Sources..." and navigate to your source directory.

You now have access to the class you need. Drop a break point, add a watch, step through it as you need!

5 - Make sure you remove the dependency after you're done. You don't want to have extra dependencies, and in the worst case the build will fail on other computers if the system-scoped library is not available in the same location.

Saturday, September 13, 2014

Passing the Java EE 6 Enterprise Architect Certified Master Part 2 and 3 (1Z0-865 and 1Z0-866)

The "Plan"

I finally got my result for the assignment and essay tests for the Oracle Certified Master, Java EE 6 Enterprise Architect  (1Z0-865 and 1Z0-866).  I wanted to take this opportunity to share some of my experience on this endeavor.  If you want to read about part one of the certification, my notes are available here.

I downloaded my assignment on June 30th, and spent the next few days reading and re-reading the assignment.  I set myself a very aggressive (and overly optimistic) timeline to finish the assignment in 4 weeks.

Week 1

In week one I wanted to brush up on some UML, since this was a weak area for me. I used two books that were available in my local public library:
  • "Fast Track UML 2.0" by Kendall Scott
  • "Learning UML 2.0" by Russ Miles
I also wanted to prototype some aspects of the application. Working with Java EE 6 is fast enough that a simple application can be created painlessly. It was also a good opportunity to think about how I would create a green field application under "perfect" circumstances, something that is rarely possible in the real world. This prototype would also allow me to validate some of the architectural decisions.

Week 2

Continuing with the prototype, the idea was to test some of the more complicated aspects that would be required to complete the assignment.  Some of these technologies I was comfortable with, while others proved a learning experience:
  • Authentication (using JASPIC)
  • RMI
  • JSF

Week 3

The plan was to create all of the diagrams during the third week.  For the most part the plan was to follow the diagrams as shown in "Sun Certified Enterprise Architect for Java EE Study Guide" by Mark Cade and Humphrey Sheil.

For the UML diagrams I tried three different tools:

JDeveloper

Pros: Nice looking diagrams.  Very good Java round tripping. I was already somewhat familiar with this tool, since I had used it at work to create some basic diagrams.
Cons: Not suitable to create component and deployment diagrams.  A memory hog prone to crashing every so often.

StarUML

Pros: Simple to use.
Cons: UML 2.0 support was lacking. Some people mentioned that diagrams could be made UML 1.x compliant, but I wanted to focus on 2.0 diagrams. The diagrams were not very aesthetic.

Visual Paradigm Community Edition

Pros: Nice looking diagrams and plenty of formatting options.
Cons: The Community Edition only supports one diagram per type. This only proved to be a problem with the sequence diagrams, for which I had to create one file per diagram.

I ended up using Visual Paradigm, which proved to be the most mature free UML tool I could find.

Week 4

In week four I was hoping to review everything, create the HTML, and write up the top three risks and all of the assumptions.

How everything went down

All in all, my four-week plan took seven weeks. While I was able to meet most of my objectives for the first three weeks (including a mostly complete prototype), I underestimated the time required to do the diagrams. I had to familiarize myself with a new tool (Visual Paradigm), and learn parts of UML that I had never used. As other people around the net mention, the sequence diagrams take a lot of time to complete. The last three weeks were spent refining my design, simplifying as much as possible, and ensuring that the design was elegant, not just workable. This required redoing some of the diagrams. I would stop working for a day or two, and then review my design from top to bottom, making changes as needed. The more iterations, the less I was changing, until in week seven I felt comfortable with the finished product. I submitted my assignment and signed up for the essay the very next Saturday.
As a rough estimate, I would say I spent  around 150 hours total preparing the assignment.  I could have done it in less time if had not done a prototype, and if I didn't have to try several new UML tools.
For my solution the sample shown in the Cade and Sheil book was main resource, and I can recommend it as a good concise resource.  Even though the book is targeted at SCEA 5 (OCMJEA 5), it's worth nothing that the assignment is the same  for OCMJEA 5 & 6.
I also bought the training tool from EPractizeLabs SCEA 5 Part 2 and 3 Certification Training Lab. In my particular case this tool wasn't very valuable, primarily because I bought it once I had done most of the work, so it just helped me verify that there wasn't anything else for me to do. This training lab provides more sample solutions than the single one presented in the Cade and Sheil book, but it becomes evident that in any solution you're always covering the same aspects.
I finished my certification before the "OCM Java EE 6 Enterprise Architect Exam Guide" by Allen and Bambara was available, so I cannot comment on how useful their material is.

Looking back, my tips and recommendations


  • Read and read and read the assignment until you have three things clear:
    • What your application needs to do: the functional requirements.
    • How your application should operate.  These are the Non-Functional Requirements (NFRs), and include aspects such as performance, security, availability, etc.
    • What systems you have to integrate with, and how you can integrate with them.  This includes what protocol to use, async vs. sync integration, etc.
  • Ensure that you implement the business object model as described in the assignment.  You can augment it, but do not change it.  If it doesn't make sense, read it again until it does.  If it still doesn't make sense, add assumptions so that it does make sense.  
  • As you're designing the solution, keep a list of risks and assumptions.  This will make your life much easier later on.  I did not do this, so I spent a considerable amount of time going back and forth, trying to remember some of the justifications I had thought about.
  • Keep in mind all of your NFRs, and make sure you address them in your design and your assumptions.  Even if you think something is obvious, write it down.  You're the architect, so everything is either something you have to design, or something you assume is already in place.  Either way, document it.  I'm talking about things like network connections, databases, external systems, clients, etc.  The NFRs normally taken into account are performance, scalability, reliability, availability, maintainability, extensibility, manageability, and security.  If you need to refresh some of these concepts you can take a look at my notes for the part 1 exam.
  • If something doesn't feel right, don't be afraid to change it.  I redid my diagrams several times.  Some changes were very significant, for example changing from one technology framework to another.  Other changes were minor, such as polishing naming conventions to make it easier for the instructor to understand what I was doing.  The biggest change I made was to convert most of my business logic from Stateless Session Beans to CDI Managed Beans, since EJB provided no useful benefit in solving the problem at hand.  I did keep some Message Driven Beans (MDB) and Singleton Session Beans that had specific requirements.
  • When I do an assignment I normally try to do what's required, nothing more, nothing less.  In this case in particular I did do a few extra diagrams to explain some of the trickier elements that were not evident in the required diagrams.  This included a diagram of the JMS queues, a package diagram to make it easier to understand my class diagram, and an activity diagram detailing how the chosen platform would support one of the NFRs.  Of course this is dependent on your solution, but I feel some extra clarification can be helpful.
  • Don't be afraid to be specific.  I chose a specific Application Server (WebLogic in my case), and I justified it since it had particular features to help me meet the particular NFRs of my assignment.  The same applies to the other aspects of the deployment: server specs, OS specs, database specs, security and firewalls, physical server security, and application management.  Remember, you're the architect, so every detail in ensuring a successful solution is part of your job.

The essay and the wait...

The essay part has a duration of 120 minutes, and you should be prepared to use all of that time.  I'm sure I could have kept going for at least a couple more hours.  If you did the assignment correctly, and addressed all of the functional and non-functional requirements, you should have more than enough to fill two hours of writing.
Be ready to justify your choice of technologies and architectures, especially compared to other technologies that could have been used in those cases.  This will show that you have a well rounded knowledge of the Java ecosystem, and can make an informed decision among all of the different frameworks out there.  This includes technologies such as persistence mechanisms (JPA, Hibernate, JDBC, NoSQL) and web frameworks (JSF, Spring MVC, Struts, etc.).  Do you need a full-blown EAR vs. a simple WAR?
In this part I took special care to justify what I thought was a very controversial part of my design.  I will not go into details; suffice it to say that I did not have one of the pieces that is almost always present in a modern web application.  I had already justified this in my assignment, but I went the extra mile in the essay and described how I would have done it in the "traditional way" if my assumptions were not applicable.  I also explained all of the potential downsides of the "traditional way".
The grading took almost three (nerve-wracking) weeks, but I've read of greatly varying times.  I passed with 147 out of 160 possible points, which makes me feel confident about my process.  Unfortunately the score report does not shed any light on what I could have done better.  Only time will tell how valuable this certification will be for me, but I can at least say that the learning process helped me to structure many of the activities I have already been doing, and solidified some of my weaker areas.
If you have any questions, feel free to ask. 
And if you're on your way to become an OCM, best of luck!


Monday, June 30, 2014

Passing the Java EE 6 Enterprise Architect Certified Master Part 1 (Exam 1Z0-807)

Last Saturday I passed the 1Z0-807 test, and I wanted to share some of my experience, especially since there seems to be little information about this test (compared to other Java certification tests, at least). You can also take a look at my study notes.  You can see my notes for parts 2 & 3 here.

My total time to prepare was around 6 weeks. It was a slow time at the office, so I think I was doing around 15-20 hours of reading / studying per week.

Going into the studying phase I felt I was very weak in the EJB and JSF parts of the material, since it's not something that I actively use at work, while on the other technologies I felt I had a moderate to strong grasp on the material.

I think 6 weeks was more than enough time to prepare if you have a background working with database driven Java web applications (even if not using the full Java EE stack as in my case).  Your pace may vary, and honestly I signed up for the test before I had started studying, so the time constraint was my motivation.

Books that I read to prepare for this test:

  • "The Java EE 6 Tutorial" by Eric Jendrock et al. (you can get it here)
  • "Real World Java EE Pattern" by Adam Bien (the second edition) 
  • "Sun Certified Enterprise Architect for Java EE Study Guide" by Mark Cade and Humphrey Sheil 
  • "Bitter Java" and "Bitter EJB" by Bruce Tate 
  • "SOA in Practice" by Nicolai Josuttis
 Books that I had read in the past that helped:

  • "Design Patterns: Elements of Reusable Object-Oriented Software" by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides 
  • "Enterprise Integration Patterns" by Gregor Hohpe 

The biggest issue I had was the lack of a proper study guide for this test.  The study guides I found were for the previous version of the test, based on Java EE 5; however, most of the material is relevant.  I found the Java EE 6 Tutorial very useful to make sure I identified the gaps that I needed to fill.  A specific guide for this test is supposed to be available very soon, so you might have better luck than I did.  If and when it becomes available, by all means read it to guide yourself.

In general I recommend getting through the material as described by the outline once, and then practicing and practicing some more.  Get as many practice questions as you can.  Some of those questions have subjective answers, but the more you practice the better chance you will have.  Currently the only test simulator is from EPracticeLabs, and while not awesome, I think it is more than enough to make it through.

Relevant things that I found are not directly mentioned in the outline but are significant in the exam:

  • Design patterns and anti-patterns (covered by the books listed above)
  • Security (especially risks and how to mitigate them)
  • When to use a technology. For example, when to use EJB or just a web container.  When to use DAOs, JDBC, or JPA.  When to use durable subscribers in JMS.  For this I would recommend you practice until you can detect the hints that they give in the questions regarding portability, scalability, etc.
Regarding the exam itself I can point out:

  • The exam was what I expected based on the practice questions I had seen. No surprise here.
  • There were many scenario questions with incomplete information.  So it gets down to eliminating obviously wrong answers and then some luck picking from what's left.
  • Only multiple choice questions (pick one, or pick X).  No drag and drop or anything like that.
  • Time is plentiful.  They give you 150 minutes (two and a half hours).  I'm fast at taking tests, but it took me thirty minutes to do my first pass, and probably ten minutes to review.
  • Some questions can get ambiguous if you overthink them.  For example, one tier is a mainframe application, two tiers is a thick client, three tiers is a web-based application.  Don't overthink a contrived three-tier application with thick clients talking to and sharing business logic with EJBs unless specifically told so in the questions.
  • Finally, they didn't give me the result right away.  They said I would get an email with the result in 30 minutes, but it arrived a matter of five minutes or so after finishing the test.  I hadn't taken an Oracle test in more than 3 years, but I don't remember this being the case.  


You can take a look at my study notes below.  They do borrow material from some Java EE 5 notes that are floating around the internet:
  • http://java.boot.by/scea5-guide/
  • http://rieck.dyndns.org/architecture/scea5.html





Here's the direct link to the document.

If you have any questions, feel free to add a comment and I'll try to help you.

You can now see my notes for parts 2 & 3 here.

Monday, February 17, 2014

Measuring coverage for Unit and Integration tests with Maven, Jenkins, and SonarQube

This blog post provides a very simple example of integrating Maven, Jenkins, and SonarQube, in particular to properly get coverage metrics for integration tests. While the basic integration of Maven, Jenkins, and SonarQube was really simple (including code coverage metrics during unit tests), getting the integration test data to display in Sonar was more complicated, and required me to piece together this configuration from several posts around the internet.

These tools are very easy to install, and provide the basic framework for serious quality development. Hopefully you will find this information useful.

This tutorial assumes that you have installed and setup the following tools:

  • Maven: a Java build management system.
  • Jenkins: a continuous integration (CI) system.
  • SonarQube: a software quality management system.
If you want some pointers on getting these tools set up, you can find plenty of resources on the web:



Project setup 

We will start by looking at the pom.xml, which is the most critical part of this tutorial. If you're familiar with Jenkins and Sonar, the rest of this tutorial might be redundant for you.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>test.tutorials.multi</groupId>
    <artifactId>mod2</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>Coverage Maven Application</name>


    <properties>
        <sonar.jacoco.itReportPath>${env.WORKSPACE}/target/jacoco-integration.exec</sonar.jacoco.itReportPath> <!-- 1 -->
    </properties>

    <build>
        <plugins>


            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <configuration>
                    <propertyName>jacoco.agent.argLine</propertyName> <!-- 2 -->
                    <destFile>${project.build.directory}/jacoco-integration.exec</destFile> 
                    <dataFile>${project.build.directory}/jacoco-integration.exec</dataFile> <!-- 3 -->
                </configuration>
                <executions>
                    <execution>
                        <id>agent</id>
                        <goals>
                            <goal>prepare-agent</goal> <!-- 4 -->
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>2.6</version>
                <configuration>
                    <argLine>${jacoco.agent.argLine}</argLine> <!-- 5 -->
                    <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory> <!-- 6 -->                 
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>integration-test</goal> 
                            <goal>verify</goal> <!-- 7 -->
                        </goals>
                    </execution>
                </executions>
            </plugin>           

        </plugins>
    </build>


    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>

    </dependencies>

</project>
  1. This property lets Sonar know where to find the JaCoCo report for the integration tests. The ${env.WORKSPACE} prefix enables the Sonar plugin to find these files.
  2. The JVM argument required to inject JaCoCo into the integration tests will be stored in a property named jacoco.agent.argLine.
  3. The destFile and dataFile properties indicate where JaCoCo will output its files.
  4. The prepare-agent goal will set up the JVM argument that we will reuse later.
  5. This property injects the JVM arguments required to enable the JaCoCo agent in the Failsafe JVM. Since JaCoCo does its instrumentation at runtime, injecting the proper parameters is all that's required to get coverage information.
  6. Changing the reportsDirectory property is optional, and will cause the Failsafe plugin to output its results to the same directory used by the Surefire plugin. The Surefire plugin executes unit tests in Maven, and its results are read by SonarQube and displayed as "Unit test success". Sending Failsafe's results to the Surefire report directory will trick Sonar into displaying that information as well. If you have a CI server such as Jenkins displaying this information, this "trick" might be superfluous.
  7. The Failsafe plugin will run in the integration-test and verify goals.
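As a side note, the configuration above only captures integration-test coverage in its own file. If you also wanted unit-test coverage in a separate file, one common approach (sketched below; the execution id, property name, and file names are illustrative, and not part of the project above) is to add a second prepare-agent execution with its own destFile and propertyName, and pass that argLine to the Surefire plugin:

```xml
<!-- Sketch only: a second JaCoCo execution dedicated to unit tests. -->
<execution>
    <id>agent-unit</id>
    <goals>
        <goal>prepare-agent</goal>
    </goals>
    <configuration>
        <destFile>${project.build.directory}/jacoco-unit.exec</destFile>
        <propertyName>jacoco.agent.ut.argLine</propertyName>
    </configuration>
</execution>
```

The Surefire plugin would then be configured with <argLine>${jacoco.agent.ut.argLine}</argLine>, and the pom would declare a sonar.jacoco.reportPath property pointing at ${env.WORKSPACE}/target/jacoco-unit.exec so that Sonar picks up the unit-test report separately.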

Other than the pom.xml, the test project includes two very basic tests: a unit test and an integration test. Each one covers 50% of the classes in the project.  You can get the complete source code here. You will find a minimal example from which you can work up.
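As a rough sketch of what such a pair of tests could look like (class and method names below are made up for illustration, and declared package-private only so everything fits in one listing; in a real project each test would be a public class in its own file), the key detail is the naming convention: by default Surefire picks up classes ending in Test, while Failsafe picks up classes ending in IT:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// Unit test: the *Test suffix matches the Surefire plugin's default includes,
// so it runs during the "test" phase.
class CalculatorTest {
    @Test
    public void addShouldSumBothArguments() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}

// Integration test: the *IT suffix matches the Failsafe plugin's default
// includes, so it runs during the "integration-test" phase, with the JaCoCo
// agent attached through the argLine configured in the pom.
class CalculatorIT {
    @Test
    public void addShouldHandleNegativeArguments() {
        assertEquals(0, new Calculator().add(-1, 1));
    }
}
```

Because the JaCoCo agent instruments classes at runtime, nothing in the tests themselves needs to change to get coverage data; the naming convention alone decides which plugin (and therefore which phase and which coverage file) a test belongs to.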

Setting up a job with Sonar support in Jenkins

There are several ways of running SonarQube against a project:
  • Using the SonarQube plugin for Jenkins
  • Using the stand alone SonarQube Runner
  • Adding it to your Maven (or Ant) build
In this tutorial we will use the first option.  Personally I prefer to keep the pom as clean as possible, and have Jenkins perform extra steps such as analyzing code quality, deployment into package managers, etc.
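For reference, if you chose the second option instead, the stand-alone SonarQube Runner reads a sonar-project.properties file at the project root. A minimal sketch (the property keys are the runner's standard ones; the values below are illustrative, taken from the sample pom):

```
sonar.projectKey=test.tutorials.multi:mod2
sonar.projectName=Coverage Maven Application
sonar.projectVersion=1.0-SNAPSHOT
sonar.sources=src/main/java
```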

Make sure that you install the Jenkins SonarQube plugin:


The SonarQube plugin must be configured with one or more Sonar installations.  Most shops will only have one Sonar installation.  This can be done in "Manage Jenkins" -> "Configure System":


In this case, we're using the defaults, since our SonarQube installation is running on http://localhost:9000, and we're using the embedded database, so there's no need to configure any of the database information.

It is also useful to check "Skip if triggered by SCM Changes" and "Skip if triggered by the build of a dependency", to prevent Sonar from running after every commit.  Running Sonar once a night is normally enough for most projects.  This assumes that you have one nightly build scheduled for every project that you want to analyze with SonarQube.

We will then set up a simple Maven job in Jenkins:



We will check this project out of Source Control Management.  I'm going to use SVN to access a directory inside my git repository.  For real projects, I would recommend having one git repository per project:


Finally, we just add a "Post-build Action", to invoke SonarQube.  This will use the configuration for the SonarQube instance we configured before.



Now you can start a build manually.  This will cause the project to first be built as usual using Maven, and then analyzed by Sonar:


The results

After the build is run successfully by Jenkins, you can see the results on your SonarQube installation.  Make sure that you have the "Integration Tests Coverage" widget in your dashboard.  To add it, log into SonarQube, and click on "Configure widgets".


You can see how test coverage for integration tests is reported separately, and that there's a metric for "Overall Coverage", which combines the coverage for unit tests and integration tests.  If you drill down you can see much more detail about your project, so I encourage you to click around and see what Sonar has to say about your code.

Source Code

As always, the code is available for you to download and play with. Clone it from GitHub:
git clone https://github.com/aolarte/tutorials.git
The finished code is contained in the coverage directory.

References