Tuesday, February 26, 2013

GlassFish Clustering Setup


1.  Detailed steps to create the domain, cluster and clustered instances in GlassFish


It is assumed that one cluster is created across two servers: un128desb1 and un128desb2. Within the cluster there are two instances, which reside on un128desb1 and un128desb2 respectively.

asadmin commands must be executed as the user jcaps.


1.1 Steps to create a domain with the cluster profile and start the domain on the server un128desb1

1) Execute the asadmin command: create-domain on un128desb1

/opt/JCAPS62/appserver/bin/asadmin create-domain \
--portbase $PORTBASE \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--profile cluster \
$DOMAIN_NAME

where
PORTBASE is the base port number for this domain.
ADMIN_USER_NAME is the admin user name for this domain.
PASSWORD_FILE is the name of the password file which contains the admin password for this domain.
DOMAIN_NAME is the name of the domain to be created.
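
For reference, the file passed via --passwordfile is a plain text file of KEY=value entries that asadmin reads the passwords from. A minimal sketch (the file path and passwords are placeholders; depending on the asadmin version, create-domain may expect AS_ADMIN_ADMINPASSWORD for the new domain's admin password in addition to AS_ADMIN_PASSWORD):

# /opt/JCAPS62/passwords.txt (example path, passed as --passwordfile)
AS_ADMIN_PASSWORD=adminadmin
AS_ADMIN_ADMINPASSWORD=adminadmin
AS_ADMIN_MASTERPASSWORD=changeit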


2) Execute the asadmin command: start-domain on un128desb1

/opt/JCAPS62/appserver/bin/asadmin start-domain \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$DOMAIN_NAME

where
ADMIN_USER_NAME is the admin user name for this domain.
PASSWORD_FILE is the name of the password file which contains the admin password for this domain.
DOMAIN_NAME is the name of the domain to be started.

1.2 Steps to create a node agent on the machine that hosts the DAS

1) Execute the asadmin command: create-node-agent on un128desb1

/opt/JCAPS62/appserver/bin/asadmin create-node-agent \
--host $HOST_NAME \
--port $ADMIN_PORT \
$NODE_NAME

where
HOST_NAME is the machine name where the new node agent will reside. Here it is un128desb1.
ADMIN_PORT is the admin port number. It is PORTBASE+48 (for example, a PORTBASE of 20000 gives an admin port of 20048).
NODE_NAME is the name of the node agent to be created.

2) After the command is executed successfully, a new folder is created at /opt/JCAPS62/appserver/nodeagents/node1 on machine un128desb1, where node1 is the name of the node agent created.
The newly created node agent is also added into /opt/JCAPS62/appserver/domains/dmlusterUtilities/config/domain.xml on un128desb1, where the DAS resides:


<node-agents>
<node-agent name="node1" ......>
</node-agent>
</node-agents>


After the node agent is created it is not displayed on the Admin Console page until it is started.

3) Execute the asadmin command: start-node-agent
/opt/JCAPS62/appserver/bin/asadmin start-node-agent \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$NODE_NAME

where NODE_NAME is the name of the node agent that is to be started.


1.3 Steps to create a domain with the cluster profile and start the domain on the server un128desb2

Creating a domain with the cluster profile on un128desb2 is the same as on un128desb1; the commands are simply executed on un128desb2.


1.4 Steps to create a node agent on un128desb2 that talks to the DAS on the remote machine

1) Execute the asadmin command: create-node-agent on un128desb2 (un128desb2 is the other machine, where the created node agent will reside)


/opt/JCAPS62/appserver/bin/asadmin create-node-agent \
--host $HOST_NAME \
--port $ADMIN_PORT \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$NODE_NAME

where
HOST_NAME is the machine that hosts the DAS. Here it is un128desb1.
ADMIN_PORT is the admin port of the DAS host.
ADMIN_USER_NAME and PASSWORD_FILE are the admin user name and the password file for the DAS domain with which the node agent registers.
NODE_NAME is the name of the node agent to be created.

2) After the command is executed successfully, a new folder is created at /opt/JCAPS62/appserver/nodeagents/node2 on machine un128desb2, where node2 is the name of the node agent created.
The newly created node agent is also added into /opt/JCAPS62/appserver/domains/dmlusterUtilities/config/domain.xml on un128desb1, where the DAS resides.

Note: it is the domain.xml on un128desb1, not un128desb2.


<node-agents>
<node-agent name="node2" ....>
</node-agent>
</node-agents>
After the node agent is created it is not displayed on the Admin Console page until it is started.

3) Execute the asadmin command: start-node-agent on un128desb2
/opt/JCAPS62/appserver/bin/asadmin start-node-agent \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$NODE_NAME

where ADMIN_USER_NAME and PASSWORD_FILE are the admin user name and the name of the password file for the DAS domain (the DAS resides on un128desb1).

After the command is executed successfully, this node agent will be displayed in the Admin Console on the host where the DAS resides. Here it is un128desb1.



Figure2: Node agents from the AdminConsole

Note: There is no node agent displayed in the Admin Console on the host un128desb2, since the created node agent (node5) is associated with the DAS on un128desb1.


1.5 Steps to create cluster on un128desb1

1) Execute the asadmin command: create-cluster on the server un128desb1

/opt/JCAPS62/appserver/bin/asadmin create-cluster \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
$CLUSTER_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the host where the DAS resides. Here it is un128desb1.
ADMIN_USER_NAME and PASSWORD_FILE are the admin user name and password file name for that host.
CLUSTER_NAME is the name of the cluster to be created.

2) After the command is executed successfully, the new cluster is displayed in the Admin Console on un128desb1.

Note: Since no configuration is specified when creating the cluster, a new configuration called CLUSTER_NAME-config is created and used by the new cluster.

Figure3: Clusters from AdminConsole


1.6 Steps to create cluster instances on un128desb1

1) Execute the asadmin command: create-instance on un128desb1 to create instance1, which resides on the machine where node agent node1 resides
/opt/JCAPS62/appserver/bin/asadmin create-instance \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
--nodeagent $NODE1_NAME \
--cluster $CLUSTER_NAME \
$INSTANCE1_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the host.
NODE1_NAME is the node agent that manages the created instance.
CLUSTER_NAME is the name of the cluster to which the created instance belongs.
INSTANCE1_NAME is the name of the created instance.

2) After the successful execution of this command, a new folder is created at /opt/JCAPS62/appserver/nodeagents/node1/instance1 on machine un128desb1.
The Admin Console on un128desb1 shows the newly created cluster instance.



Figure4: Newly created cluster instance from AdminConsole

1.7 Steps to create cluster instances on un128desb2


1) Execute the asadmin command: create-instance on un128desb1 to create instance2, which resides on the remote machine un128desb2

Note: This command is executed on un128desb1 rather than on un128desb2, but the created instance2 resides on un128desb2.


/opt/JCAPS62/appserver/bin/asadmin create-instance \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
--nodeagent $NODE2_NAME \
--cluster $CLUSTER_NAME \
$INSTANCE2_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the host.
NODE2_NAME is the node agent that manages the created instance. Here NODE2_NAME resides on un128desb2.
CLUSTER_NAME is the cluster to which the created instance belongs.
INSTANCE2_NAME is the name of the created instance.

2) After the successful execution of this command, a new folder is created at /opt/JCAPS62/appserver/nodeagents/node5/instance2 on machine un128desb2.
The Admin Console on un128desb1 shows the newly created cluster instance.


Figure5: Newly Created Cluster Instance from AdminConsole

1.8 Steps to start the cluster on un128desb1

1) Execute the asadmin command: start-cluster on un128desb1 to start the cluster


/opt/JCAPS62/appserver/bin/asadmin start-cluster \
--host $HOST_NAME \
--port $ADMIN_PORT \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$CLUSTER_NAME

2) This command will take some time to finish. After its successful execution the cluster and the instances within the cluster will be started.
The Admin Console on un128desb1 shows the started cluster and its instances.

Figure6: Started cluster from AdminConsole
Figure7: Cluster instances within the cluster


1.9 Steps to delete the cluster instance

Instances can be deleted from AdminConsole or from asadmin command.

1) Execute the asadmin command: delete-instance on un128desb1 to delete a cluster instance
/opt/JCAPS62/appserver/bin/asadmin delete-instance \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
$INSTANCE_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the domain where the cluster resides.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the domain. INSTANCE_NAME is the name of the instance to be deleted.

2) Alternatively, from the Admin Console, select the cluster instances to be deleted and then click on the Delete button to delete the selected instances.


Figure8 Selecting the cluster instance to be deleted from AdminConsole

1.10 Steps to delete the node agents

Node agents can be deleted from the AdminConsole or with an asadmin command.
1) Execute the asadmin command: delete-node-agent on un128desb1 to delete the node agent


/opt/JCAPS62/appserver/bin/asadmin delete-node-agent \
$NODEAGENT_NAME

where
NODEAGENT_NAME is the name of the node agent to be deleted.

2) Alternatively, from the Admin Console, select the node agent to be deleted and then click on the Delete button.


Figure9 Selecting the node agent to be deleted from AdminConsole

2 Create resources in a cluster

Creating resources in a cluster is the same as for a standalone server. The only difference is to add the cluster from Available Targets as one of the Selected Targets.



Figure10: Selecting targets for deployed resources from AdminConsole

3 Deploy the applications to a cluster

Deploying applications to a cluster is the same as deploying to a standalone server. The only difference is to add the cluster as one of the Selected Targets.

Figure11: Selecting targets for deployed applications from AdminConsole


Note: When creating Oracle JDBC resources, make sure the Oracle JDBC driver jar ojdbc5.jar is present in both GlassFish installations on the server machines.

In this example ojdbc5.jar is located in the folder /opt/JCAPS62/appserver/lib on both un128desb1 and un128desb2.


4 Deploy ESB to a cluster

Deploying ESB artifacts (service assemblies, binding components, service engines or shared libraries) to a cluster is the same as deploying to a standalone server. The only
difference is to add the cluster from Available Targets as one of the Selected Targets.

5 Operation steps in a cluster

5.1 Start and stop the node agents

In order to start the cluster and the cluster instances, the node agents associated with the cluster must be started first.
Whether the instances are started when the node agent is started depends on the node agent's settings.
There is a checkbox in the AdminConsole for each node agent: Start Instances On Startup. If it is checked, all instances associated with the node agent will be started when the node agent is started.


Figure12: Node agent status from AdminConsole

However, when the node agent is stopped, the instances associated with it will be stopped as well.
At the moment there seems to be no way to start node agents from the Admin Console. Node agents can be started by using the asadmin command:

/opt/JCAPS62/appserver/bin/asadmin start-node-agent \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$NODE_NAME

where
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the host. NODE_NAME is the node agent to be started.


Figure13: Node agents from AdminConsole
Likewise node agents can be stopped by using the asadmin command:
/opt/JCAPS62/appserver/bin/asadmin stop-node-agent \
$NODE_NAME

where
NODE_NAME is the node agent to be stopped.



Figure14: Node agent status from AdminConsole
Note: The start-node-agent and stop-node-agent commands are executed on the machine where the node agent resides. For example, node2 is associated with the host un128desb1 and node3 with the host un128desb2; to start and stop node2 the asadmin commands are executed on un128desb1, and to start and stop node3 they are executed on un128desb2.


5.2 Start and stop the cluster

After the node agents are started the cluster can be started and stopped. Whenever the cluster is started or stopped, all instances within the cluster will be started or stopped as well.
The cluster can be started and stopped either from the AdminConsole or with asadmin commands.
From the Clusters page in the AdminConsole, select the cluster to be started or stopped, then click on the Start Cluster button to start it or the Stop Cluster button to stop it.


Figure15: Start and stop of cluster from AdminConsole

The cluster can be started by using the asadmin command:


/opt/JCAPS62/appserver/bin/asadmin start-cluster \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
--host $HOST_NAME \
--port $ADMIN_PORT \
$CLUSTER_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the domain where the cluster resides.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the domain. CLUSTER_NAME is the name of the cluster to be started. The cluster can be stopped in the same way with the stop-cluster command.

5.3 Start and stop the cluster instances 

Each cluster instance can be started and stopped individually, either from the Admin Console or with an asadmin command.


Figure16: Start and stop of one cluster instance from AdminConsole
The cluster instance can be stopped by using the asadmin command:


/opt/JCAPS62/appserver/bin/asadmin stop-instance \
--host $HOST_NAME \
--port $ADMIN_PORT \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$INSTANCE_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the domain where the instance resides.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the domain. INSTANCE_NAME is the name of the instance to be stopped.

The cluster instance can be started by using the asadmin command:


/opt/JCAPS62/appserver/bin/asadmin start-instance \
--host $HOST_NAME \
--port $ADMIN_PORT \
--user $ADMIN_USER_NAME \
--passwordfile $PASSWORD_FILE \
$INSTANCE_NAME

where
HOST_NAME and ADMIN_PORT are the host name and admin port number of the domain where the instance resides.
ADMIN_USER_NAME and PASSWORD_FILE are the user name and password file for the domain. INSTANCE_NAME is the name of the instance to be started.

Thursday, February 21, 2013

Objects in JavaScript

In JavaScript all objects are equal by nature. This means that

  • There is no class distinction between JavaScript objects.  They are just objects and carry no class birthmark.  In Java, by contrast, every object must belong to a class and will do so for its whole life cycle.
  • JavaScript objects can change themselves dynamically.

Conceptually a JavaScript object is a container of properties, which are name-value pairs.  A property value can be a primitive value or another object (including functions, because functions are also objects).  Physically an object is the structure that holds these properties.

Creation of Object


So how does a JavaScript object come into life? There are several ways to do so. One common way is to use a constructor function. What is a constructor function?  A constructor function is a normal function, but it is invoked with the keyword new.    It is a kind of magic.   When a function is invoked in this way, three things happen within the function: a new object is created, this is bound to the newly created object, and the new object is returned even if you don't do so explicitly in the function. Below are two examples that use constructor functions to create a Car object, one with a named function and one with an anonymous function. The results are the same.
function Car(model, make, year) {
 this.make = make;
 this.model = model;
 this.year = year;
}
var myCar1 = new Car('Honda', 'Accord', 2010);

var Car = function (model, make, year) {
 this.make = make;
 this.model = model;
 this.year = year;
}

var myCar2 = new Car('Honda', 'Accord', 2010);

But when the function is invoked without the keyword new, the magic of object creation does not happen.
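
As a small illustration (not in the original post), the sketch below shows what happens in non-strict code running in a browser when Car is called without new:

function Car(model, make, year) {
 this.make = make;
 this.model = model;
 this.year = year;
}

// Without new, no new object is created and nothing is returned,
// so myCar is undefined; this is bound to the global object, so the
// properties leak onto window (non-strict mode in a browser).
var myCar = Car('Honda', 'Accord', 2010);
console.log(myCar);        // undefined
console.log(window.model); // 'Honda'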

All objects in JavaScript are created with a constructor function.  Given an object, you can check its constructor function using the statement below:

myObj.constructor

It will return the constructor function used to create the object.  In the above examples myCar1.constructor returns the Car() function and myCar2.constructor returns the anonymous function.


Apart from using constructor functions to create objects, JavaScript provides another way, or shortcut, to create objects: the object literal.   Below is an example of using a literal to create an object.

var myCarLiteral = { make:'Honda',
                     model: 'Accord',
                     year: 2013}; 
This kind of creation is essentially the same as using a constructor function.  But if you check the constructor function of this object you will find that it is the built-in function Object().


Operators Used to Check the Type of Objects


There are two operators that can be used to check the type of objects: typeof and instanceof.
The typeof operator returns the type of an object as a string naming the object's type.  For all objects it returns "object" (except for functions, for which it returns "function").
instanceof is used to decide whether an object was created by a given constructor function.  It returns a boolean value.
 function Car(model, make, year) {
  this.make = make;
  this.model = model;
  this.year = year;
 }
 var myCar1 = new Car('Honda', 'Accord', 2010);

 var myCar3 = { make:'Honda',
       model: 'Accord',
       year: 2013};

 console.log(myCar1 instanceof Car);
 console.log(myCar3 instanceof Object);
 console.log(typeof Car);
 console.log(typeof myCar1);
 console.log(typeof myCar3);

The above log output will be: true, true, function, object, object.


Built-in Objects or Native Objects


The above examples use custom-made constructor functions.  Actually JavaScript already provides many built-in constructor functions.   These are: Object(), Number(), String(), Array(), Function(), Date(), RegExp(), Error(). Of these the most interesting one is Object().   It is the root of all JavaScript objects.

Here we use Object() to create the myCar object that we previously created with the custom constructor function.

var myCar = new Object(); 
myCar.make = "Honda";
myCar.model = "Accord";
myCar.year = 2013;
This myCar object is effectively the same as the myCar created with the Car constructor function, even though the constructor functions they use are different.     But obviously the custom constructor function is more convenient for creating multiple instances that share the same properties of Car.

Monday, February 18, 2013

Recover from UKASH virus attack

Last week my laptop unfortunately got infected with one kind of UKASH virus.   It was really nasty.  It locked up my machine.  Whenever the laptop was started this virus would take over.   If the laptop had an internet connection, it would display a warning page purporting to be from the AFP, saying that you have violated some laws in Australia and are fined A$100.   If the laptop had no access to the internet it would show a blank screen.   It even turned on the laptop's web cam.

I did some searching with Google and found a lot of useful information about this virus.  I tried several suggestions from this link: http://www.pcrisk.com/removal-guides/6984-remove-computer-locked-ukash-virus.   It is very good and provides many useful suggestions.  Many thanks to the author.

First I tried to boot my machine into Safe Mode with Networking (pressing the F8 key during the startup process).   It turned out that the UKASH variant I was infected with is smart enough to shut the machine down automatically when Safe Mode with Networking is tried.

The next thing I tried was to boot the machine into Safe Mode with Command Prompt.   That was successful.

After that I did the system restore using the command:

C:\Windows\system32>cd restore
C:\Windows\system32\restore>rstrui.exe

 
 
Click on Next button.


 
 
Then I selected a restore point; since I knew when my laptop got infected, I chose one from before the infection time.   Then click on the Next button.
 
 
 
Make sure the restore point is correct.  Then click on the Finish button.   A confirmation and warning popup appears; click on the Yes button.  Then the system restore starts.   It will take a while.
After the system restore was done I booted my laptop again and this time it started successfully.

 

Tuesday, February 12, 2013

Configure Weblogic JMS Resources


Before we can configure WebLogic JMS resources such as ConnectionFactory, Queue and Topic, it is better to understand some concepts in WebLogic JMS.

JMS Server


A JMS server is the management container for the JMS resources defined in the JMS modules targeted to it.   A JMS server's main responsibilities are to maintain persistent storage for these resources, maintain the state of durable subscribers, and so on.
JMS servers are persisted in the domain's config.xml file, and multiple JMS servers can be configured on the various WebLogic Server instances in a cluster, as long as they are uniquely named.
One domain can have one or more JMS servers.
A JMS server can manage one or more JMS modules.
Each cluster node has its own JMS server targeted to that managed server.
For example, suppose there are two managed servers in a clustered environment: osb_ms1 and osb_ms2.
We create two JMS servers, OSB_JMSServer_01 and OSB_JMSServer_02, which target osb_ms1 and osb_ms2 respectively.

JMS Module


JMS modules group JMS configuration resources (such as queues, topics, and connection factories).  These are stored outside the main domain configuration file.
A JMS module targets servers.   In a clustered environment it targets all servers in the cluster.

Subdeployment


A subdeployment is also known as Advanced Targeting.
A subdeployment is a bridge between a group of JMS resources and the JMS servers.   Without a subdeployment the JMS resources cannot target any JMS server.
When you create a JMS resource you need to choose a subdeployment.
In a clustered environment one subdeployment targets multiple JMS servers, one for each cluster node.

The below diagram shows the relationship between them.


 

Steps to configure JMS resources


The following example demonstrates the configuration steps for JMS resources using the Oracle WebLogic Console. This example is done in a single-node environment.

 

1. Create JMS Server

The first step is to create a JMS server.  Select the domain in the Console, then click on YourDomain-->Services-->Messaging-->JMS Servers. In the right panel click on the New button, then type in the name of the JMS server to be created and select the persistent store.  If no suitable persistent store exists, create a new one.   Then click on the Next button.


Select the WebLogic server node and then click on the Finish button.  A JMS server called Demo-JMSServer is created.  This Demo-JMSServer is targeted to the AdminServer.

2. Create JMS Module

The second step is to create a JMS module. Select the domain in the Console, then click on YourDomain-->Services-->Messaging-->JMS Modules.   In the right panel click on the New button, type in the name for the JMS module, and then click on the Next button.
 
 

 
Then select the target WebLogic server node.


Then click on the Next button.

Then click on the Finish button.  DemoJMSModule is created.


3. Create JMS Resource

The third step is to create a JMS resource, such as a connection factory, queue or topic, in the JMS module created above.
Click on the JMS module in which the resource is to be created.


Then click on New button.


Select the resource type to be created.   Here we create a Queue.   Then click on the Next button.


Type in the Queue name and the JNDI name for this Queue and then click on the Next button.



Select the subdeployment.  If no existing subdeployment can be used, click on the Create a New Subdeployment button.





Type in the name for the new subdeployment and then click on the OK button.




Then select the JMS server and then click on the Finish button.
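
To show how the configured resources are used from code, below is a minimal sketch of a standalone JMS client (not part of the original walkthrough). It assumes the queue was given the JNDI name jms/DemoQueue in step 3, uses WebLogic's default connection factory JNDI name weblogic.jms.ConnectionFactory, assumes the server listens at t3://localhost:7001, and needs the WebLogic client jar (e.g. wlfullclient.jar) on the classpath:

import java.util.Hashtable;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class DemoQueueSender {

    public static void main(String[] args) throws Exception {
        // JNDI settings for a remote WebLogic server (the URL is an assumption)
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        Context ctx = new InitialContext(env);

        // WebLogic's default connection factory; the queue JNDI name is whatever
        // was typed in step 3 (jms/DemoQueue is an assumption here)
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/DemoQueue");

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("Hello from DemoQueue sender"));
        } finally {
            connection.close();
        }
    }
}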


 
 
 


 

Sunday, February 10, 2013

Use Spring MVC Test framework and Mockito to test controllers

Recently I joined a project that uses Spring MVC for the web tier of its architecture.  It gave me a chance to use the Spring MVC Test framework and the Mockito mock framework together for unit testing all the Spring MVC controllers in the application.   I found that both provide very good functionality for testing controllers, and I would like to show what I did with them.

 

Spring MVC Test framework

Previously, when unit testing MVC controllers, we usually used MockHttpServletRequest and MockHttpServletResponse and sent the mock request directly to the controllers. Now, in Spring 3.2, there is a new test framework specifically for testing Spring MVC: the Spring MVC Test framework.  With it you can test your controllers just as if they were running in a web container, but without starting one.
The Spring MVC Test framework covers many aspects of testing in Spring MVC.   Apart from testing the business logic within controllers, we can also test inbound/outbound request/response serialization (such as JSON request to Java and Java to JSON response), request mapping, request validation, content negotiation, exception handling, and so on.
In order to use it, add the dependency below to your project POM file.

    <properties>
        <spring.version>3.2.0.RELEASE</spring.version>
    </properties>
      
    <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring.version}</version>
            <scope>provided</scope>
    </dependency>

The key part of the Spring MVC Test framework is MockMvc.  MockMvc simulates the internals of Spring MVC and is the entry point for Spring MVC testing.
The first step of using Spring MVC testing is to instantiate a MockMvc instance:
@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(locations={"file:src/main/webapp/WEB-INF/mvc-dispatcher-servlet.xml",
                                                        "classpath:/META-INF/applicationContextForTest.xml"})
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class})
public class ProductControllerTest {
   
    @Autowired
    private WebApplicationContext wac;
   
    private MockMvc mockMvc;
   
    @Before
    public void setup() {
        
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();

        // Details are omitted for brevity
    }
}

After the MockMvc instance is created you can use it to do the testing.    You create an HTTP request and specify all the details of the request, such as the HTTP method, content type, request parameters and so on.   Then you send this request through MockMvc and verify the results.
RequestBuilder requestBuilder = MockMvcRequestBuilders.get("/welcome");
       
this.mockMvc.perform(requestBuilder).
                        andExpect(MockMvcResultMatchers.status().isOk()).
                        andExpect(MockMvcResultMatchers.model().attribute("welcome_message", "Welcome to use product: DELL Insprson")).
                        andExpect(MockMvcResultMatchers.model().size(1)).
                        andExpect(MockMvcResultMatchers.view().name("welcome"));


Mockito

Mockito is another mock framework for testing.   I find it quite easy and convenient to use.   If you want to use it in your project, add the following dependency to your project POM file.  Here I use version 1.9.5.
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-all</artifactId>
            <version>1.9.5</version>
            <scope>test</scope>
        </dependency>

Mockito provides some annotations to simplify writing test code.  @Mock and @InjectMocks are the two annotations I am going to use.   @Mock is used to annotate the object to be mocked.   In unit testing, when testing a method of an object that depends on another object, the dependent object needs to be mocked.  In my project there is a class called ProductController and another class called ProductService. ProductController uses ProductService to do the actual work of serving the requests that ProductController is supposed to handle.
@Controller
@RequestMapping("/products")
public class ProductController {
   
    @Autowired
    private ProductService productService;
   
    // Details are omitted for brevity
}

When I test ProductController I use a mocked ProductService so that it can simulate different responses from ProductService.   Using the @Mock annotation from Mockito, this can simply be done in my test class as below:
 @Mock
ProductService mockProductService;

The next step is to put this mock object into the object to be tested.  Mockito provides another very useful annotation, @InjectMocks, to do this. @InjectMocks annotates the object into which the mock objects are injected.   In my example it is ProductController, and I have the following:
@InjectMocks
ProductController productController;

But in order to make the mock object injection actually happen there is one more thing to do: invoke the MockitoAnnotations.initMocks method.  So my test class will be as below:
public class ProductControllerTest {
   
    @InjectMocks
    private ProductController productController;
   
    @Mock
    private ProductService mockproductService;
   
    @Before
    public void setup() {
       
        MockitoAnnotations.initMocks(this);
    }
}

The basic function of a mock framework is to return given results when a specific method is invoked.   In Mockito this is done using Mockito.when(...).thenReturn(...):
public class ProductControllerTest {
   
    @InjectMocks
    private ProductController productController;
   
    @Mock
    private ProductService mockproductService;
   
    @Before
    public void setup() {

        MockitoAnnotations.initMocks(this);

        List<Product> products = new ArrayList<Product>();
        Product product1 = new Product();
        product1.setId(new Long(1));
       
        Product product2 = new Product();
        product2.setId(new Long(2));
       
        products.add(product1);
        products.add(product2);
       
        Mockito.when(mockproductService.findAllProducts()).thenReturn(products);
    }
}

In the above example, when the findAllProducts method of ProductService is invoked, the mocked ProductService will return the list of Products specified before.


Put all together


Below is a code snippet that shows using Spring MVC Test and Mockito together to test a controller.

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(locations={"file:src/main/webapp/WEB-INF/mvc-dispatcher-servlet.xml",
                                 "classpath:/META-INF/applicationContextForTest.xml"})
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class})
public class ProductControllerTest {
   
    @Autowired
    private WebApplicationContext wac;
   
    private MockMvc mockMvc;

    @InjectMocks
    private ProductController productController;
   
    @Mock
    private ProductService mockproductService;

   
    @Before
    public void setup() {

        MockitoAnnotations.initMocks(this);

        List<Product> products = new ArrayList<Product>();
        Product product1 = new Product();
        product1.setId(new Long(1));
       
        Product product2 = new Product();
        product2.setId(new Long(2));
       
        products.add(product1);
        products.add(product2);
       
        Mockito.when(mockproductService.findAllProducts()).thenReturn(products);
        
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();

    }

    @Test
    public void testMethod() throws Exception {
       
        List<Product> products = new ArrayList<Product>();
       
        Product product1 = new Product();
        product1.setId(new Long(1));
       
        Product product2 = new Product();
        product2.setId(new Long(2));
       
        products.add(product1);
        products.add(product2);
               
        RequestBuilder requestBuilder = MockMvcRequestBuilders.get("/products");
       
        this.mockMvc.perform(requestBuilder).
                andExpect(MockMvcResultMatchers.status().isOk()).
                andExpect(MockMvcResultMatchers.model().attribute("Products", products)).
                andExpect(MockMvcResultMatchers.model().size(2)).
                andExpect(MockMvcResultMatchers.view().name("show_products"));
       

    }
}







Saturday, February 9, 2013

How Spring MVC works

MVC is a very good design pattern which is widely used in applications with a UI (desktop or web).   In this pattern M is the Model, V is the View and C is the Controller.   The advantage of the pattern is that it separates the responsibilities of dealing with the UI into three parts.  In general terms, the Model represents the business data and logic, the View is the visual part where the data in the Model is displayed, and the Controller is the link between Model and View.  The Controller is the brain that decides what data is used and how the data is viewed.    Each part in MVC has a clear and dedicated responsibility.   By using the MVC pattern, applications with a UI become easier to develop, modify and maintain.


Spring MVC is a web framework provided by Spring based on the MVC pattern.   The framework is built on top of Java EE/Servlet and is request-driven: servlets listen for incoming requests, and each request triggers the whole MVC process of serving it, at the end of which the data or resource the request asks for is presented.
The figure below shows the parts of Spring MVC in MVC terms.



In Spring MVC, DispatcherServlet and the controllers act as the Controller: they receive the requests, decide how each request will be served, and decide which view is used.  Business objects and domain objects are the Model: the business objects are invoked by the controllers and all the data lives in domain objects.   The domain objects are passed to JSPs, which are the View part.  The JSPs render the data in the domain objects.

DispatcherServlet

DispatcherServlet is the front controller in Spring MVC.   It is an HttpServlet that receives the request and returns the response.  DispatcherServlet is the key player in Spring MVC.  From the brief Spring MVC workflow below you can see that DispatcherServlet is the driving force that gets the request served and the right response sent back to the client.


  1. Receive the request from the client
  2. Consult the HandlerMapping to decide which controller processes the request
  3. Dispatch the request to the controller
  4. The controller processes the request and returns the logical view name and model back to DispatcherServlet
  5. Consult the ViewResolver for the appropriate View for the logical view name returned by the controller
  6. Pass the model to the View implementation for rendering
  7. The View renders the model and returns the result to DispatcherServlet
  8. Return the rendered result from the view to the client
It is obvious that DispatcherServlet has a heavy workload.  It needs many other strategy objects and configuration settings to perform this work.   Each DispatcherServlet has its own specialized ApplicationContext: a WebApplicationContext.
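
For reference, DispatcherServlet is registered in web.xml like any other servlet. A minimal sketch (the servlet name mvc-dispatcher is an assumption; by Spring's naming convention it makes DispatcherServlet load its WebApplicationContext from /WEB-INF/mvc-dispatcher-servlet.xml):

<!-- web.xml: DispatcherServlet registered as the front controller -->
<servlet>
    <servlet-name>mvc-dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>mvc-dispatcher</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>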


HandlerMapping


One important piece of work to be done before DispatcherServlet can dispatch the request to the controller is to find out which controller is the right one for this request.  DispatcherServlet uses a HandlerMapping strategy object to do so.   There are many HandlerMapping implementations, which use different strategies to map the request to a controller.   By default DispatcherServlet uses BeanNameUrlHandlerMapping and DefaultAnnotationHandlerMapping.

public interface HandlerMapping {
      HandlerExecutionChain getHandler(HttpServletRequest request) throws Exception;
}


Controller

After the mapping is resolved, DispatcherServlet dispatches the request to the controller.  The controller does the real work of processing the request, and it is where programmers spend most of their effort when developing Spring MVC applications.  The controller knows how to process the request and which logical view to use for the different outcomes of request processing.  Usually the controller doesn't do the real processing itself; it delegates the request processing to the service layer.   Another thing the controller does is package the result of the processing into the Model, which will finally be rendered in the View.
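
As a small illustration (this class is not in the original post), a minimal annotated controller could look like the sketch below. The /welcome mapping, the welcome_message attribute and the logical view name "welcome" mirror the test example in the earlier Spring MVC Test post:

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class WelcomeController {

    @RequestMapping("/welcome")
    public String welcome(Model model) {
        // Package the processing result into the Model ...
        model.addAttribute("welcome_message", "Welcome to use product: DELL Insprson");
        // ... and return the logical view name; DispatcherServlet asks the
        // ViewResolver to map "welcome" to a physical view such as a JSP
        return "welcome";
    }
}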


ViewResolver

After the controller finishes processing the request it returns the logical view name and the data to DispatcherServlet, which decides the actual view to be used, since the view name from the controller is only a logical name.   With the help of a ViewResolver strategy object, DispatcherServlet can find the physical view for the logical view name.   Similar to HandlerMapping, there are many different strategies for resolving the view, based on the different view technologies.    The most commonly used implementation of ViewResolver is InternalResourceViewResolver.

public interface ViewResolver {
      View resolveViewName(String viewName, Locale locale) throws Exception;
}


View

The View is where the data in the Model is rendered as the required output for the client.   Spring MVC provides many implementations of View to generate different outputs such as JSP, Excel, PDF, XML and so on.   DispatcherServlet invokes the render method of the selected View implementation to generate the output to be returned to the client.

public interface View {
      String getContentType();
      void render(Map<String, ?> model, HttpServletRequest request,  HttpServletResponse response) throws Exception;
}