Sunday, December 15, 2013

Win Free Copies of Instant Mock Testing with PowerMock

Here's some good news: I have teamed up with Packt Publishing to organize a giveaway celebrating the release of my book. Two lucky winners stand a chance to win an e-copy of Instant Mock Testing with PowerMock.

What will you get out of this book:

The Cover Page for Mock Testing With PowerMock
  • Understand how to unit test code using PowerMock, through hands-on-examples
  • Learn how to avoid unwanted behavior of code using PowerMock for testing
  • Explore the features of PowerMock through the selection of clear, practical, step-by-step recipes 
Read more about this book and download free Sample Chapter: http://www.packtpub.com/mock-testing-with-powermock/book

How to Enter the contest?

All you need to do is head on over to http://www.packtpub.com/mock-testing-with-powermock/book, look through the product description of this book, and drop a line via the comments below to let us know what interests you the most about this book.
It's that simple!

Deadline:
 
Like all good things, this offer does not last forever.  The contest will close on 31/12/2013. Winners will be contacted by email, so be sure to use your real email address when you comment.

Till we meet again, Merry Christmas and a Very Happy New Year!

Saturday, November 30, 2013

How to split ember handlebar templates into multiple files using node and grunt

Recently, I was working on a project which used EmberJs.  At the time of writing this post, EmberJs does not provide a way to split the Handlebar templates into multiple files.

This is OK to begin with, but soon enough, for a decently complex application, keeping all Views in one template file is just not maintainable.  Soon, I was hunting for a way to split the handlebar templates into multiple files to better maintain and manage them.

My search led me to various blogs and a lot of scattered information.  Eventually, I was able to find a decent solution to my problem.  In this post, I am going to demonstrate how exactly to solve this problem, in the hope that someone else will benefit from my findings.

How do they do it!

To split the handlebar templates into multiple files, I took the help of nodejs and an awesome node module called Grunt.  Here are the steps I followed to get the job done.

Installation

  • I am going to assume that you already have npm and nodejs installed on your machine.  If you don't, then please follow the steps mentioned in this link to get them. 
  • The next step is to create a package.json file which can be used by nodejs.
  • To create a package.json file use the following command.  It will ask some basic questions - please answer them. 
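The original command is not preserved above; with npm, the standard way to do this is:

```
$ npm init
```

Answering the prompts (name, version, and so on) produces a package.json in the current directory; npm init -y accepts all the defaults.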

  • This should create a default package.json file which can be used as a foundation for what we want to achieve.
  • Next thing we need to do is install a nodejs module called grunt.  This is an awesome task runner which has hundreds of plugins that help us perform  repetitive tasks like minification, compilation, unit testing, linting, etc.
  • The Grunt ecosystem is huge and it's growing every day.  You can use Grunt to automate just about anything with a minimum of effort.
  • To make Grunt work for us, we need to install grunt, grunt-cli, grunt-ember-templates and grunt-contrib-watch node modules.  Install them using the following command.    
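The command itself is not preserved above; a typical invocation would be the following (grunt-cli is often installed globally instead, but a local install also works via ./node_modules/.bin/grunt):

```
$ npm install grunt grunt-cli grunt-ember-templates grunt-contrib-watch --save-dev
```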

  • The grunt-ember-templates module will be used to pre-compile the handlebar templates.
  • The output of this plugin is a templates.js file which will hold all pre-compiled templates.  This can then be easily included in the EmberJs application.
  • The grunt-contrib-watch module exposes a task called watch.  This task can keep an eye on the file system for any changes and invoke the grunt-ember-templates task to recompile the handlebar templates and regenerate templates.js.  Don't worry if you don't get it at this point.  But trust me, it's extremely convenient to have the watch task, especially during development.
  • These are all the modules we will need to get the job done.
Creating the Gruntfile.js
  • For Grunt to do its job, it needs to know about the job at hand.
  • We need to inform Grunt about what is to be done; we do this using a file called Gruntfile.js
  • The contents of Gruntfile.js would look as follows
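The original Gruntfile.js is not reproduced here; the sketch below is a minimal version consistent with the description that follows (option names may vary slightly between grunt-ember-templates versions):

```js
module.exports = function (grunt) {
  grunt.initConfig({
    // Pre-compile all *.hbs files under templates/ into build/templates.js
    emberTemplates: {
      compile: {
        options: {
          // Strip the leading "templates/" from the registered template names
          templateBasePath: /templates\//
        },
        files: {
          "build/templates.js": ["templates/**/*.hbs"]
        }
      }
    },
    // Re-run emberTemplates whenever any template file changes
    watch: {
      files: ["templates/**/*.hbs"],
      tasks: ["emberTemplates"]
    }
  });

  grunt.loadNpmTasks("grunt-ember-templates");
  grunt.loadNpmTasks("grunt-contrib-watch");

  grunt.registerTask("default", ["emberTemplates"]);
};
```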

  • This file basically sets up two tasks called emberTemplates and watch.
  • emberTemplates - this task compiles all handlebar templates (*.hbs files) stored under the templates/ directory (and all its sub-directories) and generates a file called templates.js inside the build directory.
  • As explained earlier, the watch task keeps an eye on file system modifications to any of the handlebar template (*.hbs) files.  If these files have been modified, it invokes the emberTemplates task to recompile the handlebar templates and regenerate the templates.js file.
  • The default task for this Gruntfile.js is emberTemplates
  • Let's say we have two template files, index.hbs and login.hbs, their contents are as follows
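The original template contents are not shown above; any valid Handlebars markup will do, for example:

```handlebars
{{! templates/index.hbs }}
<h1>Welcome to the index page</h1>

{{! templates/login.hbs }}
<h2>Please log in</h2>
```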


Lights, Camera, Action!
  • All that is left to do is invoke the grunt command from the command line.
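That is, from the project root:

```
$ grunt
```

Grunt picks up Gruntfile.js from the current directory and runs its default task.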

  • This will compile the handlebar templates and generate the templates.js file.  
  • All the handlebar templates are already registered with EmberJs and are ready to be included in an EmberJs application.  
  • The name by which they are registered with EmberJs is the same as the name of the file that holds the template.

  • During development, we edit the handlebar templates quite often.
  • After editing them, we have to run the grunt command again to regenerate the templates.js file.  But as promised earlier, we will let the watch task do it for us.
  • Run the following command on the command line.
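The command referred to here is the watch task:

```
$ grunt watch
```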

  • This command will never return; it will keep an eye on any changes to the hbs files.
  • Let's update the login.hbs template as follows

  • Since the watch task is already running, it detects the change in the login.hbs and regenerates the templates.js.
  • The idea is to keep that command prompt running and continue editing the handlebar templates; templates.js will be seamlessly regenerated and kept updated for us!
  • Updated templates.js looks as follows.
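The generated file is not reproduced here; its shape (which varies with the grunt-ember-templates version) is roughly:

```js
// build/templates.js (illustrative; the real file contains the full
// compiled template functions)
Ember.TEMPLATES["index"] = Ember.Handlebars.template(/* compiled index.hbs */);
Ember.TEMPLATES["login"] = Ember.Handlebars.template(/* compiled login.hbs */);
```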

Isn't this awesome!  We got our modularity and the ease of development; it's like having your cake and eating it too!

Well that's all folks, until we meet again, have fun! 

Thursday, October 31, 2013

An Author Is Born - Instant Mock Testing with PowerMock

After quite a few long nights and working weekends, the book I wrote on PowerMock is finally published and is available to be bought here, here and here.  It's available in Paperback and almost all major eBook formats.

It’s a book for anyone who wants to learn/master mock testing with PowerMock using the Mockito API.  If you are keen on learning a mocking framework that can mock almost any class, PowerMock is the way to go.  Starting with a basic example of how to create a mock and verify a method invocation, Instant Mock Testing with PowerMock demonstrates various features of PowerMock using clear, practical, step-by-step recipes. 

Moving on from the basics, you will learn how to write unit tests for static, final, and private methods, and write flexible unit tests using argument matchers. Following on from this, you will also learn how to invoke real implementation of some methods and mock only a few methods of a class, using partial mocks or spies. This book will teach you all tricks of the trade and enable you to write good unit tests for a wide range of scenarios.

The Back and Front Cover of the Book
Any feedback would be greatly appreciated (if you want to buy it, please let me know, I can get some deep – pun intended :) – discounts using the Author discount code).

I would like to thank Jayway and the Java Community for a great mocking framework.
I cannot imagine finishing this book without the dedication and support of my loving family, who put up with long nights and working weekends for far longer than I had initially planned for.  

Lastly, I would also want to thank PACKT Publishing and their entire team, without whose efforts this book would have been impossible.  It would be inappropriate on my part, if I didn't extend my special thanks to Govindan, Sherin and Aparna from PACKT Publishing.
I would conclude this post with the preface of the book for your reference:

 Preface
PowerMock is an open source mocking library for the Java world. It extends existing mocking frameworks such as EasyMock (see http://www.easymock.org/) and Mockito (see http://code.google.com/p/mockito/) to add even more powerful features to them.
PowerMock was founded by Jayway (see http://www.jayway.com/) and is hosted on Google Code (see http://code.google.com/p/powermock/).
It has a vibrant community with a lot of contributors. Conscious efforts have been made to ensure that PowerMock does not reinvent the wheel. It only extends existing mocking frameworks and adds features that are missing from them. The end result is a mocking library that is powerful and is a pleasure to use.
Sometimes a good design might have to be tweaked to enable testability. For example, use of final classes or methods should be avoided, private methods might need to open up a bit by making them package visible or protected, and use of static methods should be avoided at all costs. Sometimes these decisions might be valid, but if they are taken only because of limitations in existing mocking frameworks, they are incorrect. PowerMock tries to solve this problem. It enables us to write unit tests for almost any situation.
What this book covers
Saying Hello World! (Simple) explains a basic mocking example using PowerMock. It will help us get familiarized with basic mocking and verification syntax.
Getting and installing PowerMock (Simple) demonstrates the steps for setting up PowerMock using IntelliJ IDEA and Eclipse. It also briefly describes other ways of setting up the PowerMock environment.
Mocking static methods (Simple) shows how effortlessly we can mock static methods with PowerMock. Most mocking frameworks have trouble mocking static methods. But for PowerMock, it's just another day at work.
Verifying method invocation (Simple) explains various ways in which we can verify a certain method invocation. Verification is an indispensable part of unit testing.  
Mocking final classes or methods (Simple) covers how easily we can mock final classes or methods. Mocking final classes or methods is something that most mocking frameworks struggle with. Because of this restriction, sometimes a good design is sacrificed.
Mocking constructors (Medium) introduces the art of mocking constructors. Is a class doing too much in its constructor? With PowerMock, we can mock the constructor and peacefully write tests for our own code.
Understanding argument matchers (Medium) demonstrates how to write flexible unit tests using argument matchers. Only verifying that a certain method was invoked is a job half done. Asserting that it was invoked with correct parameters is equally important.
Understanding the Answer interface (Advanced) demonstrates the use of the Answer interface, using which we can create some unusual mocking strategies. Sometimes mocking requirements are extremely complex, which makes it impractical to create mocks in the traditional way. The Answer interface can be used for such cases.
Partial mocking with spies (Advanced) explains the steps to mock only a few methods of a given class while invoking the real implementation for all other methods. This is achieved in PowerMock by creating spies.
Mocking private methods (Medium) covers the steps to mock and verify private methods. Private methods are difficult to test with traditional mocking frameworks. But for PowerMock, it’s a piece of cake.
Breaking the encapsulation (Advanced) shows how we can test the behavior of a private method and verifies the internal state of a class using the Whitebox class. At times, some private method might be performing an important business operation and we need to write unit tests for that method. The Whitebox class can be very handy in such situations.
Suppressing unwanted behavior (Advanced) explains how we can suppress unwanted behavior such as static initializers, constructors, methods, and fields.
Understanding Mock Policies (Advanced) demonstrates the use of Mock Policies to better manage the repeated code needed to set up mocks for a complex object.
Listening with listeners (Medium) demonstrates the steps to listen for events from the test framework. We might want to do some processing when the test method is invoked or create a report about how many tests were run, how many passed, how many failed, and so on. Listeners are a good fit for such requirements.

Sunday, September 29, 2013

How to generate UUID using Haskell - Part - 2

What is your favorite programming language?

Someone asked me this question about a month ago, and I suddenly realized that, while I can program in many languages, I didn't know of a language that I was truly in love with.  At that time I was just starting to learn Haskell.

After about a month of hacking on Haskell, I am in a better position to answer this question.  Haskell is an awesome language and certainly is currently my favorite language!

I would like to ask this question to my readers:

Are you always 100% sure that the code you have written will work without any problems if it compiles OK?

Ever had to fix your own code because of NullPointerExceptions at run time?

With Haskell the mantra is:

If it compiles, it will work!

On top of that, the features-delivered-per-line-of-code ratio is extremely high with Haskell, i.e. more can be achieved by doing less!

I would say learning Haskell is like falling in love: it takes time, but once you are in it, it's the most beautiful thing :)

Alright, enough Haskell marketing; let's come back to the main topic of this post.  In the previous post we saw one approach to generating UUIDs in Haskell using the system-uuid package; in this post we will look at yet another package that is well equipped to generate UUIDs.

How do they do it!

Haskell has a package called uuid that exposes a method to generate UUID values.  This package is a bit nicer than the system-uuid package, since it provides methods to convert UUID values to and from String values.

This package exposes methods like
  • toString - Converts UUID value to String value
  • fromString - Converts String value to UUID value
To install the uuid package use the following command
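Assuming cabal is already set up, that command is simply:

```
$ cabal install uuid
```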
Next, generating a UUID value using this package is pretty straightforward:
  • The module Data.UUID.V1 has a method called nextUUID which will return freshly generated UUID values, wrapped in the IO monad (just like system-uuid did)
  • However, this method also wraps the generated UUID value in the Maybe monad.
  • This is done to indicate that there is a possibility of failure while generating the UUID value.
  • If, for some reason, the UUID generation fails, Nothing will be returned.
  • Maybe is an extremely elegant way to handle the possibility of failure.
  • Once you have the UUID wrapped in the Maybe and IO monads, you can extract the UUID value using the fromJust function
  • Let's see these steps in action; let's fire up GHCi
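A GHCi session for this might look as follows (a sketch; the printed UUID is illustrative and will differ on every run):

```
$ ghci
Prelude> import Data.UUID (toString)
Prelude> import Data.UUID.V1 (nextUUID)
Prelude> import Data.Maybe (fromJust)
Prelude> fmap (toString . fromJust) nextUUID
"7ce88c72-26b8-11e3-8f62-0800200c9a66"
```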
This code snippet shows some extremely powerful features of Haskell, like Maybe and function composition.

That's all folks!  Till we meet again, Happy Hacking!

Saturday, August 31, 2013

How to generate UUID using Haskell - Part - 1

Well, well, what do we have here!  In my humble attempt to learn Haskell, I thought it would be great to document my findings.  This will not only help me understand the concepts better, but might also help others who are faced with a similar problem.

A UUID can be a great candidate when we want to generate some sort of random unique string.  Last night I was faced with just such a situation: I had to generate UUID values in Haskell.  I looked around and found two ways of doing it.  One way is pretty straightforward, and that's what we are going to cover in this post.

How do they do it!

Haskell has a package called system-uuid which can help us generate UUID values.  To install the package using cabal, run the following command
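That command is:

```
$ cabal install system-uuid
```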

For me, the installation failed the first time around.  The reason for the failure was that the system-uuid package generates UUIDs using native generators, and the Ubuntu system I was running on didn't have the uuid-dev package installed.  Here is the command to install the uuid-dev package on Ubuntu.
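On Ubuntu/Debian:

```
$ sudo apt-get install uuid-dev
```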
After this, if we run the cabal install command again, it should work.

Now, generating the UUID from the System.UUID.V4 module is pretty straightforward.
  • This module has a method called uuid which will return a UUID value wrapped in the IO monad, i.e. IO Data.UUID.UUID.  
  • What happens when we call this method more than once?  Will it return a different value or the same value every time?
  • Haskell is a pure language, which means the result of a function call is fully determined by the arguments passed to the function. 
  • The uuid function would be useless if it returned the same value when called multiple times.  Functions that return values wrapped in the IO monad can return different values when called multiple times.  A good explanation of how this is implemented can be found here.  This is what enables our uuid function to return a different value each time it is called.  
  • To see this in action, let's start GHCi, i.e. the Haskell interpreter
  • Load the module System.UUID.V4
  • Type uuid on the prompt.
  • A bunch of packages would be loaded and then the last statement should be the UUID value printed on the console.
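Putting those steps together, the session might look like this (a sketch; the printed UUIDs will differ on every call):

```
$ ghci
Prelude> import System.UUID.V4 (uuid)
Prelude System.UUID.V4> uuid
b7b32e2c-197e-42d6-9f52-d38ba8c7e8dd
Prelude System.UUID.V4> uuid
6c2f27d1-68b7-40b5-92d1-0e42ab2b7c30
```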
That's all folks! In the next post, I will explain how to generate UUID values without using the system-uuid package.

Sunday, July 28, 2013

Hibernate: How To load one-to-many collections using a custom query

Recently, on one of the user groups, one of my colleagues posted a question about loading one-to-many collections.  His requirement was quite unique compared to stock standard one-to-many collections.  They were using Hibernate as the ORM tool.

The Requirement:

I will try and explain the requirement using an example.
  • Let's say there are entities that need to store a set of attributes. 
  • Attributes are nothing but (key, value) pairs.  
  • Attributes could be associated with any class that needs to have attributes.  
  • For example, an Image can have attributes like its dimensions, its resolution, etc.  
  • While a Video might need to save information like its length and format.
We could argue here that both Image and Video are Asset's and an Asset can have Attribute's.  However, the point I am trying to make here is that there could be a totally unrelated class that needs to save attributes; for example, we could have Attribute's associated with a Car class.  There is really nothing in common between an Image and a Car.

Hence, for the scope of this post we will assume that attributes could be associated with almost any entity and these entities are not related to each other in any way.

So far so good, the unique part was how they saved the parent entity reference.  Let's have a look at some sample data:
Showing how information will be saved using the ATTRIBUTE_IDENTIFIER column

Notice the ATTRIBUTE_IDENTIFIER column value?

Yes, that's the most interesting part.  To identify which attribute is associated with which entity the reference is stored in the following format:

<Full Class Name of Entity>:<Entity ID>

Weird? Yes, Weird but very interesting!

If we were to design the system from scratch then, obviously, we would map the table a little differently, but more often than not we really have to live with what we have in hand.  So, given that we cannot alter the schema or store the information in any different way, the challenge was to map the Attribute class to the Image and Video entities so that we can achieve the desired result.

So much for clarifying the requirement, phew!

How do they do it!

I had to look around a bit and try out a few things before I could find the solution for this requirement.

Short Story:

Use the custom SQL query to load the one-to-many collection entity.

Long Story:

Without wasting any more time, let's look at the code.  The Image and Video entity classes would look as follows

They are mostly stock standard classes but a few things to notice:
  • They implement a convenience interface called AttributeProvider.  This interface is purely for convenience; it's not actually required (the code for the AttributeProvider interface is also shown above).  
  • Both Image and Video class have a collection of Attribute entities (i.e. they both have a one-to-many relationship with Attribute entity)
  • The method addAttribute adds the Attribute instance to the collection and sets the back reference to the parent entity in the Attribute class.  We will see how Attribute class handles this back reference in the next section.
 The Attribute class would look as follows:
There are a few things worth noticing about this class:
  • It does not map the parent entity (i.e. Image or Video), it declares a reference to AttributeProvider interface but marks it as @Transient.  This instance is only needed when we generate the value of attributeIdentifier for the first time while saving the Attribute via cascading effect.
  • It has a property called attributeIdentifier this will hold the value that will uniquely identify the entity associated with this Attribute.
  • The getter for attributeIdentifier implements the logic needed to generate the identifier.
    • It first checks if the property attributeIdentifier is not null; if so, it returns that value
    • Else, it checks if the AttributeProvider (i.e. the transient object) is not null; if so, it constructs the attributeIdentifier in the <Full Class Name of Entity>:<Entity ID> format.
    • In all other cases it returns null
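The entity code itself is not reproduced above; below is a plain-Java sketch of just the identifier logic described in these bullets. The Hibernate annotations and the other mapped properties are omitted, and the method names getId and setProvider are assumptions for illustration:

```java
// Convenience interface implemented by Image, Video, etc.
interface AttributeProvider {
    long getId();
}

class Attribute {
    // Mapped to the ATTRIBUTE_IDENTIFIER column.
    private String attributeIdentifier;

    // @Transient in the real entity: only needed the first time the
    // Attribute is saved via the cascade from its parent entity.
    private AttributeProvider provider;

    public void setProvider(AttributeProvider provider) {
        this.provider = provider;
    }

    public String getAttributeIdentifier() {
        if (attributeIdentifier != null) {
            // Value already loaded from the database.
            return attributeIdentifier;
        }
        if (provider != null) {
            // <Full Class Name of Entity>:<Entity ID>
            return provider.getClass().getName() + ":" + provider.getId();
        }
        return null;
    }
}
```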
The entities are done; let's look at the mapping hbm.xml files for these entities

The mapping file for Image would look something like this:
Note that:
  • Everything else looks extremely common; the only part that might be a little unique is the <loader /> tag
  • We are specifying a query-ref called loadImageAttributes in the loader tag.  This informs Hibernate that we want to load this one-to-many collection using the query identified by the name "loadImageAttributes" 
  • The key column specified in the mapping is called "ENTITY_ID".  Remember this column name; it's going to play an important role in the next part.
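The mapping file itself is not reproduced above; a minimal sketch consistent with these points would be the following (table and column names other than ENTITY_ID are assumptions for illustration):

```xml
<hibernate-mapping>
  <class name="com.gitshah.hibernate.test.Image" table="IMAGE">
    <id name="id" column="IMAGE_ID">
      <generator class="native"/>
    </id>
    <set name="attributes" cascade="all">
      <key column="ENTITY_ID"/>
      <one-to-many class="com.gitshah.hibernate.test.Attribute"/>
      <!-- Load this collection with our custom named query instead of
           the SQL Hibernate would normally generate -->
      <loader query-ref="loadImageAttributes"/>
    </set>
  </class>
</hibernate-mapping>
```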
The mapping file for Video:
Here the name of the loader query-ref is "loadVideoAttributes" and that is the only difference between the two mappings.

The mapping for Attribute:
Wow! This one has no mention of any of the parent entities; it only maps its basic properties, without any relations.  Moreover, notice that there is no mapping for the column "ENTITY_ID" (remember, this column was mapped as the key column for the one-to-many association in the Image-Attribute and Video-Attribute relationships).

How will the relationship between Image-Attribute and Video-Attribute work without this column?

The real magic happens in the loader queries that we are about to write.  The loader query for "loadImageAttributes":
Few interesting things about this query:
  • Role attribute of <load-collection /> tag needs to point to the collection which will be loaded using this query.  In our example we want to load the Image.attributes collection.
  • In addition to the other columns in the select clause we added another derived column called ENTITY_ID.
  • This column is the same column that we used while mapping the one-to-many association between Image and Attribute.
  • This column value is derived by removing the first 34 characters from the ATTRIBUTE_IDENTIFIER column
    • Why did we remove 34 characters?  How did we arrive at this number?
    • Let's recollect how the Attribute is stored.  The ATTRIBUTE_IDENTIFIER column will have a value like com.gitshah.hibernate.test.Image:1.  
    • To map it to an Image we need the Image ID.  The Image ID is stored after 34 characters (i.e. after "com.gitshah.hibernate.test.Image:", whose length is 34 characters) in the ATTRIBUTE_IDENTIFIER column
    • Hence, to get the entity ID for the Image entity, we strip off the first 34 characters from the value stored in the ATTRIBUTE_IDENTIFIER column.
  • The where condition constructs the ATTRIBUTE_IDENTIFIER value using the formula <Full Class Name of Entity>:<Entity ID>
  • We only know the Full Class Name of the Entity (in this example com.gitshah.hibernate.test.Image), and the ID of each Image instance would differ; because of this, we cannot construct the value of ATTRIBUTE_IDENTIFIER completely.  
  • We let Hibernate fill in the ID value of the Image for us at the run time using a named parameter :imageId.  
  • At run time when Hibernate needs to load Attribute's for Image with ID=9, it will automatically bind the named parameter :imageId to the value 9.
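The query is not reproduced above; a sketch matching the description would look something like the following (the SUBSTR syntax is database-specific, and the alias and table names are assumptions for illustration):

```xml
<sql-query name="loadImageAttributes">
  <load-collection alias="attr"
      role="com.gitshah.hibernate.test.Image.attributes"/>
  SELECT {attr.*},
         SUBSTR(attr.ATTRIBUTE_IDENTIFIER, 35) AS ENTITY_ID
  FROM   ATTRIBUTE attr
  WHERE  attr.ATTRIBUTE_IDENTIFIER =
         'com.gitshah.hibernate.test.Image:' || :imageId
</sql-query>
```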
We are almost done.  Let's look at the "loadVideoAttributes" query.
It looks almost exactly like the previous query; the only change is that all references to Image have been replaced by Video.

That's it!  We are all set to roll.  Let's test this out.
If we run the above code we would see the following queries.
As expected the information is saved correctly.

The next test will try to fetch the Image and Video and print their attributes
This test simply loads all the AttributeProvider's and prints the attributes associated with them.  If we run the above test we should see an output similar to this:
That's all folks!  We have achieved the desired result.

PS: I tried doing this with the @Loader annotation, but it looks like there is a bug in Hibernate because of which it throws a NullPointerException.  But the fact remains that something as unique as this requirement was possible using Hibernate without too much trouble, which is totally AWESOME!

+1 for Hibernate!

Saturday, June 29, 2013

How To Use IIS7 as Front End to Java Web Servers Like Tomcat and Jetty

While working on a project, I was faced with a situation where, I needed to use IIS server as the front end for the Jetty web server. 

I had a web application developed in Java running on Jetty.  It was time to host it on a public server.  I already had a hosting provider and a hosting plan set up (from one of my previous projects); the only problem was that this was a Windows 2012 server running IIS8 on it.  There was already another .NET website hosted on IIS on this server.  I just wanted to add my web application written in Java to this setup.  Well, I know, not a very ideal situation, but I needed to get this done and get it done fast enough!

I thought for a few moments about what my options were here
  • I could install Apache or some other HTTP server and use it to front my Jetty
  • I could expose Jetty or some other Servlet Container directly to the world (note that this was not a mission critical application with huge load and ultra high availability or anything like that)
  • I could use IIS to front my Jetty server
As I mentioned earlier, I needed to get this done as fast as I could.  So out of all the options that came to my mind, last one sounded like the best or at least the fastest option.

I googled around a bit to find out what the least painful way to front Jetty with IIS was.  Numerous options surfaced; I tried a few, like using the JK Connector with an ISAPI filter, but the one that I ended up using was Application Request Routing (ARR).  ARR is a free Web extension for IIS 7+.  

It was pretty straightforward to configure ARR on IIS 7+ to front any Java Based web server (or any server for that matter).  In this post I am going to demonstrate the steps needed to configure ARR on IIS7 so that IIS can be used to front any Java based web server.

How Do They Do It?

The process can be divided into 3 easy steps.
  • Install ARR
  • Configure ARR
  • Test the setup
Step - 1 - Install ARR

ARR is available as a free download from  http://www.iis.net/downloads/microsoft/application-request-routing URL.  Download and install this on the machine that has IIS 7+.

After the installation if you open up the IIS Manager (inetmgr) tool, you should see a new section called Server Farms
Server Farm Section Added to IIS

As the name suggests, you can do a lot more (like setting up server farms) with ARR, but for now let's just focus on using IIS to front Jetty.

Step - 2 - Configure ARR
  • This is where the meat of the solution lies.  Let's start by creating a Server Farm.  Right click on the Server Farm section and select Create Server Farm...
Create Server Farm Section
  • This should open up a wizard that will ask you some details about the Server Farm.  First step is to give a name to the server farm.  In my setup both the IIS and Jetty were on the same physical machine, I called the server farm localhost.  You can call it Blah or Lady Gaga or Tom Hanks or anything else, just go crazy!
Give a name to the server farm
  • The next step will ask for a few more details about the server to which we need to forward requests.  Just enter the IP address of your machine.  Please also make sure that you change the port number in the configuration to match the port number where Jetty is listening for requests.  The port number settings are hidden under the Advanced Settings... link.  In my case Jetty was listening on port 8085.
Provide IP address with port of server where we need to forward the requests
  • Once you are done changing the port number, click Add on the same dialog.  This should add the server to the table below with status as Online.  Once the server is added, you should be all set to Click Finish on the wizard.
Server added to the farm
  • Another confirmation popup will appear stating that IIS Manager can create a URL rewrite rule to route all incoming requests..; just click No here.  We will add the rule ourselves.
  • After clicking Finish, you should have a view very similar to this one
Server Farm Added
  • On this page click the Routing Rules icon.  This should take you to a page where we can configure the Routing Rules.  On this page make sure that Use URL Rewrite to inspect incoming requests is checked.  If it is unchecked, then check it and click Apply on the right side.
Check Use URL Rewrite to inspect incoming requests
  • Click on the URL Rewrite... link under the Advanced Routing section on the right side.  This would open up a page similar to this one.
URL Rewrite Page
  • Click on Add Rule(s)... link on the right side.  On clicking the link it would ask you about what type of rule template you want to start with, just select the Blank rule under the Inbound rules section and click OK.
New Blank Rule
  • Next comes a longish page where we need to configure our rule.  Do not worry, it's pretty straightforward.
New Rule Form
  • Just enter the name of the rule as GitShah (again, go crazy here!). Change the Using drop down value from Regular Expressions to Wildcards.  Now the only thing left is to enter a Pattern.
Rule name with Wildcard selected
  • Click the Test pattern next to the Pattern text box.  You should be presented with a dialog where we can test the URL pattern.  
  • Basically, in this dialog we need to inform ARR about which URLs it should match and where it should forward the requests.  In my case, I wanted to forward any request coming to http://localhost/gitshah to http://localhost:8085/gitshah (served by Jetty).  Hence, I wanted to forward any request that starts with /gitshah to the Jetty server.  
  • This can be done by entering /gitshah* into the Pattern (wildcard) text field.  To test whether our understanding is correct, just enter a few patterns in the Input data to test: text box and click the Test button.
Testing the pattern
  • You will notice that the table below shows two rows, {R:0} and {R:1}; they are nothing but the parts of the test URL that matched the pattern.  {R:0} contains the entire matched string and {R:1} holds the part of the URL captured by the wildcard.  Click Close when you are satisfied with the pattern.  It will ask you whether you want to save the pattern; click Yes.
  • Next, you only need to change one more thing on this page: change the Action type drop down value from Rewrite to Route to Server Farm.
Action value changed to Route to Server Farm
  • That's about all the configuration we need.  Click Apply on the right side and navigate Back to Rules.  You should see a view similar to this one
New rule created
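As an aside, the {R:0}/{R:1} capture behaviour described in the steps above can be sketched with plain shell pattern matching.  The URL and pattern below are just the examples from this post; this is an analogy for how the wildcard capture splits the URL, not how ARR is implemented:

```shell
# /gitshah* : {R:0} is the whole matched string, {R:1} is what the * captured.
url="/gitshah/index.html"
pattern="/gitshah"
case "$url" in
  "$pattern"*)
    r0="$url"               # {R:0} -- the entire matched string
    r1="${url#"$pattern"}"  # {R:1} -- the remainder captured by the wildcard
    echo "{R:0} = $r0"
    echo "{R:1} = $r1"
    ;;
esac
```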
 Step - 3 - Test the setup

All we now need to do is navigate to the URL http://localhost/gitshah/.  If everything is configured correctly (and luck is on your side!), we should see a page that is served from the Jetty web server.
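If you prefer the command line, a quick smoke test works too.  This is a hypothetical check (it assumes IIS is listening on port 80 with the /gitshah* rule from the previous step in place):

```shell
# Print the HTTP status code for the proxied path; fall back to a
# message if nothing is listening on port 80.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/gitshah/ \
  || echo "request failed -- is IIS running?"
```

A 200 here means the request travelled through IIS, matched the rule, and was answered by Jetty.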
Page served by IIS in Front of Jetty
Success! 

As we can see, it's pretty simple to use IIS as a front end to any Java web server with ARR.  That said, it might not be the best way of doing it.  Depending on your scenario, please choose the optimal option.

Sunday, May 26, 2013

How to fix the out-of-space problem in VirtualBox with large unused hard disk space

Increasing the size of a VirtualBox disk has become much easier than before; it is directly supported in VirtualBox 4.0+.  Sometimes, however, the VM has been allocated a huge virtual disk, but the usable storage does not seem to expand automatically even if we have specified the "dynamically allocated storage" option.

This occurs mainly because the main partition is followed by a smaller Linux SWAP partition.  This small partition behaves like a boundary between the main partition and the empty unallocated space after the SWAP partition.

VirtualBox is not smart enough to grow the main partition past this small partition.  In such a scenario, we have to move the swap partition a little and allocate more space to the main partition manually.

In this post we are going to look at a way to manually allocate more space to the main partition using a tool called GParted.

General Disclaimer: Before doing anything mentioned in this post, please take a backup of your virtual machine, because if something goes wrong it can cause a lot of damage to your data, and the author cannot be held responsible for that :)

How Do They Do It?

These are the steps involved in manually allocating more space to the virtual hard drive.

Resize the logical capacity of the virtual disk if required

This step is only required if the logical capacity of the virtual disk is smaller than what you actually want.  VirtualBox 4.0+ directly supports resizing the logical size of the virtual disk.  Run the following commands on the host OS to resize the logical capacity of the virtual disk.
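A sketch of the first command, using VBoxManage's showhdinfo subcommand (the .vdi path here is hypothetical, so adjust it to your own VM):

```shell
# Hypothetical disk path -- replace with the .vdi file of your VM.
VDI="$HOME/VirtualBox VMs/Ubuntu/Ubuntu.vdi"
if command -v VBoxManage >/dev/null 2>&1; then
  # Prints the disk's format, location, and current logical capacity.
  VBoxManage showhdinfo "$VDI" || echo "could not read disk info"
else
  echo "VBoxManage not found -- is VirtualBox installed on the host?"
fi
```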
This command should show you details about the virtual disk.

As we can see, the current logical size of the disk is around 108 GB.  Let's increase the logical size from 108 GB to around 500 GB.  Run the following command to achieve this.
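A sketch of the resize, again with a hypothetical .vdi path (on VirtualBox 4.x the relevant subcommand is modifyhd):

```shell
# 500 GB expressed in MB; --resize takes the absolute new size in MB, not a delta.
NEW_SIZE_MB=$((500 * 1024))
echo "New logical size: ${NEW_SIZE_MB} MB"
# Hypothetical path -- replace with your VM's .vdi file before running:
# VBoxManage modifyhd "$HOME/VirtualBox VMs/Ubuntu/Ubuntu.vdi" --resize "$NEW_SIZE_MB"
```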
Remember that the parameter to the --resize option is the absolute new size in MB; it is not a delta.

After running this command, if we run the showhdinfo command again, we should see that the change has taken place.
As we can see the logical size of the virtual disk has been increased from around 108 GB to 500 GB.  We have achieved what we intended to do in this step.  Let's move on with the next step.

Download and mount the GParted ISO

Download the GParted ISO from here.  We don't need to burn it to a disk; just keep the downloaded ISO on the host OS.

Open the VirtualBox GUI and open the settings of the desired VM.  Under the Storage section click the "Add CD/DVD device" option.  A dialog will open up asking whether you want to keep the CD/DVD drive empty or choose an ISO to mount.  At this step click "Choose Disk" and mount the downloaded GParted ISO.
Adding the CD/DVD Drive to the VM
Now navigate to the System --> Motherboard section of the Settings dialog and make CD/DVD the first option in the Boot Order.  This will make sure that GParted gets a chance to load before the guest OS boots.
Making Sure that Boot from CD/DVD is the first option

We are all set; now let's do the actual resizing.

Boot the GParted

Now start the VM.  Since the GParted ISO is mounted on the CD/DVD drive, it will show an option to boot GParted Live.  Just press Enter on "GParted Live (Default Settings)".

The GParted Live Boot Menu
After a few anxious moments you should see a screen similar to this one.  Do not change anything; just click OK.
Don't change anything on this screen, click OK
The next screen lets you select the language (English is selected by default).  If you want to change it, enter the number against the language name and then press Enter.
Choose Language
The next screen will let you choose the mode in which GParted runs.  Unless you call yourself some sort of Linux ninja, choose the "(0) Continue to start X to use GParted automatically" option, which is selected by default.
Mode to start GParted; select 0
If everything goes as per plan, you should see a screen similar to the one shown below.
The Home screen of GParted
This means that we are all set up to actually resize the partitions.

Resizing the main partition

As shown in the image, the main partition is about 39 GB, and it's followed by a smaller Linux SWAP partition.  After the Linux SWAP partition there is a huge unallocated space.  Our next task is to increase the size of the main partition.  This cannot be done by simply resizing the main partition, since it's blocked by the Linux swap partition.  Then how should we go about it?  Well, patience people, patience!

First we will need to move the SWAP partition a bit.  We have to move it in such a way that there is enough unallocated space between the main partition and SWAP partition.

Select the SWAP partition and click the "Resize/Move" button at the top.  A dialog will be shown that lets us move the SWAP partition.
Dialog to resize SWAP partition
Using the mouse, move the SWAP partition towards the right side, or change the "Free space preceding (MiB)" value to the desired amount.
Moving the SWAP partition towards right

After you are satisfied with the size click the "Resize/Move" button.  This will show a warning dialog like this one, just click OK.
Warning informing about the side effects of moving the SWAP partition
Now the partitions may look something like this
Notice the Free Space between Main and SWAP partitions.
As we can see, there is now some unallocated space between the main partition and the SWAP partition.  This unallocated space can now be taken up by the main partition.  Click on the main partition and then click the "Resize/Move" button.
Now there is space to expand the Main partition.
This time around, drag the right side of the partition all the way so that it occupies the entire unallocated space.
Expand the Main partition to occupy the entire unallocated space
Click "Resize/Move" and click OK on the warning dialog.
Once you come out of the Resize dialog, the disk should look something like this
Updated disk sizes, notice that the Main has expanded
Click Apply.  This might take a while depending on how much data is already present in your main partition and how much you want to expand it.  It might take anything from 2 minutes to 2 hours; be patient, grab a cup of coffee or some lunch.
Doing the processing when Apply is clicked
Once this completes you should see the summary of the changes done
Done Applying the Change
That's it, we have now resized our main partition.  All that is left to do is click Exit on the GParted menu and shut down the VM.
Shutdown GParted
Before rebooting the VM change the Boot Order again so that Hard Disk is the first option.
Making Hard Disk as the first option to boot from
Once the VM is booted you can check the disk size using the following command
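For a Linux guest, the usual check is df; after the resize, the root filesystem should report the larger size:

```shell
# Show human-readable filesystem sizes for the root partition.
df -h /
```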
That's all folks!  Until we meet again, Have Fun!
Have some Fun!