Using Red Hat Application Migration Toolkit to see the impact of migrating to OpenJDK

“This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.”

Migrating from one software solution to another is a reality that all good software developers need to plan for. Having a plan helps to drive innovation at a continuous pace, whether you are developing software for in-house use or you are acquiring software from a vendor. In either case, never anticipating or planning for migration endangers the entire innovation value proposition. And in today’s ever-changing world of software, everyone who wants to benefit from the success of the cloud has to ensure that cloud innovation is continuous. Therefore, maintaining a stack that is changing along with technological advancements is a necessity.

In this article, we will look at the impact of moving to OpenJDK; the results will help in drawing further conclusions and in planning. It is quite common to be using a proprietary version of the JDK, and this article shows how to use Red Hat Application Migration Toolkit to analyze your codebase and understand the impact of migrating to OpenJDK.

Continue reading

Getting started with JBehave in 8 steps

This post is about JBehave and how to quickly get started with it. If you would like to know more about BDD, please use the following link.
What is Behavioral Driven Development?

Today I used JBehave for the first time. It does have some convincing factors, for instance dividing requirements into scenarios, which map quite nicely to the tests written within the Steps. It therefore seems easier for stakeholders to use it as a guideline for the initial requirements. It is quite usual to run into disagreements during development, but the tool helps stakeholders, who really don't have to get into writing code, communicate with the developers through a shared technical vocabulary: scenarios.

So, first things first.
1. Create a Java project in Eclipse.
2. Download the JBehave latest version from the website and add the jar files to your project build classpath
3. Create a scenario file and name it user_wants_to_shop.scenario

Add the following text to the file

  Given user Shaaf is logged in
  When user Shaaf is allowed to shop
  Then show Shaaf his shopping cart

The lines above simply state the rules for the scenario we are about to create.

4. Now we will create a Java class mapping to it.

Create a new class whose name corresponds to the .scenario file.

Add the following code to it

  import org.jbehave.scenario.PropertyBasedConfiguration;
  import org.jbehave.scenario.Scenario;
  import org.jbehave.scenario.parser.ClasspathScenarioDefiner;
  import org.jbehave.scenario.parser.PatternScenarioParser;
  import org.jbehave.scenario.parser.ScenarioDefiner;
  import org.jbehave.scenario.parser.UnderscoredCamelCaseResolver;

  public class UserWantsToShop extends Scenario {

      // The usual constructor with default configuration.
      public UserWantsToShop() {
          super(new ShoppingSteps());
      }

      // Mapping .scenario files to the scenario. This is not the default.
      public UserWantsToShop(final ClassLoader classLoader) {
          super(new PropertyBasedConfiguration() {
              public ScenarioDefiner forDefiningScenarios() {
                  return new ClasspathScenarioDefiner(
                      new UnderscoredCamelCaseResolver(".scenario"),
                      new PatternScenarioParser(this), classLoader);
              }
          }, new ShoppingSteps());
      }
  }
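The UnderscoredCamelCaseResolver above is what ties the class name to the scenario file name. As a rough illustration of that convention (this is my own sketch, not the actual JBehave resolver code), the mapping can be expressed like this:

```java
// Sketch of the naming convention used by UnderscoredCamelCaseResolver:
// a Scenario class name maps to an underscored file name plus the given
// extension, e.g. UserWantsToShop -> user_wants_to_shop.scenario.
public class ScenarioNameDemo {

    static String toScenarioFileName(String className, String extension) {
        StringBuilder sb = new StringBuilder();
        for (char c : className.toCharArray()) {
            if (Character.isUpperCase(c)) {
                // Each new camel-case hump becomes an underscore boundary.
                if (sb.length() > 0) sb.append('_');
                sb.append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.append(extension).toString();
    }

    public static void main(String[] args) {
        System.out.println(toScenarioFileName("UserWantsToShop", ".scenario"));
    }
}
```

This is why the class must be named UserWantsToShop when the scenario file is user_wants_to_shop.scenario.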

5. Now that we have a scenario, we need somewhere to put the logic for the steps in the scenario. So we create a new Steps class.

  import org.jbehave.scenario.annotations.Given;
  import org.jbehave.scenario.annotations.Then;
  import org.jbehave.scenario.annotations.When;
  import org.jbehave.scenario.steps.Steps;

  public class ShoppingSteps extends Steps {
  }

6. Now you can run the scenario by running it as a JUnit test.

The output should be the following:

  Scenario:

  Given I am not logged in (PENDING)
  When I log in as Shaaf with a password JBehaver (PENDING)
  Then I should see my "Shopping Cart" (PENDING)

This means that none of the scenario steps have been implemented yet. But the test result should be green, showing that there are no problems with our setup.

Now let's add the implementation to the tests.

7. Add the body to the ShoppingSteps. I have added comments to the methods.

  // Given that a user (param) is logged in
  @Given("user $username is logged in")
  public void logIn(String userName) {
      checkInSession(userName);
  }

  // Check if the user is allowed on the server
  @When("user $username is $permission to shop")
  public void isUserAllowed(String userName, String permission) {
      getUser(userName).isUserAllowed().equals(permission);
  }

  // Finally, show him his shopping cart.
  @Then("show $username his shopping cart")
  public void getMyCart(String userName) {
      getUserCart(userName);
  }

8. Now try to run the scenario again; the steps should all match and no longer be pending. I have not implemented the actual helper methods, so those will still need to be filled in. :)
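The steps above call helpers such as checkInSession, getUser and getUserCart that are left unimplemented in this post. A minimal in-memory sketch of what they might look like (the class name, the "allowed" permission value, and the behavior are all my assumptions, not part of the original code) could be:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical backing class for the ShoppingSteps helpers.
public class ShoppingBackend {

    static class User {
        final String name;
        final String permission;
        final List<String> cart = new ArrayList<>();

        User(String name, String permission) {
            this.name = name;
            this.permission = permission;
        }

        // Mirrors the getUser(userName).isUserAllowed() call in the steps.
        String isUserAllowed() {
            return permission;
        }
    }

    private final Map<String, User> sessions = new HashMap<>();

    // "Given user X is logged in" -> put the user in session.
    void checkInSession(String userName) {
        sessions.putIfAbsent(userName, new User(userName, "allowed"));
    }

    User getUser(String userName) {
        return sessions.get(userName);
    }

    // "Then show X his shopping cart" -> return the user's cart.
    List<String> getUserCart(String userName) {
        return getUser(userName).cart;
    }
}
```

With something like this in place, the pending steps become real assertions against application state.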

Logging with log4j: isDebugEnabled

I have often seen the question pop up of whether to use the isDebugEnabled property or not, almost always in the context of performance. Here is what I feel is important about using it.

The answer is simple: it is made to be used. However, it has to be used with caution.

For instance, say I am using the following line in my code.

  log.debug("I am there");

That's an example of good practice.

However, I can really blow it out of proportion if I do the following.

  // Not good practice
  if (log.isDebugEnabled()) {
      log.debug("I am there");
  }

And why exactly is that so? Because the debug method in the Category class (which Logger extends in the log4j libraries) already checks whether the logging level is enabled.

An extract from the class org.apache.log4j.Category is below:

  public void debug(Object message) {
      if (repository.isDisabled(Level.DEBUG_INT))
          return;
      if (Level.DEBUG.isGreaterOrEqual(this.getEffectiveLevel())) {
          forcedLog(FQCN, Level.DEBUG, message, null);
      }
  }

Okay, so that's the case where we say don't use it. What about the one where we actually do use the isDebugEnabled property?

For instance, suppose you have an expensive parameter going into the debug call.

  // Bad practice
  log.debug("XML " + Tree.getXMLText());

In such a case it can clearly be said: "No, don't do it!" If Tree.getXMLText() is going to pull all the hair out of the app, there is no point: the argument string is constructed first, and if the debug level is not enabled, that construction is a pure performance cost.

  // Good practice
  if (log.isDebugEnabled()) {
      log.debug("XML " + Tree.getXMLText());
  }

Thus, in the example above, it is wiser to use log.isDebugEnabled() to make sure the cost stays low.
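The point above can be demonstrated without log4j at all. The following self-contained sketch uses a tiny stand-in logger (my own stub, not the real log4j API) whose debug method performs the same early exit as Category.debug, and counts how many times the expensive argument is actually built:

```java
// Demonstrates why guarding an expensive argument matters: the early exit
// inside debug() cannot prevent the argument from being constructed first.
public class DebugGuardDemo {

    static int expensiveCalls = 0;

    // Stand-in for Tree.getXMLText(): costly to build.
    static String expensiveXml() {
        expensiveCalls++;
        return "<tree>...</tree>";
    }

    // Minimal stub mimicking log4j's level check; not the real library.
    static class StubLogger {
        private final boolean debugEnabled;

        StubLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }

        boolean isDebugEnabled() { return debugEnabled; }

        void debug(Object message) {
            if (!debugEnabled) return; // same early exit Category.debug does
            System.out.println("DEBUG " + message);
        }
    }

    public static void main(String[] args) {
        StubLogger log = new StubLogger(false); // debug is off

        // Unguarded: the message is still built, then discarded inside debug().
        log.debug("XML " + expensiveXml());
        int unguarded = expensiveCalls;

        // Guarded: the expensive call never happens at all.
        if (log.isDebugEnabled()) {
            log.debug("XML " + expensiveXml());
        }
        int guarded = expensiveCalls - unguarded;

        System.out.println("unguarded cost: " + unguarded + ", guarded cost: " + guarded);
    }
}
```

Running it shows the unguarded call pays for the XML construction even with debug disabled, while the guarded call pays nothing.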

Keep it simple, short and stupid (KISS)

“everything should be made as simple as possible, but no simpler” – Albert Einstein

I am sitting in a design room today, designing a simple requirement for a client. He wants me to enhance the transfer of money from one account to another. It just happens that my best friend Yuky is sitting right next to me. He is damn good at maths and has all the calculations at his fingertips. I am sure that if I asked him the angle of sight to the light on the wall, he could tell me in a few seconds, looking with 10 eyes at his 12 fingers, and trust me, it would be correct.

While I cannot ignore his brilliance, and I admire it and am often inspired by it, I don't agree with it all the time!

And this is what I mention here today: KISS (Keep it simple, short and stupid).

Rule #1. Don't try to be a super genius.

Rule #2. Don't overdo it!

Rule #3. Break down your problems.

Rule #4. Common sense has to prevail.

Rule #5. Keep your mess small; don't litter around the park.

Most importantly, keep everything short, simple and stupid.

I recently had to deliver a short inspirational presentation on Test Driven Development, and especially about KISS. Attached is some of the material; feel free to explore.

What is the KISS principle?

Could it be any simpler? :)

Automation with Selenium, JUnit and Ant

What is Selenium
How does Selenium work
History and how it started
About Ant
And Junit

The technologies above need no introduction if you already know them, or you can read about them from the links above.

Moreover, today's article is about how we can use all three (Selenium, Ant and JUnit) to come up with an automated solution for regression testing.

Please refer to the documentation links above for the basic knowledge on any of the used tools.

I am assuming that you know how to record a test in Selenium; if you don't, then go here.

Now that you know how to record and save tests, simply save them in any directory structure you like, as long as they follow the right package convention. For example, a test case declared in package com.foo.bar should sit in a directory like com/foo/bar; that is how the TestCase is compiled through the ant script. If the package does not match the path, compile-time errors will occur.

My calling target for running the whole procedure should look like this

<target name="build_tests" depends="clean, compile, start-server, tests, stop-server" />

Details of the dependent targets are as follows.
clean: cleans the entire build directories, etc., to make sure nothing from our past is carried forward into the future.

compile: should compile

start-server: should start the selenium server.

tests: should run the selenium tests.

stop-server: should stop the selenium server once all tests have been executed.

How to start a selenium server from Ant?

    <target name="start-server">
        <java jar="lib/selenium-server.jar" fork="true" spawn="true">
            <arg line="-timeout 30" />
            <jvmarg value="" />
            <jvmarg value="-Dhttp.proxyPort=44444" />
        </java>
    </target>


The target above simply takes the selenium-server.jar, which is on the classpath, and runs the main class from it. The proxy parameters are optional.

How to stop the server from Ant?

    <target name="stop-server">
        <get taskname="selenium-shutdown"
            src="http://localhost:4444/selenium-server/driver/?cmd=shutDown"
            dest="result.txt" ignoreerrors="true" />
        <echo taskname="selenium-shutdown" message="DGF Errors during shutdown are expected" />
    </target>

The Selenium server starts on port 4444 by default, and telling it to shut down is simple: the command cmd=shutDown is passed to it in the src URL.
That should shut down the server.

Executing the test cases?
Selenium test cases are JUnit test cases, and that's how they are treated in this tutorial.
Thus some of you will be very familiar with the ant targets junit and junitreport; nevertheless, I will describe how the following works.
The following target will run the Selenium tests and print a summary report to ${dir}.
The includes and excludes simply tell the target which files to include or exclude while running the test cases. This is typically done when you don't want some test cases to be included, or when your test sources and application sources are in the same directory and you only want to include something like Test*.class.

  <target name="tests" depends="compileonly" description="runs JUnit tests">
    <echo message="running JUnit tests" />
    <junit printsummary="on" dir=".." haltonfailure="off" haltonerror="off"
           timeout="${junit.timeout}" fork="on" maxmemory="512m" showoutput="true">
      <formatter type="plain" usefile="false" />
      <formatter type="xml" usefile="true" />
      <batchtest todir="${testoutput}" filtertrace="on">
        <fileset dir="${src}">
          <includesfile name="${tests.include}" />
          <excludesfile name="${tests.exclude}" />
        </fileset>
      </batchtest>
      <classpath>
        <pathelement path="${classes}" />
        <pathelement path="${build.classpath}" />
      </classpath>
    </junit>

The following will take the formatted output from the lines above and generate a report from it, in XML and HTML, placing the results in ${reports}/index.html.
A sample JUnit test report might look like this.

    <echo message="running JUnit Reports" />
    <junitreport todir="${reports}">
      <fileset dir="${reportdir}">
        <include name="Test*.xml" />
      </fileset>
      <report format="frames" todir="${reports}" />
    </junitreport>
    <echo message="To see your JUnit results, please open ${reports}/index.html" />
  </target>

In general, you can add all of this to your nightly build through any CI server, such as CruiseControl. As a general practice you will also need to do a little more than just executing this target every night, depending on your application: for instance, cleaning up resources such as databases.

Calling wsadmin scripts from ant

You can simply add the following to a target.
For this to work, wsadmin should be in your PATH environment variable.

<exec dir="." executable="wsadmin.bat" logError="true" failonerror="true" output="wsconfig.out">
    <arg line="-lang jython -f ../../" />
</exec>

All output will be logged to wsconfig.out


Generating XML from sqlplus with dbms_xmlgen

On my way to my solution store, I just found this nice-to-use, old and easy feature.
The possibilities are endless, and usage is typically very easy.

I used the following to generate XML from sqlplus:

 select dbms_xmlgen.getxml('select * from tab') from dual;


 <ROWSET>
  <ROW>
   <TNAME>Employee</TNAME>
  </ROW>
 </ROWSET>

Command, Singleton, JMenuItem, JButton, AbstractButton – One Listener for the app

Here I would like to demonstrate a simple use of JMenuItems with a single listener for the entire system.
A simple example of its use would be a single-instance desktop application.

Let's see how that is done.

1. First, let's create a OneListener class that can listen to ActionEvents and can also have Commands added to it. Please refer to my previous post on Command and Singleton if you would like to see more about these patterns and their usage.

  package com.shaafshah.jmenus;

  import java.awt.event.ActionEvent;
  import java.awt.event.ActionListener;
  import java.util.ArrayList;

  import javax.swing.AbstractButton;

  // Implements the ActionListener and is also a Singleton.
  public class OneListener implements ActionListener {

      private static OneListener oneListener = null;

      // Holds the list of all commands registered to this listener
      private ArrayList<Command> commandList = null;

      // A private constructor
      private OneListener() {
          commandList = new ArrayList<Command>();
      }

      // Ensuring one instance.
      public static OneListener getInstance() {
          if (oneListener != null)
              return oneListener;
          else
              return oneListener = new OneListener();
      }

      // Add the command and add myself as the listener
      public void addCommand(Command command) {
          commandList.add(command);
          ((AbstractButton) command).addActionListener(this);
      }

      // All events hit here.
      @Override
      public void actionPerformed(ActionEvent e) {
          ((Command) e.getSource()).execute();
      }
  }

In the code above, the addCommand method stores the Command object and registers this listener on it.
Now, how is that possible?
Basically because I am categorizing my UI objects as Commands within the system, and I am also assuming that these commands are currently AbstractButtons, i.e. JMenuItem or JButton. Let's have a look at the Command interface and its implementation.

  public interface Command {
      public void execute();
  }

And the implementation; note that Command is an interface, whereas the class still extends a UI object.

  import javax.swing.JMenuItem;

  public class TestCmd extends JMenuItem implements Command {

      public TestCmd() {
          super("Test");
          OneListener.getInstance().addCommand(this);
      }

      @Override
      public void execute() {
          System.out.println("HelloWorld");
      }
  }

Personally, I don't like calling OneListener in the constructor of the class, but for the sake of simplicity of this post I have left it that way. There are many ways to avoid it.

So TestCmd is a JMenuItem but is also a Command, and that's what OneListener understands.
As this command's listener is OneListener, all ActionEvents land there, and from there Command.execute is called.

So now you don't have to worry about which listener you are in. The only thing you know is that when execute is called, you need to do your stuff.

You can download the code from Here.

Hope this helps.

Continuous Integration – A blast from the past

Although this didn't happen a decade ago, it has still been a good case for me to learn and realize how Continuous Integration adds value to our work.

As I recall, it was like this when they were teenagers :D
A few teams working on different modules of the same application, deployed together.
No formalized build process.
No builds except for the ones needed for major milestone deployments.
No feedback, reports, reviews, etc.

So virtually there was no build eco system.

Which meant hideous amounts of communication, tons of troubleshooting, last-minute show stoppers, and extremely unhappy end users/stakeholders.

So what exactly was going wrong?

There was no configuration control, i.e. no way to figure out which changes would be part of the release, and no controls to include or exclude them.
No status accounting, i.e. no feedback on or status of anything as a whole product.
No management of the environment meant no one knew what hardware/software they were using, and sometimes why.
No teamwork whatsoever meant no guided processes, reviews or traceability of issues and pitfalls.

So virtually a release day was the doomsday!

It took some time for developers to realize how important the following were:
* Configuration identification – What code are we working with?
* Configuration control – Controlling the release of a product and its changes.
* Status accounting – Recording and reporting the status of components.
* Review – Ensuring completeness and consistency among components.
* Build management – Managing the process and tools used for builds.
* Process management – Ensuring adherence to the organization’s development process.
* Environment management – Managing the software and hardware that host our system.
* Teamwork – Facilitating team interactions related to the process.
* Defect tracking – Making sure every defect has traceability back to the source.

It took some time, but eventually the day-to-day flow looked like the figure below.

[Figure: Build Management]

Developers are distributed across different locations in different teams.
There is an SCM team that takes care of the SCM activities, i.e. configuration identification, control, build management, etc.
There is one single repository for all developers.
A process is defined for product integration, and it is checked whenever someone checks in.
An hourly build is done to check the sanity of the system with the unit tests.
A nightly build is done with all full-blown unit tests, coverage reports, automated tests, installers and upgrades.
Moreover, all of the above is published on an FTP/HTTP site: FTP for installer/binary/archive pickup and HTTP for a detailed view of builds, feedback, etc.

All of the above meant strength and health in the build ecosystem, and more trust that the system would report failures along the way.