Creating and deploying a Java 8 runtime container image

“This post was originally published on Red Hat Developers. To read the original post, click here.”

A Java runtime environment (JRE) can run compiled code, whereas a development kit (JDK), for example, OpenJDK, includes all the libraries/binaries needed to both compile and run source code. Essentially, the latter is a superset of the runtime environment. More details on OpenJDK support and lifecycle can be found here.

Red Hat ships and supports container images with OpenJDK for both Java 8 and 11. More details are here. If you are using Red Hat Middleware, the S2I (source-to-image) images shipped with it are also useful for deploying, for example, on Red Hat OpenShift Container Platform.

Note that Red Hat only provides OpenJDK-based Java 8 and 11 images. That said, there will certainly be situations where developers would like to create their own Java runtime images, for example, to minimize the storage footprint of the runtime image. On the other hand, a lot of manual work around libraries such as Jolokia or Hawkular, and even security parameters, would then need to be set up as well. If you’d prefer not to get into those details, I recommend using the OpenJDK container images shipped by Red Hat.

In this article we will:

  • Build an image with Docker as well as Buildah.
  • Run that image with Docker as well as Podman on localhost.
  • Push the image to Quay.
  • Finally, run the app by importing a stream into OpenShift.

This article was written for both OpenShift 3.11 and 4.0 beta. Let’s jump right into it.

Setting up

To use our images and see how they work, we’ll use a web app as part of our bundle. Recently, MicroProfile.io launched the MicroProfile Starter beta, which helps you get started with MicroProfile by creating a downloadable package. Head over to MicroProfile.io and get the package for MicroProfile with Thorntail V2.

Getting the package for MicroProfile with Thorntail V2

Click the Download button to get the archive file.

On my Red Hat Enterprise Linux (RHEL) machine, I first create a temp directory, for example, demoapp, and unarchive the downloaded artifacts into it.
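For example (the archive name is an assumption; use whatever the starter generated for you):

$ mkdir demoapp
$ unzip demo.zip -d demoapp
$ cd demoapp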

In my temp directory, I run:

$ mvn clean compile package

Now we should have a built demo app with a fat JAR that we can use to run Thorntail.

Let’s copy the target/demo-thorntail.jar to the temp directory.
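That is, from inside the temp directory:

$ cp target/demo-thorntail.jar .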

Here is the Dockerfile with comments on each layer. The source code of this file can also be found here on GitHub.

# A Java 8 runtime example
# The official Red Hat registry and the base image
FROM registry.access.redhat.com/rhel7-minimal
USER root
# Install Java runtime
RUN microdnf --enablerepo=rhel-7-server-rpms \
    install java-1.8.0-openjdk --nodocs ;\
    microdnf clean all
# Set the JAVA_HOME variable to make it clear where Java is located
ENV JAVA_HOME /etc/alternatives/jre
# Dir for my app
RUN mkdir -p /app
# Expose port to listen to
EXPOSE 8080
# Copy the MicroProfile starter app
COPY demo-thorntail.jar /app/
# Copy the script from the source; run-java.sh has specific parameters to run a Thorntail app from the command line in a container. More on the script can be found at https://github.com/sshaaf/rhel7-jre-image/blob/master/run-java.sh
COPY run-java.sh /app/
# Setting up permissions for the script to run
RUN chmod 755 /app/run-java.sh
# Finally, run the script
CMD [ "/app/run-java.sh" ]
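If you are curious what such a launcher script boils down to, here is a minimal sketch; the real run-java.sh linked above is more elaborate and tunes JVM flags for running in a container:

#!/bin/sh
# Minimal sketch only; the real script adds container-aware JVM options
exec java ${JAVA_OPTIONS} -jar /app/demo-thorntail.jar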

Now that we have the Dockerfile details, let’s go ahead and build the image.

One important point to note is that the OpenJDK Java runtime is packaged as “java-1.8.0-openjdk”; this does not include the compiler and other development libraries, which are in the -devel package.

The above Dockerfile is built on RHEL, which means I do not need to register with subscription-manager, since the host already has the subscriptions attached.

Below you will find two ways to build this image. If you are running RHEL like I am, you can choose either of the two tools. Both of them are in rhel-7-server-extras-rpms.

You can enable the extras repo like this:

# subscription-manager repos --enable rhel-7-server-extras-rpms

Build and run images locally

Building the image with docker:

$ docker build -t quay.io/sshaaf/rhel7-jre8-mpdemo:latest .

Running the image with docker and pointing localhost:8080 to the container port 8080:

$ docker run -d -t -p 8080:8080 -i quay.io/sshaaf/rhel7-jre8-mpdemo:latest
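To quickly check that the app is up, you can hit the exposed port; serving a landing page at the root path is an assumption about the starter app:

$ curl http://localhost:8080/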


We can also use the buildah command, which helps to create container images from a working container, from a Dockerfile or from scratch. The resulting images are OCI-compliant, so they will work on any runtimes that meet the OCI Runtime Specification (such as Docker and CRI-O).

Building with buildah:

$ buildah bud -t rhel7-jre8-mpdemo .

Creating a container with buildah:

$ buildah from rhel7-jre8-mpdemo
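buildah from prints the name of the new working container; you can capture it in a shell variable and poke around inside with buildah run, for example:

$ container=$(buildah from rhel7-jre8-mpdemo)
$ buildah run $container -- ls /app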

Now we can also run the container with Podman, which complements Buildah and Skopeo by offering an experience similar to the Docker command line, allowing users to run standalone (non-orchestrated) containers. And Podman doesn’t require a daemon to run containers and pods. Podman is also part of the extras channel, and the following command runs the container.

$ podman run -d -t -p 8080:8080 -i quay.io/sshaaf/rhel7-jre8-mpdemo:latest

More details on Podman can be found in Containers without daemons: Podman and Buildah available in RHEL 7.6 and RHEL 8 Beta and Podman and Buildah for Docker users.

Now that we have an image, we would also like to deploy it to OpenShift and test our app. For that, we need the oc client tools. I have my own cluster setup; you could choose to use Red Hat Container Development Kit (CDK)/minishift, Red Hat OpenShift Online, or your own cluster. The procedure should be the same.

Deploying to OpenShift

To deploy to OpenShift, we need to have the following few constructs:

  • An image stream for our newly created container image
  • Deployment configuration for OpenShift
  • Service and routing configurations

Image stream

To create the image stream, my OpenShift cluster should be able to pull the container image from somewhere. Up until now, the image has been residing on my own machine. Let’s push it to Quay. The Red Hat Quay container and application registry provides secure storage, distribution, and deployment of containers on any infrastructure. It is available as a standalone component or in conjunction with OpenShift.

First I need to log in to Quay, which I can do as follows:

$ docker login -u="sshaaf" -p="XXXX" quay.io

And then we push our newly created image to Quay:

$ docker push quay.io/sshaaf/rhel7-jre8-mpdemo
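If you built the image with Buildah instead, you should be able to push it without the Docker daemon; the local tag below assumes the buildah bud command from earlier:

$ buildah push rhel7-jre8-mpdemo docker://quay.io/sshaaf/rhel7-jre8-mpdemo:latest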

Let’s take a look at the constructs before we deploy. If you’re impatient, you can skip the details of the deployment template and head down to project creation.

Now that we have the image, we define our stream:

- apiVersion: v1
  kind: ImageStream
  metadata:
    name: rhel7-jre8-mpdemo
  spec:
    dockerImageRepository: quay.io/sshaaf/rhel7-jre8-mpdemo

Again, you can see above we point directly to our image repository at Quay.io.

Deployment config

Next up is the deployment config, which sets up our pods and triggers and points to our newly created stream, etc.

If you are new to containers and OpenShift deployments, you might want to look at this useful ebook.

- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: rhel7-jre8-mpdemo
  spec:
    template:
      metadata:
        labels:
          name: rhel7-jre8-mpdemo
      spec:
        containers:
        - image: quay.io/sshaaf/rhel7-jre8-mpdemo:latest
          name: rhel7-jre8-mpdemo
          ports:
          - containerPort: 8080
            protocol: TCP
    replicas: 1
    triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
        - rhel7-jre8-mpdemo
        from:
          kind: ImageStreamTag
          name: rhel7-jre8-mpdemo:latest
      type: ImageChange

The service

Now that we have our deployment, we also want to define a service that will ensure internal load balancing, IP addresses for the pods, etc. A service is important if we want our newly created app to eventually be exposed to outside traffic.

- apiVersion: v1
  kind: Service
  metadata:
    name: rhel7-jre8-mpdemo
  spec:
    ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      deploymentconfig: rhel7-jre8-mpdemo

The route

And the final piece of our deployment is the route. It’s time we expose our app to the outside world with a route. We can define which internal service the route points to and which port to target. OpenShift will give it a friendly URL that we can visit to see our MicroProfile app running on our newly created Java 8 runtime–based deployment.

- apiVersion: v1
  kind: Route
  metadata:
    name: rhel7-jre8-mpdemo
  spec:
    port:
      targetPort: 8080-tcp
    to:
      kind: Service
      name: rhel7-jre8-mpdemo
      weight: 100

The complete template for the above can be found here on GitHub.

Let’s also create a project in OpenShift like this:

$ oc new-project jredemos

And now we process the template and get our demo with the Java 8 runtime on OpenShift. To do that, run the following command:

$ oc process -f deployment.yaml  | oc create -f -

This will create the entire deployment, and you should be able to see something like this:

Results of creating the entire deployment

Now run the following command:

$ oc get routes

This will show the routes, as follows:

The routes

Let’s copy the route into our browser, and we should see the web app running on RHEL 7 with the Java 8 runtime. (The address below is specific to my test cluster and will differ per cluster.)

http://rhel7-jre8-mpdemo-jredemos.apps.cluster-1fda.1fda.openshiftworkshop.com/

Summary

We have successfully built and created a Java 8 runtime container image with Docker or Buildah. Be aware of the following:

  • It’s important to note that this image is to show how this can be done; it doesn’t include things like Jolokia or Hawkular, which could be necessary for deployments.
  • Moreover, it does not take into consideration parameters required for running Java apps in containers, which are very well explained in this article by Rafael Benevides.
  • Furthermore, when deploying your own container images, always remember to check the support and lifecycle policy here, if you are running Red Hat’s build of OpenJDK.

We then deployed this to OpenShift with our basic MicroProfile demo app from the MicroProfile Starter beta.

If you are looking into creating a more comprehensive demo application, the following are great resources:

All of the resources for this article can be found here.

Using Red Hat Application Migration Toolkit to see the impact of migrating to OpenJDK

“This post was originally published on Red Hat Developers. To read the original post, click here.”

Migrating from one software solution to another is a reality that all good software developers need to plan for. Having a plan helps to drive innovation at a continuous pace, whether you are developing software for in-house use or you are acquiring software from a vendor. In either case, never anticipating or planning for migration endangers the entire innovation value proposition. And in today’s ever-changing world of software, everyone who wants to benefit from the success of the cloud has to ensure that cloud innovation is continuous. Therefore, maintaining a stack that is changing along with technological advancements is a necessity.

In this article, we will take a look at the impact of moving to OpenJDK and the results will aid in drawing further conclusions and in planning. It’s quite common to be using a proprietary version of JDK, and this article addresses how to use Red Hat Application Migration Toolkit to analyze your codebase to understand the impact of migrating to OpenJDK.

Continue reading

Getting started with JBehave in 8 steps

This post is about JBehave and how to quickly get started with it. If you would like to know about BDD, please use the following link:
What is Behavioral Driven Development?

Today I used JBehave for the first time. It does have some convincing factors, for instance, dividing requirements into scenarios, which map quite nicely to the tests written in the Steps classes. Thus it seems it would be easier for stakeholders to use it as a good guideline for the initial requirements. Disagreements during development are quite usual; however, the tool helps by giving stakeholders, who don’t have to get into writing code, a technical vocabulary for communicating with the developers in the shape of scenarios.

So, first things first:
1. Create a Java project in Eclipse.
2. Download the latest version of JBehave from the website and add the JAR files to your project’s build classpath.
3. Create a scenario file and name it user_wants_to_shop.scenario

Add the following text to the file

Given user Shaaf is logged in
Check if user shaaf is Allowed to shop
Then show shaaf his shopping cart

The lines above simply state the rules for the scenario we are about to create.

4. Now we will create a Java class mapping to it.

Create a new class whose name corresponds to the .scenario file:
UserWantsToShop.java

Add the following code to it

import org.jbehave.scenario.PropertyBasedConfiguration;
import org.jbehave.scenario.Scenario;
import org.jbehave.scenario.parser.ClasspathScenarioDefiner;
import org.jbehave.scenario.parser.PatternScenarioParser;
import org.jbehave.scenario.parser.ScenarioDefiner;
import org.jbehave.scenario.parser.UnderscoredCamelCaseResolver;

public class UserWantsToShop extends Scenario {

    // The usual constructor with default Configurations.
    public UserWantsToShop(){
        super(new ShoppingSteps());
    }

    // Mapping .scenario files to the scenario. This is not the default.
    public UserWantsToShop(final ClassLoader classLoader) {
        super(new PropertyBasedConfiguration() {
            public ScenarioDefiner forDefiningScenarios() {
                return new ClasspathScenarioDefiner(
                    new UnderscoredCamelCaseResolver(".scenario"),
                    new PatternScenarioParser(this), classLoader);
            }
        }, new ShoppingSteps());
    }

}

5. Now that we have a scenario, we need some place to put the logic for the steps in the scenario. So we create a new steps class, ShoppingSteps.java.

import org.jbehave.scenario.annotations.Given;
import org.jbehave.scenario.annotations.Then;
import org.jbehave.scenario.annotations.When;
import org.jbehave.scenario.steps.Steps;

public class ShoppingSteps extends Steps {

}

6. Now you can run the scenario as a JUnit test.

The output should be the following:

Scenario:

Given I am not logged in (PENDING)
When I log in as Shaaf with a password JBehaver (PENDING)
Then I should see my "Shopping Cart" (PENDING)

This means that none of the scenario steps have been implemented yet. But the test result should be green, showing there are no problems with our setup.

Now let’s add the implementation to the tests.

7. Add the body to ShoppingSteps. I have added comments to the methods.

// Given that a user (param) is logged in
@Given("user $username is logged in")
public void logIn(String userName){
    checkInSession(userName);
}

// Check if the user is allowed on the server
@When("if user $username is $permission to shop")
public void isUserAllowed(String userName, String permission){
    getUser(userName).isUserAllowed().equals(permission);
}

// Finally, then, show him his shopping cart.
@Then("show $username his Shopping cart")
public void getMyCart(String userName){
    getUserCart(userName);
}

8. Now try to run the scenario again. All of the steps should now match instead of showing PENDING. I have not implemented the actual helper methods, so the tests will be red. 🙂
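For completeness, here is a minimal sketch of what those helper methods could look like. The method names come from the steps above, but the bodies and the User class are my own placeholders, not part of JBehave:

import java.util.HashMap;
import java.util.Map;

import org.jbehave.scenario.steps.Steps;

public class ShoppingSteps extends Steps {

    // Hypothetical in-memory "session" backing the steps
    private final Map<String, User> session = new HashMap<String, User>();

    // ... the annotated step methods from step 7 go here ...

    private void checkInSession(String userName) {
        if (!session.containsKey(userName)) {
            session.put(userName, new User(userName));
        }
    }

    private User getUser(String userName) {
        return session.get(userName);
    }

    private void getUserCart(String userName) {
        System.out.println(getUser(userName).getName() + "'s shopping cart");
    }

    // Placeholder domain class, purely for illustration
    static class User {
        private final String name;
        User(String name) { this.name = name; }
        String getName() { return name; }
        String isUserAllowed() { return "Allowed"; }
    }
}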

Logging with log4J isDebugEnabled

A lot of times I have seen the question pop up of whether to use the isDebugEnabled property or not, arguably most of the time (or rather always) about performance. Here are some of the points I feel are important about using it.

The answer is simple: it is made to be used. However, it has to be used with caution.

For instance, say I am using the following line in my code:

log.debug("I am there");

That’s an example of good practice.

However, I can really blow things out of proportion if I do the following:

// Not good practice
if (log.isDebugEnabled()){
    log.debug("I am there");
}

And why exactly is that? Because the debug method in the Category class (which Logger extends in the log4j libraries) explicitly checks the logging level itself.

An extract from the class org.apache.log4j.Category is below:

public void debug(Object message) {
    if (repository.isDisabled(Level.DEBUG_INT))
        return;
    if (Level.DEBUG.isGreaterOrEqual(this.getEffectiveLevel())) {
        forcedLog(FQCN, Level.DEBUG, message, null);
    }
}

Okay, so that’s the case where we say don’t use it. What about the case where we actually do use the isDebugEnabled property?

For instance, suppose you have an expensive parameter going into the debug call:

// Bad practice
log.debug("XML " + Tree.getXMLText());

In such a case it can clearly be said: “No, don’t do it!” If Tree.getXMLText() is expensive, there is no point: the argument string will be constructed first, and if the debug level is not enabled, that construction is a pure performance cost.

// Good practice
if (log.isDebugEnabled()){
    log.debug("XML " + Tree.getXMLText());
}

Thus, in the example above it is wiser to use log.isDebugEnabled() to keep the cost down.

Keep it simple, short and stupid (KISS)

“everything should be made as simple as possible, but no simpler” – Albert Einstein

I am sitting in a design room today, designing a simple requirement for a client: he wants me to enhance the transfer of money from one account to another. It just happens that my best friend Yuky is sitting right next to me. He is damn good at math and knows all the calculations at his fingertips. I am sure if I asked him the angle of sight to the light on the wall, he could tell me in a few seconds, looking with 10 eyes at his 12 fingers, and trust me, it would be correct.

I cannot ignore his brilliance, and I admire it and am inspired by it all the time; but I don’t agree with it all the time!

And this is what I mention here today: KISS (Keep it simple, short and stupid).

Rule #1. Don’t try to be a super genius.

Rule #2. Don’t overdo it!

Rule #3. Break down your problems.

Rule #4. Common sense has to prevail.

Rule #5. Keep your mess small; don’t litter around the park.

Most importantly, keep everything short, simple, and stupid.

I recently had to deliver a short inspirational presentation on Test-Driven Development, especially about KISS. Attached is some of the material; feel free to explore.

What is the KISS principle?

Couldn’t be simpler 🙂

Automation with Selenium, JUnit, Ant

What is Selenium
How does Selenium work
History and how it started
About Ant
And Junit

Most of the technologies above will not need an introduction if you already know them, and you can read about them via the links above.

Moreover, today’s article is about how we can use all three (Selenium, Ant, and JUnit) to come up with an automated solution for regression testing.

Please refer to the documentation links above for basic knowledge of any of the tools used.

I am assuming that you know how to record a test in Selenium; if you don’t, go here.

Now that you know how to record and save tests, simply save them in any directory structure you like, as long as they follow the right package convention, like com.foo.bar; this means the file should live in a directory like com/foo/bar. That is how the TestCase is compiled by the Ant script; if the package does not match the path, you will get compile-time errors. For illustration, see the sketch below.
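Here is a minimal Selenium RC test in that style; the package, base URL, browser string, and locators are my assumptions, not from a real recording:

package com.foo.bar;

import com.thoughtworks.selenium.SeleneseTestCase;

public class TestShop extends SeleneseTestCase {

    public void setUp() throws Exception {
        // Base URL and browser string are assumptions
        setUp("http://localhost:8080/", "*firefox");
    }

    public void testShop() throws Exception {
        selenium.open("/login");
        selenium.type("username", "shaaf");
        selenium.click("//input[@value='Login']");
        selenium.waitForPageToLoad("30000");
        assertTrue(selenium.isTextPresent("Shopping Cart"));
    }
}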

My top-level target for running the whole procedure looks like this:

<target name="build_tests" depends="clean, compile, start-server, tests, stop-server"/>

Details of the dependent targets are as follows.

clean: cleans the build directories, etc., to make sure nothing from a previous build is carried forward.

compile: compiles the sources.

start-server: starts the Selenium server.

tests: runs the Selenium tests.

stop-server: stops the Selenium server once all tests have been executed.

How to start a selenium server from Ant?

<target name="start-server">
    <java jar="lib/selenium-server.jar" fork="true" spawn="true">
        <arg line="-timeout 30"/>
        <jvmarg value="-Dhttp.proxyHost=proxy.proxyhost.com"/>
        <jvmarg value="-Dhttp.proxyPort=44444"/>
    </java>
</target>

The target above simply takes the selenium-server.jar from the classpath and runs the main class from it. The proxy parameters are optional.

How to stop the server from Ant?

<target name="stop-server">
    <get taskname="selenium-shutdown"
        src="http://localhost:4444/selenium-server/driver/?cmd=shutDown"
        dest="result.txt" ignoreerrors="true"/>
    <echo taskname="selenium-shutdown" message="DGF Errors during shutdown are expected"/>
</target>

The Selenium server starts on port 4444 by default; to tell it to shut down, the cmd=shutDown command is passed to it in the src URL. That shuts down the server.

Executing the test cases

Selenium test cases are JUnit test cases, and that’s how they are treated in this tutorial. Some of you will be very familiar with the Ant targets junit and junitreport; nevertheless, I will describe how the following works. The target below runs the Selenium tests and prints a summary report to ${dir}. The includes and excludes files simply tell the target which files to include or exclude while running the test cases. This is typically done when you don’t want some test cases to be included, or when your test sources and application sources are in the same directory and you only want to include something like Test*.class.

<target name="tests" depends="compileonly" description="runs JUnit tests">
    <echo message="running JUnit tests"/>
    <junit printsummary="on" dir=".." haltonfailure="off" haltonerror="off" timeout="${junit.timeout}" fork="on" maxmemory="512m" showoutput="true">
        <formatter type="plain" usefile="false"/>
        <formatter type="xml" usefile="true"/>
        <batchtest todir="${testoutput}" filtertrace="on">
            <fileset dir="${src}">
                <includesfile name="${tests.include}"/>
                <excludesfile name="${tests.exclude}"/>
            </fileset>
        </batchtest>
        <classpath>
            <pathelement path="${classes}"/>
            <pathelement path="${build.classpath}"/>
        </classpath>
    </junit>

The following takes the formatted output from the lines above, generates an XML and HTML report out of it, and places the results in ${reports}/index.html. A sample JUnit test report might look like this.

    <echo message="running JUnit Reports"/>
    <junitreport todir="${reports}">
        <fileset dir="${reportdir}">
            <include name="Test*.xml"/>
        </fileset>
        <report format="frames" todir="${reports}"/>
    </junitreport>
    <echo message="To see your JUnit results, please open ${reports}/index.html"/>
</target>

In general, you can add all of this to your nightly build through any of the CI servers, such as CruiseControl. As a general practice, you will also need to do a little more than just execute this target every night, depending on your application, for instance, cleaning up resources such as databases.

Calling wsadmin scripts from ant

You can simply add the following to a target. For this to work, wsadmin should be on your PATH.

<exec dir="." executable="wsadmin.bat" logError="true" failonerror="true" output="wsconfig.out">
    <arg line="-lang jython -f ../../createQFactory.py"/>
</exec>

All output will be logged to wsconfig.out.

Generate XML – DBMS_XMLGEN

While working on a solution, I came across this nice, old, and easy-to-use feature. The possibilities are endless, and usage is typically very easy.

I used the following to generate XML from SQL*Plus:

select dbms_xmlgen.getxml('select * from tab') from dual;

Output:

<ROWSET>
 <ROW>
  <TNAME>Employee</TNAME>
  <TABTYPE>TABLE</TABTYPE>
 </ROW>
</ROWSET>
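If you want to pull the same XML out from Java, a minimal JDBC sketch might look like this; the connection URL and credentials are placeholders, and the Oracle JDBC driver must be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class XmlGenDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; use your own database and credentials
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "scott", "tiger");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "select dbms_xmlgen.getxml('select * from tab') from dual")) {
            while (rs.next()) {
                // Oracle JDBC converts the CLOB result to a String here
                System.out.println(rs.getString(1));
            }
        }
    }
}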

Command, Singleton, JMenuItem, JButton, AbstractButton – One Listener for the app

Here I would like to demonstrate a simple use of JMenuItems with a single listener for the entire system. A simple example of its use would be a single-instance desktop application.

Let’s see how that is done here.

1. First, let’s create a OneListener class that can listen to ActionEvents and have Commands added to itself. Please refer to my previous post on Command/Singleton if you would like to see more about these patterns and their usage.

package com.shaafshah.jmenus;

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.ArrayList;

import javax.swing.AbstractButton;

// Implements the ActionListener and is a Singleton also.
public class OneListener implements ActionListener {

    private static OneListener oneListener = null;

    // Holds the list of all commands registered to this listener
    private ArrayList<Command> commandList = null;

    // A private constructor
    private OneListener() {
        commandList = new ArrayList<Command>();
    }

    // Ensuring one instance.
    public static OneListener getInstance() {
        if (oneListener != null)
            return oneListener;
        else
            return oneListener = new OneListener();
    }

    // Add the command and add myself as the listener
    public void addCommand(Command command) {
        commandList.add(command);
        ((AbstractButton) command).addActionListener(this);
    }

    // All events hit here.
    @Override
    public void actionPerformed(ActionEvent e) {
        ((Command) e.getSource()).execute();
    }

}

In the code above, the addCommand method adds the command object and registers a listener on it.
How is that possible?
Basically, because I am categorizing my UI objects as Commands within the system, and I am assuming that these Commands are currently AbstractButtons, i.e., JMenuItem or JButton. Let’s have a look at the Command interface and its implementation.

public interface Command {
    public void execute();
}

And the implementation; note that Command is an interface, whereas the class still extends a UI object.

import javax.swing.JMenuItem;

public class TestCmd extends JMenuItem implements Command {

    public TestCmd() {
        super("Test");
        OneListener.getInstance().addCommand(this);
    }

    @Override
    public void execute() {
        System.out.println("HelloWorld");
    }
}

Personally, I don’t like calling OneListener in the constructor of the class, but for the sake of simplicity of this post I have left it that way. There are many different ways to avoid it.

So TestCmd is a JMenuItem but is also a Command, and that’s what OneListener understands. Since this command’s listener is OneListener, all ActionEvents land there, and from there only Command.execute() is called.

So now you don’t have to worry about which listener you are in. The only thing you know is that when execute() is called, you need to do your stuff.
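To see it all in action, here is a minimal sketch that wires a TestCmd into a frame’s menu bar; the frame setup is my own scaffolding, and only TestCmd and OneListener come from this post:

import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.SwingUtilities;

public class Demo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("OneListener demo");
                JMenuBar menuBar = new JMenuBar();
                JMenu fileMenu = new JMenu("File");
                // TestCmd registers itself with OneListener in its constructor,
                // so clicking the item prints "HelloWorld" via execute()
                fileMenu.add(new TestCmd());
                menuBar.add(fileMenu);
                frame.setJMenuBar(menuBar);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setSize(300, 200);
                frame.setVisible(true);
            }
        });
    }
}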

You can download the code from here.

Hope this helps.