Thursday, July 15, 2010

My Glassfish support subscription came through - finally

Call me eager, call me insane: I had a JavaEE 6 app up and running in production on Glassfish 3.0 one month after its release. The improvements in JSF 2 and CDI were too compelling for me to hold off adopting, and sure enough, the application was quick and painless to develop.

I did, however, pay a price for being an early adopter: a Glassfish / Weld bug prevented my JSF login pages from working. Fortunately I quickly found an (ugly) workaround using JSPs to keep me going. This brings me to the interesting nugget of this post: the story of my first use of our Glassfish v3 support contract.

I promptly reported the bug to Glassfish, using the support tools provided, and was impressed at how quickly the bug was isolated and a resolution found. All I had to do was wait for the patch, and my problem would disappear. So I waited for the patch. And waited. And waited…

I reported the bug in January and heard of a fix in February. The patched release was supposed to come out in March; by May, there was still no release. I made a little noise and got some attention from both Glassfish support and the Glassfish community, but was told the release had been delayed by Oracle's acquisition of Sun and the associated “Change in Control”. I was, however, assured that the release was imminent.

Sure enough, Glassfish 3.0.1 was released June 17th, with the fix to my bug – a turnaround of 5 months. While annoyed that I had to wait so long for the fix, I do appreciate having that support structure in place to ensure we get our apps working. I’m just glad we had a workaround.

Here’s to looking forward to a regular release schedule for Glassfish patches!


Friday, June 25, 2010

JSF Validation Failed Notice

Here’s how I show a notice on a JSF 2 page indicating that the JSF 2 postback failed due to validation errors. The following facelet snippet is rendered only when validation fails:

<h:outputText 
   styleClass="errorMessage globalMessage" 
   value="Request *not* saved due to input errors" 
   rendered="#{facesContext.validationFailed}" />

The user then knows they should look through the page to correct the individually marked validation failures.


Friday, April 9, 2010

From svn to mercurial, the hg rises!

I’ve been toying with the idea of moving from svn to a distributed version control system (DVCS) for a long time. I held back, hoping the mercurial (hg) vs. git “war” would declare a winner, but as major projects adopted one or the other, it became apparent that neither tool was going to disappear anytime soon. This notion was cemented for me when I read a forum post saying hg = git >> svn. That settled it: I had to pick one and move forward. I chose mercurial, not because it stood out above git, but because it seemed more widely adopted in the communities I was interested in. Joel Spolsky’s mercurial primer sealed the deal. Mercurial was on its way in. I identified some key obstacles I needed to overcome to replace my subversion infrastructure with a new mercurial one. I had to figure out:

1) how to install mercurial on CentOS 5
2) how to import my svn history into hg
3) LDAP integration
4) hg integration with Hudson
5) the maven release plugin

These were all things I’d achieved with subversion, so they set the bar for any replacement. Each of these steps turned out to be pretty easy overall; I completed the whole process in a single day's work (or two half-days…).

1) Install mercurial on CentOS 5

I hunted around for a while looking for an RPM, but found nothing recent. On a twitter recommendation, I tried installing from source, which turned out to be pretty easy. The install put mercurial in my /usr/local/lib64 directory, which is not one of the default places python looks for its modules. I had to set the PYTHONPATH environment variable to point at this directory to get things working.
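The fix is a one-liner in the shell profile, using the path the installer chose on my box:

```shell
# Make python find the mercurial modules installed under /usr/local/lib64
export PYTHONPATH=/usr/local/lib64/python2.4/site-packages
```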

2) svn import into hg

Turns out mercurial has this functionality built in with the bundled Convert extension. One has to enable this extension in a mercurial configuration file, then it’s pretty simple to point the “hg convert” command at your subversion repo. Done and done!
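For reference, the whole conversion boils down to a two-line hgrc entry plus one command (the repository URL and names here are placeholders, not my actual repos):

```
# ~/.hgrc -- enable the bundled convert extension
[extensions]
hgext.convert =

# then point hg convert at the svn repo; it creates a new hg repo alongside:
#   hg convert http://svn.example.com/repos/myproject myproject-hg
```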

3) LDAP integration

I’d achieved svn LDAP integration using apache, so I was quite happy to see hg could do the same thing. I set up the hgwebdir.cgi script to serve my newly imported hg repositories, hi-jacking the LDAP authentication from my svn setup. Worked like a charm.

Update: here’s the apache config for authentication:

# cat /etc/httpd/conf.d/hg.conf
ScriptAlias /hg "/var/www/hg/hgwebdir.cgi"

<Location /hg>
        Order deny,allow
        Deny from all
        Allow from 10.0.0.1
        AuthType Basic
        AuthBasicProvider ldap
        AuthName "LDAP password"
        AuthLDAPURL "ldaps://ldap.domain.com/ou=users,dc=domain,dc=com?uid?sub?(&(uniqueidentifier=*)(objectclass=eperson))"
        AuthLDAPGroupAttribute ibm-allmembers
        require ldap-group cn=mis,ou=web,ou=Groups,dc=domain,dc=com
        Satisfy Any
</Location>

Note: the group terms are specific to the IBM Tivoli LDAP server.

4) hg integration with Hudson

This one seemed simple enough, given that there is a mercurial plugin for hudson. It was simple to install, but took a bit of effort to get going. I had to go into the hudson configuration and define a mercurial “instance” pointing to a wrapper script that sets the PYTHONPATH environment variable, then delegates its arguments to mercurial itself. Again, I think this is due to the /usr/local/lib64 path the mercurial installer chose. Maybe this could be done more cleanly, but it’s working now.

Another hiccup with the hudson configuration was that there was no place to put a username and password, as there was with the svn/hudson config. I worked around this by adding the IP of my hudson server to the “Allow from” section of my apache config file, so hudson could pull from my hg repos without authenticating.

A few hurdles with this one, but nothing severe.

Update (in answer to a comment):

My hg wrapper looks like:

# cat /usr/local/bin/hg_pythonpath_wrapper
#!/bin/sh
export PYTHONPATH=/usr/local/lib64/python2.4/site-packages
/usr/local/bin/hg "$@"

To get hudson to use this wrapper, go to Manage Hudson → Configure System, scroll down to the mercurial section, click “Add Mercurial”, and point the executable at the above wrapper.

5) The maven release plugin

I use the maven release plugin to perform my releases. I initially tried to get maven to push/pull from the “central” hg repository I had set up, but ran into the same authentication issues as with hudson. Then I read some advice on using the maven release plugin with mercurial, and realized I was still thinking in terms of subversion. Of course it makes more sense to do the release against a local hg repo, and only push the change-sets if the release worked. Oh, I love mercurial!

And that’s it. One day later, and I’ve converted our multi-module subversion repository into several mercurial repositories served by apache with LDAP authentication. Our hudson continuous integration polls the “primary” mercurial repository, and maven tags releases against my local repo. I was told correctly: once you ditch subversion for mercurial, you’ll never go back.
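The resulting release flow, sketched with a placeholder repository URL:

```
# clone the central repo; releases happen against this local clone
hg clone https://hg.example.com/hg/myproject
cd myproject

# the release plugin commits and tags in the local repository only
mvn release:prepare release:perform

# push the release change-sets upstream once the release has succeeded
hg push
```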


Wednesday, January 20, 2010

Glassfish V2 and V3 on the same host, behind mod_jk

I’ve jumped on the JavaEE 6 bandwagon, with one application already in production. The developer productivity improvements in JavaEE6/Glassfish V3 are tremendous. The only downside is that I still have some JavaEE 5 applications in production. The JavaEE 5 apps can’t migrate to JavaEE 6 until Icefaces supports JSF 2.0.

One workaround is to bundle the JSF 1.2 implementation with your application, then configure the classloader via the sun-web.xml file to load the bundled JSF library instead of the container’s JSF 2.0 library. This only works with a standalone WAR file, though; when the WAR is bundled in an EAR and references other EJB JARs, this trick isn’t possible. Yet I still wanted to move new application development to JavaEE 6.
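For the record, the standalone-WAR classloader tweak is just the delegate flag in sun-web.xml:

```
<sun-web-app>
  <!-- load the JSF 1.2 jars bundled in WEB-INF/lib before the container's JSF 2.0 -->
  <class-loader delegate="false"/>
</sun-web-app>
```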

My solution was to run both Glassfish V2 and Glassfish V3 on the same box, with mod_jk forwarding requests to the appropriate container. In this way I am able to keep my existing JavaEE 5 / Icefaces applications running, and deploy new applications to the JavaEE 6 environment.

The first step was to get GF v2, and GF v3 running on the same machine. I have GF v2 running on the standard ports, and I incremented each port by 1 for GF v3. It looks like:

                        GF v2 Port   GF v3 Port
HTTP                    8080         8081
HTTPS                   8181         8182
HTTPADMIN               4848         4849
IIOP                    3700         3701
IIOP SSL                3820         3821
IIOP SSL-MUTUALAUTH     3920         3921
JMX                     8686         8687
JMS                     7676         7677

Next, I had to get mod_jk installed and working. The glassfish support team (yes, I pay for support!) pointed me to the following resources:

These were a great starting point, from which I ended up with the solution.

mod_jk.conf:

#mod_jk/1.2.28
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf.d/worker.properties
# Where to put jk logs
JkLogFile /var/log/httpd/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories +DisableReuse
# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"

# Should mod_jk send SSL information (default is On)
JkExtractSSL On
# What is the indicator for SSL (default is HTTPS)
JkHTTPSIndicator HTTPS
# What is the indicator for SSL session (default is SSL_SESSION_ID)
JkSESSIONIndicator SSL_SESSION_ID
# What is the indicator for client SSL cipher suit (default is SSL_CIPHER)
JkCIPHERIndicator SSL_CIPHER
# What is the indicator for the client SSL certificated? (default is SSL_CLIENT_CERT)
JkCERTSIndicator SSL_CLIENT_CERT

# Set the following if you want all vhosts to inherit JkMounts from global
JkMountCopy All

# Send JavaEE 6 app requests to GlassFish v3 (worker2)
JkMount /javaee6app* worker2
JkMount /javaee6app/* worker2

# Send all glassfish-test requests to GlassFish
JkMount /glassfish-test/* worker1

JkShmFile /var/log/httpd/jk-runtime-status

And worker.properties:

## Define 2 workers using ajp13
worker.list=worker1,worker2
# Set properties for worker1 (ajp13)
worker.worker1.type=ajp13
worker.worker1.host=localhost.localdomain
worker.worker1.port=8009
#Only used for a member worker of a load balancer. 
#worker.worker1.lbfactor=50
#Do not use cachesize with values higher than 1 on Apache 2.x prefork
#worker.worker1.cachesize=10
#connection_pool_size replaced cachesize as of v1.2.16
worker.worker1.connection_pool_size=1
worker.worker1.connection_pool_timeout=0
worker.worker1.socket_keepalive=1
#Socket timeout in seconds
worker.worker1.socket_timeout=60

worker.worker2.type=ajp13
worker.worker2.host=localhost.localdomain
worker.worker2.port=8010
#Only used for a member worker of a load balancer. 
#worker.worker2.lbfactor=50
#Do not use cachesize with values higher than 1 on Apache 2.x prefork
#worker.worker2.cachesize=10
#connection_pool_size replaced cachesize as of v1.2.16
worker.worker2.connection_pool_size=1
worker.worker2.connection_pool_timeout=0
worker.worker2.socket_keepalive=1
#Socket timeout in seconds
worker.worker2.socket_timeout=60

These are not the worker.properties prescribed in the above links. After implementing the initial solution, I got reports from the wild of users mysteriously losing sessions. After much reading about mod_jk, I think I narrowed the problem down to a cachesize/connection_pool_size greater than 1 in conjunction with apache's prefork MPM. Apparently this is a no-no.

So with these settings in place, I am able to develop new apps in JavaEE 6, while still running my older JavaEE 5 apps, on the same box. Looking forward to Icefaces 2.0 though, so I can drop this needless complexity!


Saturday, November 14, 2009

In the beginning...

Ok, this is my blog. I’m not sure how I will use it, for the most part it will just be a place where I can organize my thoughts and ideas about Java development. Maybe it will be useful for others, but probably not!
