Thursday, May 12, 2011

java.lang.LinkageError in Tomcat 6

Usually the cause of this error is that you have unintentionally packaged the following two jars into your WAR file:

    el-api.jar
    jasper-el.jar

Both are already provided by Tomcat 6, so when your WAR ships its own copies, the webapp class loader and Tomcat's shared class loader each define the same EL classes, and the JVM reports a loader constraint violation as soon as the two definitions are linked together.

In Tomcat 6, you will get the following error:

java.lang.LinkageError: loader constraint violation: loader …

You can solve the problem by adding the following snippet to your pom file. The provided scope keeps both jars available at compile time but leaves them out of the packaged WAR, so only Tomcat's copies are loaded at runtime.

<dependency>
        <groupId>org.apache.tomcat</groupId>
        <artifactId>el-api</artifactId>
        <version>[1,)</version>
        <scope>provided</scope>
</dependency>
<dependency>
        <groupId>org.apache.tomcat</groupId>
        <artifactId>jasper-el</artifactId>
        <version>[1,)</version>
        <scope>provided</scope>
</dependency>
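
If you are not sure which copy wins at runtime, you can ask the JVM where it loaded the EL classes from. The following is a minimal diagnostic sketch (the class name ElOrigin is illustrative, and it assumes el-api.jar is on your compile classpath); call report() from a servlet or JSP in the affected webapp:

public class ElOrigin {
    // Prints which jar javax.el.ExpressionFactory was loaded from. In a
    // healthy Tomcat 6 deployment this should be Tomcat's lib/el-api.jar,
    // not a jar inside your WAR's WEB-INF/lib.
    public static void report() {
        Class<?> clazz = javax.el.ExpressionFactory.class;
        java.security.CodeSource source =
                clazz.getProtectionDomain().getCodeSource();
        System.out.println(source != null
                ? source.getLocation().toString()
                : "loaded by the bootstrap class loader");
    }
}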


Hadoop datanode version control

Sometimes, after you upgrade Hadoop, you may see the following error in your namenode log:

Incompatible build versions: namenode BV = Unknown; datanode BV =

./common-0.21.0/src/saveVersion.sh generates a package-info.java file that embeds the version information. Its content looks like:

@HadoopVersionAnnotation(version="0.21.1-SNAPSHOT", revision="1", branch="",
                         user="username", date="Mon Nov 15 12:28:49 EST 2010",
                         url="your_domain/path",
                         srcChecksum="a1aeb15b4854808d152989ba76f90fac")

saveVersion.sh is executed when you build Hadoop with ant; the hook is specified in build.xml (target "init").

In the Java code, the class org.apache.hadoop.util.VersionInfo manages the version: it reads the version information from the package-info.java generated by saveVersion.sh.
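
The lookup itself is a plain reflection call on the package annotation. Below is a sketch simplified from the 0.21 sources (details may vary between releases). Note the fallback to "Unknown" when the annotation is absent; that is exactly where the "namenode BV = Unknown" in the error above comes from:

import org.apache.hadoop.HadoopVersionAnnotation;

// Simplified sketch of org.apache.hadoop.util.VersionInfo: it reads back the
// HadoopVersionAnnotation that saveVersion.sh wrote into package-info.java.
public class VersionInfo {
  private static final HadoopVersionAnnotation version =
      HadoopVersionAnnotation.class.getPackage()
          .getAnnotation(HadoopVersionAnnotation.class);

  // If the build skipped saveVersion.sh, no annotation is present and
  // every getter reports "Unknown".
  public static String getVersion() {
    return version != null ? version.version() : "Unknown";
  }

  public static String getRevision() {
    return version != null ? version.revision() : "Unknown";
  }

  public static String getUser() {
    return version != null ? version.user() : "Unknown";
  }

  public static String getSrcChecksum() {
    return version != null ? version.srcChecksum() : "Unknown";
  }
}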

In the class org.apache.hadoop.hdfs.server.datanode.DataNode, the method handshake checks whether the build versions of the datanode and the namenode are equal. How the build version is computed is shown below.

public static String getBuildVersion() {
  return VersionInfo.getVersion() +
         " from " + VersionInfo.getRevision() +
         " by " + VersionInfo.getUser() +
         " source checksum " + VersionInfo.getSrcChecksum();
}
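
For completeness, the comparison inside handshake looks roughly like this (a sketch condensed from the 0.21 sources, with the retry and error-reporting logic omitted); this is where the log message above originates:

// Condensed sketch of DataNode.handshake(): the datanode asks the namenode
// for its NamespaceInfo and refuses to register if the build versions differ.
// Storage.getBuildVersion() effectively delegates to the method shown above.
NamespaceInfo nsInfo = namenode.versionRequest();
if (!nsInfo.getBuildVersion().equals(Storage.getBuildVersion())) {
  String errorMsg = "Incompatible build versions: namenode BV = "
      + nsInfo.getBuildVersion() + "; datanode BV = "
      + Storage.getBuildVersion();
  LOG.fatal(errorMsg);
  // The real code throws an IncorrectVersionException, a subclass of IOException.
  throw new IOException(errorMsg);
}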

So the quick solution is to upgrade the Hadoop installation on every node, so that all datanodes and the namenode report exactly the same build version.

How to install user-provided jars to Hadoop

If you write a MapReduce program and compile it to a jar, you usually run it with the following command:

./bin/hadoop jar your_jar_name
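
For concreteness, here is a minimal sketch of what such a program can look like, essentially the canonical WordCount example using the org.apache.hadoop.mapreduce API (the class name, job name, and jar name below are illustrative):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Counts how often each word occurs in the input files.
public class WordCount {

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      // Emit (word, 1) for every token in the line.
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      // Sum all the counts emitted for this word.
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged as, say, wordcount.jar, it would be launched with ./bin/hadoop jar wordcount.jar WordCount <input_dir> <output_dir>.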

If you want your jar loaded when Hadoop itself is started (e.g. you add a new service which should be initialized and started by Hadoop), you can follow the steps shown below.

In the file bin/hadoop-config.sh, you can find the following snippet:

for f in $HADOOP_COMMON_HOME/hadoop-*.jar; do
  CLASSPATH=${CLASSPATH}:$f;
done

So, by default, only jars whose names start with "hadoop-" are put on the classpath.

Drop your jar into the Hadoop installation directory ($HADOOP_COMMON_HOME) and edit bin/hadoop-config.sh to add:

CLASSPATH=${CLASSPATH}:$HADOOP_COMMON_HOME/your_jar_name