Monday, December 12, 2011

RabbIT

Enable the filter rabbit.filter.ReverseProxy by adding it to the front of the httpinfilters value, and configure values in the [rabbit.filter.ReverseProxy] section:
httpinfilters=rabbit.filter.ReverseProxy,rabbit.filter.HttpBaseFilter,rabbit.filter.DontFilterFilter,rabbit.filter.BlockFilter,rabbit.filter.RevalidateFilter
......
[rabbit.filter.ReverseProxy]
# This filter is not enabled by default, add it to httpinfilters if you want it.
# This Filter makes rabbit work as an accelerator for one web site.
# Change requests
transformMatch=^/(.*)
transformTo=http://<target_host>:<port>/$1
# Deny proxy requests, you probably want this.
# deny=^http(s?)://.*
deny=
# If we want to allow admin access.
allowMeta=true
Run RabbIT:
java -jar jars/rabbit4.jar -f conf/rabbit.conf
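Once it is running, a request sent to the proxy is rewritten according to transformMatch/transformTo. A quick sketch, assuming RabbIT listens on its default port 9666 and transformTo points at internal-host:8080 (both hypothetical names):

curl http://proxy-host:9666/docs/index.html
# matched by transformMatch=^/(.*) and forwarded to http://internal-host:8080/docs/index.html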
Resource: http://www.khelekore.org/rabbit/

Friday, December 02, 2011

Memory allocation settings in Hadoop

Edit file conf/mapred-site.xml to change the amount of memory allocated to sorting:

<property> 
    <name>io.sort.mb</name>
    <value>300</value>
</property>

Edit file conf/mapred-site.xml to change the amount of memory allocated to each map/reduce task:

<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx800m</value>
</property>

Edit file conf/hadoop-env.sh to change the amount of memory allocated to Hadoop daemons:

export HADOOP_HEAPSIZE=1000
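After restarting the daemons, one way to confirm the new limits took effect is to inspect a daemon's JVM (the pid is whatever jps reports for the process you care about):

jps                 # list the Java daemons and their pids
jmap -heap <pid>    # MaxHeapSize should reflect HADOOP_HEAPSIZE / mapred.child.java.opts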

Change ports used by Hadoop

Edit file conf/hdfs-site.xml to change ports used by HDFS:

    <property>
        <name>dfs.secondary.http.address</name>
        <value>0.0.0.0:51090</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:51010</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:51075</value>
    </property>
    <property>
        <name>dfs.datanode.https.address</name>
        <value>0.0.0.0:51475</value>
    </property>
    <property>
        <name>dfs.datanode.ipc.address</name>
        <value>0.0.0.0:51020</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>0.0.0.0:51070</value>
    </property>
    <property>
        <name>dfs.https.address</name>
        <value>0.0.0.0:51470</value>
    </property>

Edit file conf/mapred-site.xml to change ports used by MapReduce:

    <property>
        <name>mapred.job.tracker.http.address</name>
        <value>0.0.0.0:51030</value>
    </property>

    <property>
        <name>mapred.task.tracker.http.address</name>
        <value>0.0.0.0:51060</value>
    </property>
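After restarting the daemons you can verify that they are listening on the new ports, e.g.:

netstat -tlnp | grep java    # the new 51xxx ports should show up in LISTEN state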

Exclude directories when using GNU tar

tar zvcf name.tar.gz --exclude path/to/dir1 --exclude path/to/dir2 path/to/tar

Note:

  1. Do not include a trailing '/' in the path of excluded directories; otherwise the exclusion will not work (see the example below).
  2. Put --exclude before the directory/file to be tarred.
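For example (hypothetical paths), the first command below excludes both directories, while the second does not because of the trailing '/':

tar zcvf backup.tar.gz --exclude project/logs --exclude project/tmp project
tar zcvf backup.tar.gz --exclude project/logs/ project    # logs/ is NOT excluded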

Friday, November 18, 2011

Install RPM packages with non-root account

Recently, I needed to install an RPM package on RedHat Linux, but I don't have root access.  I found this post: http://ajaya.name/?p=6353.  However, some commands in that post are not correct or need more clarification, so I wrote down my experience below.

  • Create file ~/.rpmmacros and add the following line:
        %_rpmlock_path lib/rpm/__db.000
  • Initialize the database by running command:
rpm --initdb \
    --root /home/<user_name>/rpm-local/ \
    --dbpath /home/<user_name>/rpm-local/lib/rpm
  • Check the dependencies by attempting the install without --nodeps; if anything is missing, rpm will fail and list it:
rpm --root /home/<user_name>/rpm-local/ \
--dbpath /home/<user_name>/rpm-local/lib/rpm \
-ivh package.rpm
  • Install package
rpm --root /home/<user_name>/rpm-local \
    --dbpath /home/<user_name>/rpm-local/lib/rpm \
    --relocate /usr=/home/<user_name>/rpm-local \
    --nodeps \
    -ivh package.rpm
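Because of the --relocate flag, the package's files do not land under /usr. A follow-up sketch to find where they actually went and make them usable (package_name is whatever package you installed; the bin path is an assumption, adjust it to the query output):

rpm --root /home/<user_name>/rpm-local --dbpath /home/<user_name>/rpm-local/lib/rpm -ql package_name
export PATH=/home/<user_name>/rpm-local/bin:$PATH    # adjust to the paths reported by the query above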


Thursday, October 27, 2011

Add "Edit with Vim" item to context-menu in Windows for Vim

This post shows how to add "Edit with Vim" and "Tab Edit with Vim" items to the context menu (the menu that pops up when you right-click a file) in Windows.

Run regedit.exe and go to HKEY_LOCAL_MACHINE\SOFTWARE\Classes\*\shell\

  1. Create new key "Tab Edit with &Vim" (Right click parent entry -> New -> Key)
  2. Create new key "Edit with Vim"
  3. Create new key "command" under "Tab Edit with &Vim"
  4. Edit the entry with name "(Default)", change its data to
    "<vim_dir>\gvim.exe" -p --remote-tab-silent "%1" "%*"
  5. Create new key "command" under "Edit with Vim".
  6. Edit the entry with name "(Default)", change its data to
    "<vim_dir>\gvim.exe" "%1"

It seems that the latest versions of Vim automatically create the following registry entries:

  • HKEY_LOCAL_MACHINE\SOFTWARE\Vim\Gvim. The data of the entry "path" points to the Vim executable.
  • HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers\gvim. The data of the entry "(Default)" is the CLSID {51EEE242-AD87-11d3-9C1E-0090278BBD99}.
    You can delete this entry because we have already added the menu item "Edit with Vim".
  • HKEY_CLASSES_ROOT\Applications\gvim.exe\shell\edit\command. Data of the entry "(Default)" is "<vim_dir>\gvim.exe" "%1"
  • HKEY_CLASSES_ROOT\CLSID\{51EEE242-AD87-11d3-9C1E-0090278BBD99}
    It has a key named InProcServer32. The data of "(Default)" is <vim_dir>\gvimext.dll
  • Some other keys

Monday, October 17, 2011

Logging in Hadoop

Hadoop uses log4j via Apache Commons Logging.  The config file is conf/log4j.properties.

Some important variables are set on the command line.  The following is a snippet cut from the full command line used to launch the HDFS namenode.

-Dhadoop.root.logger=INFO,DRFA
-Dhadoop.log.dir=/N/u/hdfs/programs/hadoop-0.21.0/bin/../logs
-Dhadoop.log.file=hadoop-hdfs-namenode-b009.log
-Dhadoop.home.dir=/N/u/hdfs/programs/hadoop-0.21.0/bin/..
-Dhadoop.id.str=hdfs

You can see that the log dir, log file, log level, and logger are set.  The DRFA appender is defined in conf/log4j.properties.
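For reference, DRFA is a daily rolling file appender; its definition in conf/log4j.properties looks roughly like the following (check your own copy, the exact layout pattern may differ):

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n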

Sunday, October 09, 2011

pdf to eps conversion

Currently, I need to convert pdf files to eps so that they can be included in latex files.

  1. Use Acrobat Pro to open the pdf file. 
    Click File -> Export -> PostScript -> Encapsulated PostScript, or use "Save As" and change "Save As Type".
    However, the bounding box is NOT correctly calculated.
    You can use gsview to correct it.  Use gsview to open the eps file, click "File -> PS to EPS", select "Automatically calculate Bounding Box" and save the output file.

  2. Use ghostscript.  Execute the following command:
    gswin32 -sDEVICE=epswrite -sOutputFile=<filename>.eps <filename>.pdf
    This works well and bounding box is correctly calculated.

  3. Use Xpdf (http://www.foolabs.com/xpdf/download.html)
    pdftops -eps <filename>.pdf <filename>.eps
    However, the bounding box is NOT correctly calculated.

Brief Latex notes for equations

In-line: $…$

Single line, without equation number: \[ … \] or \begin{equation*} … \end{equation*}

Single line, with equation number: \begin{equation} … \end{equation}

Multi-line, without equation number: \begin{align*} ... \end{align*}

Multi-line, with equation number: \begin{align} ... \end{align}

"a double backslash (\\) is used to separate the lines, and an ampersand symbol (&) is used to indicate the place at which the formulas should be aligned."

For align, a \label must be placed on each line you want to reference (before the \\ that ends it). For equation, it does not matter whether you put the \label at the beginning or the end.
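A small illustrative example (with the amsmath package): the & marks the alignment point at the equals signs, \\ separates the lines, and each line carries its own \label:

\begin{align}
  f(x) &= x^2 + 1 \label{eq:f} \\
  g(x) &= 2x \label{eq:g}
\end{align}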

Saturday, October 08, 2011

Make vim-latex to generate output in a specified directory

Recently I started to use latex to write papers.  I want to edit latex in my favorite editor - vim. I found the project vim-latex: http://vim-latex.sourceforge.net/.  It is powerful and convenient to use. 

However, one feature I want is to generate output files (.dvi, .ps, .log, etc.) into a separate directory rather than the same directory as the tex files.  It turns out that vim-latex does not support this natively, so I hacked the source code to make it work on Windows.

  1. Edit file ~/vimfiles/ftplugin/tex.vim

    Add following config:

      set iskeyword+=: 
      let g:Tex_Outdir='out' 
      let g:Tex_ViewRule_pdf='"Foxit Reader.exe" ' 
      let g:Tex_CompileRule_dvi='mkdir '.g:Tex_Outdir.' & latex -output-directory='.g:Tex_Outdir.' -src-specials --interaction=nonstopmode $*' 
      let g:Tex_CompileRule_pdf=g:Tex_CompileRule_dvi.' & cd '.g:Tex_Outdir.' & dvipdfm $*.dvi'

    Basically above config specifies

    1. output directory of latex compilation
    2. How to view PDF files (if you don't give full path, the command needs to be in env variable PATH)
    3. How to compile dvi: create the output directory and put output there
    4. How to compile pdf: first compile tex to dvi, and then call dvipdfm to generate pdf
  2. Change ~/vimfiles/ftplugin/latex-suite/compiler.vim

    Change line 252 to (this line adds the full path of output directory):
      let execString = 'start '.s:viewer.' "'.expand('%:p:h').'/'.g:Tex_Outdir.'/$*.'.s:target.'"'

    Change line 405 to (this line adds the full path of output directory):
    let execString = 'silent! !'.viewer.' "'.expand('%:p:h').'/'.g:Tex_Outdir.'/'.mainfnameRoot.'.'.s:target.'" '.line('.').' "'.expand('%').'"'

    Note: the g:Tex_Outdir path segment is the part I added (diff against the original code).

Use the command 'TTarget' to switch among dvi, ps, pdf, etc.

Monday, September 19, 2011

How to publish artifacts to ivy local repository and use it in another project

Build artifacts that are required by another project

In ivy.xml, set the organisation, module and revision attributes of the ivy-module/info element.

1) Add a new resolver to ivy by adding following snippet to ivysettings.xml:

<filesystem name="gerald-local-ivy" m2compatible="false" force="false" local="true">
    <artifact pattern="${ivy.default.ivy.user.dir}/local/[organisation]/[module]/[type]s/[artifact]-[revision](-[classifier]).[ext]"/>
</filesystem>
This will create a directory local under <user.home>/.ivy2/.

2) Add following snippet to your build.xml:

<target name="publish" depends="jar" description="Publish">
    <!-- following property defines the version to publish -->
    <property name="ivy.deliver.revision" value="${version}"/>
    <ivy:publish resolver="gerald-local-ivy" forcedeliver="true"
                 settingsRef="${ant.project.name}.ivy.settings" overwrite="true">
        <artifacts pattern="${build.dir}/[artifact]-[revision](-[classifier]).[ext]" />
    </ivy:publish>
</target>

The resolver attribute of ivy:publish must match the name attribute specified in step 1). Change the pattern attribute of the artifacts element to match where you put the artifact.

3) Use command ant publish to publish your jar.

Build main project that depends on the artifacts built above.

1) Add the same resolver to ivy by adding following snippet to ivysettings.xml:

<filesystem name="gerald-local-ivy" m2compatible="false" force="false" local="true">
    <artifact pattern="${ivy.default.ivy.user.dir}/local/[organisation]/[module]/[type]s/[artifact]-[revision](-[classifier]).[ext]"/>
</filesystem>

2) Add it to your effective resolver chain.

3) Add dependency declaration to ivy.xml

<dependency org="<organization>" name="<module>" 
            rev="<version>" conf="<common->master>"/>
The attributes org, name and rev must match the values specified when you built the dependency jar.

4) Manually remove the artifacts that already exist in the local cache (<user.home>/.ivy2/cache).
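A sketch of this step, assuming the default cache location; substitute the organisation and module names you used in ivy.xml:

rm -rf ~/.ivy2/cache/<organization>/<module>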

5) Build your project

Resources

http://ant.apache.org/ivy/history/2.2.0/use/publish.html
http://ant.apache.org/ivy/history/latest-milestone/resolver/chain.html
http://stackoverflow.com/questions/353336/how-does-ivypublish-work
http://mail-archives.apache.org/mod_mbox/ant-ivy-user/201002.mbox/%3C27714488.post@talk.nabble.com%3E

Sunday, September 18, 2011

Configure ivy to use local Maven repository

This post shows how to configure ivy to use a specific local Maven repository.

1) Add the following config as a child element of the "resolvers" tag in your ivy settings file:

    <filesystem name="local-maven-2" m2compatible="true" force="false" local="true">
       <artifact pattern="${gerald.repo.dir}/[organisation]/[module]/[revision]/[module]-[revision].[ext]"/>
       <ivy pattern="${gerald.repo.dir}/[organisation]/[module]/[revision]/[module]-[revision].pom"/>
    </filesystem>
2) Then add it to your chain resolver config. Example: 
    <chain name="internal" dual="true">
      <resolver ref="local-maven-2"/>
      <resolver ref="apache-snapshot"/> 
      <resolver ref="maven2"/>
    </chain>

If you have multiple resolver chains, make sure you change the correct one, i.e. the one that is effective for your build.

3) After ivy caches the artifacts in its own local repo (the first time the dependency is resolved), it will not pick up changes you made to the artifacts in the original Maven repository.  In other words, if you use "mvn clean install" to re-publish the artifact, the new version will NOT propagate to ivy.
You can change the default behavior by tweaking parameters:
    checkmodified, changingPattern, changingMatcher, alwaysCheckExactRevision
Read this article for details: http://ant.apache.org/ivy/history/trunk/settings/resolvers.html
One example:

    <chain name="default" dual="true" 
	     checkmodified="true" changingPattern=".*SNAPSHOT">
     ......
    </chain>

Resources:

http://mail-archives.apache.org/mod_mbox/ant-ivy-user/200807.mbox/raw/%3C94bda3fa0807202250y446a818eodb527dba96c8ac93@mail.gmail.com%3E/

Tuesday, September 13, 2011

Vdbench notes

Vdbench supports two types of testing: storage device testing and file system testing.  The following table shows the configuration groups for each.

Storage device testing             File system testing
General - optional                 General - optional
Host Definition (HD) - optional    Host Definition (HD) - optional
Storage Definition (SD)            File System Definition (FSD)
Workload Definition (WD)           File System Workload Definition (FSWD)
Run Definition (RD)                Run Definition (RD)

Different configuration groups may be specified in one single file or multiple files (use multiple -f options to specify all of them).

Comments: a line starting with '/', '#', or '*' is a comment.
Continuation: end a line with a comma plus a whitespace.

Output file
-o dirname+   # if dirname already exists, use dirname001
-o dirname.tod    # the directory name will be dirname.yymmdd.hhmmss

General parameters

Vdbench_global_param

Host definition

Vdbench_host_def_param

Storage definition

Vdbench_storage_def_param

Workload definition

Vdbench_workload_def

Run definition

Vdbench_run_def_param

File System Definition

Vdbench_filesystem_def_param

File System Workload Definition

Vdbench_filesystem_workload_def_param

Run definition for file system testing

Vdbench_run_def_fs_param
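To make the groups concrete, here is a minimal parameter file sketch for storage device testing (the device path and all values are made up; check the official doc below for the full parameter list):

* minimal parameter file: one SD, one WD, one RD
sd=sd1,lun=/dev/sdb,threads=8
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=60,interval=1

Run it with something like ./vdbench -f param_file -o output_dir.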

Resources:
Official doc: http://downloads.sourceforge.net/project/vdbench/vdbench502.pdf?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fvdbench%2Ffiles%2F&ts=1315925901&use_mirror=superb-sea2

Friday, September 09, 2011

R notes

Construct a list of lists

a = c(1,2,3)
b = c(4,5,6)
c = list(a, b)
d = list(c, list(a))    # different from list(c, a)
e = list(d, name="gerald")

Append to a list:

mylist[[length(mylist) + 1]] = obj

Append to a vector:

vec = append(vec, val)    # append returns a new vector, so reassign the result

 

Type information

typeof(var),  class(var)
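For example, for a plain numeric vector the two report different things:

a = c(1, 2, 3)
typeof(a)    # "double"  (internal storage type)
class(a)     # "numeric" (S3 class)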


Monday, September 05, 2011

How to get network information in Linux

Following files/dirs can be directly read:

/proc/net/dev
/sys/class/net/<if_name>/
/sys/class/net/<if_name>/statistics

Tool netstat is your friend to get network-related information:

netstat -i  # display interface info
netstat -s  # display statistics info
netstat -r  # display routing info
netstat -tlnp  # display TCP listening info

Other tools: sar, ifconfig, iftop

Friday, July 08, 2011

How to see network card and disk speed

Sometimes, you may want to know the hardware speed limit for network interface cards and disks.

Network Interface Card

I use commands

dmesg | grep -i ethernet
dmesg | grep -i infiniband

The output looks like

Intel(R) Gigabit Ethernet Network Driver - version 2.1.0-k2-1
igb 0000:0b:00.0: Intel(R) Gigabit Ethernet Network Connection
igb 0000:0b:00.1: Intel(R) Gigabit Ethernet Network Connection
and
mlx4_ib: Mellanox ConnectX InfiniBand driver v1.0 (April 4, 2008)
Registered RDS/infiniband transport

I also use command

/sbin/lspci
The output has following useful information
0b:00.0 Ethernet controller: Intel Corporation 82575EB Gigabit Network Connection (rev 02)
0b:00.1 Ethernet controller: Intel Corporation 82575EB Gigabit Network Connection (rev 02)
10:00.0 InfiniBand: Mellanox Technologies MT26418 [ConnectX VPI PCIe 2.0 5GT/s - IB DDR / 10GigE] (rev a0)

Disk

/sbin/lspci | grep -i ata

Other useful tools

mii-tool
hdparm

They usually require root privileges.

Thursday, May 12, 2011

java.lang.LinkageError in Tomcat 6

Usually the cause of this error is that you unintentionally included the following two jars in your war file:
    el-api.jar
    jasper-el.jar
They are already provided by Tomcat 6.

In Tomcat 6, you will get following error:

java.lang.LinkageError: loader constraint violation: loader …

You can solve the problem by adding following snippet into your pom file.

<dependency>
        <groupId>org.apache.tomcat</groupId>
        <artifactId>el-api</artifactId>
        <version>[1,)</version>
        <scope>provided</scope>
</dependency>
<dependency>
        <groupId>org.apache.tomcat</groupId>
        <artifactId>jasper-el</artifactId>
        <version>[1,)</version>
        <scope>provided</scope>
</dependency>

Read this post for more information.

Hadoop datanode version control

Sometimes, when you upgrade your Hadoop, you may get following error in your namenode log:

Incompatible build versions: namenode BV = Unknown; datanode BV =

./common-0.21.0/src/saveVersion.sh generates package-info.java which includes version information.  The content looks like

@HadoopVersionAnnotation(version="0.21.1-SNAPSHOT", revision="1", branch="",
                         user="username", date="Mon Nov 15 12:28:49 EST 2010",
                         url="your_domain/path",
                         srcChecksum="a1aeb15b4854808d152989ba76f90fac")

saveVersion.sh is executed when you build Hadoop using ant.  It is specified in build.xml (target "init").

In Java code, class org.apache.hadoop.util.VersionInfo manages version.  It gets version information from package-info.java generated by saveVersion.sh.

In class org.apache.hadoop.hdfs.server.datanode.DataNode, the handshake method checks whether the build versions are equal.  How the build version is computed is shown below.

public static String getBuildVersion() {
  return VersionInfo.getVersion() +
         " from " + VersionInfo.getRevision() +
         " by " + VersionInfo.getUser() +
         " source checksum " + VersionInfo.getSrcChecksum();
}

So the quick solution is to upgrade all installations of Hadoop on all nodes so that they run the same build.

How to install user-provided jars to Hadoop

If you write a MapReduce program and compile it to a jar, you usually run it with following command:

./bin/hadoop jar your_jar_name

If you want your jar to be loaded when Hadoop starts (e.g. you add a new service which should be initialized and started by Hadoop), follow the steps shown below.

In file bin/hadoop-config.sh, you can find following snippet:

for f in $HADOOP_COMMON_HOME/hadoop-*.jar; do
  CLASSPATH=${CLASSPATH}:$f;
done

So only jars whose names start with "hadoop-" are loaded by default.

Drop your jar into the directory where Hadoop is installed ($HADOOP_COMMON_HOME), and change file bin/hadoop-config.sh to add:

CLASSPATH=${CLASSPATH}:$HADOOP_COMMON_HOME/your_jar_name

Friday, April 15, 2011

Debug/profile heap/gc in Java

HPROF

Profiler agent.
Examples:
java -agentlib:hprof=help
java -agentlib:hprof=heap=sites
java -agentlib:hprof=heap=dump
java -agentlib:hprof=cpu=samples

" By default, heap profiling information (sites and dump) is written out to java.hprof.txt (in ASCII) in the current working directory.

The output is normally generated when the VM exits, although this can be disabled by setting the “dump on exit” option to “n” ( doe=n). In addition, a profile is generated when Ctrl-\ or Ctrl-Break (depending on platform) is pressed. On Solaris OS and Linux a profile is also generated when a QUIT signal is received ( kill -QUIT pid). If Ctrl-\ or Ctrl-Break is pressed multiple times, multiple profiles are generated to the one file.  "

jmap

The jmap command-line utility prints memory related statistics for a running VM or core file.

Commands:

jmap -histo <pid>                       # show histogram of objects
jmap -dump:format=b,file=<file> <pid>   # dump heap in HPROF format (can be processed by jhat)

jstat

"The jstat utility uses the built-in instrumentation in the HotSpot VM to provide information on performance and resource consumption of running applications. "

It shows garbage collection info, class loading info, compilation info, etc.

visualgc

GUI to show results of jstat.

Java VisualVM

http://download.oracle.com/javase/6/docs/technotes/guides/visualvm/index.html
command:  jvisualvm

"Java VisualVM is a tool that provides a visual interface for viewing detailed information about Java applications while they are running on a Java Virtual Machine (JVM), and for troubleshooting and profiling these applications."

JConsole

"This tool is compliant with Java Management Extensions (JMX). The tool uses the built-in JMX instrumentation in the Java Virtual Machine to provide information on the performance and resource consumption of running applications."

jhat (java heap analysis tool)

"The jhat tool provides a convenient means to browse the object topology in a heap snapshot. This tool was introduced in the Java SE 6 release to replace the Heap Analysis Tool (HAT). "

Command:

jhat <hprof_file_name>
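A typical workflow is to dump the heap of a running JVM with jmap and then browse it with jhat (the pid is whatever jps reports; jhat serves its web UI on port 7000 by default):

jmap -dump:format=b,file=heap.hprof <pid>
jhat heap.hprof       # then open http://localhost:7000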

Eclipse MAT

 

jdb

Misc.

"As of Java SE 5.0 update 7, the -XX:+HeapDumpOnOutOfMemoryError command-line option
tells the HotSpot VM to generate a heap dump when an OutOfMemoryError occurs (see
section 1.9).
As of Java SE 5.0 update 14, the -XX:+HeapDumpOnCtrlBreak command-line option tells the
HotSpot VM to generate a heap dump when a Ctrl-Break or SIGQUIT signal is received (see
section 1.10). "

Resources

http://www.oracle.com/technetwork/java/javase/index-137495.html

Friday, April 08, 2011

How to decommission nodes/blacklist nodes

HDFS

Put following config in conf/hdfs-site.xml:
<property>
  <name>dfs.hosts.exclude</name>
  <value>/full/path/of/host/exclude/file</value>
</property>
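The exclude file itself is just a plain text file listing one hostname per line, e.g. (hypothetical hosts):

node05.example.org
node06.example.org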

Use following command to ask HDFS to re-read host exclude file and decommission nodes accordingly.

./bin/hadoop dfsadmin -refreshNodes

MapReduce

Put following config in conf/mapred-site.xml

<property>
  <name>mapred.hosts.exclude</name>
  <value>/full/path/of/host/exclude/file</value>
</property>

Use following command to ask Hadoop MapReduce to reconfigure nodes.

./bin/hadoop mradmin -refreshNodes

Whitelist/Recommission

Also you can "whitelist" nodes. In other words, you can specify which nodes are allowed to connect to namenode/jobtracker. 

HDFS

Put following config in conf/hdfs-site.xml:
<property>
  <name>dfs.hosts</name>
  <value>/full/path/to/whitelisted/node/file</value>
</property>

Use following command to ask Hadoop to refresh node status based on the configuration.

./bin/hadoop dfsadmin -refreshNodes

MapReduce

Put following config in conf/mapred-site.xml

<property>
  <name>mapred.hosts</name>
  <value>/full/path/to/whitelisted/node/file</value>
</property>

Use following command to ask Hadoop MapReduce to reconfigure nodes.

./bin/hadoop mradmin -refreshNodes

 

Support for mradmin -refreshNodes was added in 0.21.0. See JIRA issue https://issues.apache.org/jira/browse/HADOOP-5643 for details.

Saturday, March 19, 2011

Japan earthquake GPS data visualization gadget

I made a gadget version of the QuakeSim Japan earthquake data visualization portal. It shows data (longitude, latitude and height) collected by GPS stations during the Japan earthquake.

You can click http://www.google.com/ig/adde?synd=open&source=ggyp&moduleurl=hosting.gmodules.com%2Fig%2Fgadgets%2Ffile%2F105322631994749779353%2Fquakesim-japan.xml to add it to your iGoogle. After it is added, maximize it by clicking the icon near the top right corner of the gadget.

Link for the gadget is

http://www.google.com/ig/directory?url=hosting.gmodules.com%2Fig%2Fgadgets%2Ffile%2F105322631994749779353%2Fquakesim-japan.xml

Thanks to Xiaoming Gao for providing the service pages.

Wednesday, March 02, 2011

Install ns2 (ns-2.33) on Ubuntu Maverick

Install the prerequisites:
sudo apt-get install \
    tcl tcl-dev \
    libotcl1 libotcl1-dev \
    tclcl-dev tclcl \
    tk tk-dev

Running ./configure failed, complaining that some tcl and tk related files could not be found. It turns out those packages were installed, but the file locations differ from what the configure script expects. Following is a fix.

You need to change two variables in the configure script: TCL_TCL_PLACES and TK_TCL_PLACES.

Add
    /usr/share/tcltk/tcl$TCL_VERS \
    /usr/share/tcltk/tcl$TCL_HI_VERS
to variable TCL_TCL_PLACES.

Add
    /usr/share/tcltk/tk$TK_HI_VERS \
    /usr/share/tcltk/tk$TK_VERS
to variable TK_TCL_PLACES.

Then execute ./configure again.

Official page: http://www.isi.edu/nsnam/ns/ns-build.html