
Archive for September, 2012

Install MATLAB on Linux

September 18, 2012

Installing MATLAB on Linux (e.g. R2012b)

* Go to the MATLAB website and download the version you are going to install.
* Get the license file and installation key, etc.:
– Log in to your MathWorks online account, go to license management, and open the "Activation and Installation" tab. Click 'Activate'; a pop-up window will ask for certain information. For the host ID, run '/sbin/ifconfig eth0' on the command line; the number after 'HWaddr' is it.
– After filling in the information, you will be able to get an activation key and download the license file.
* With all the files downloaded, unzip the package and make a copy of 'installer_input.txt'; follow the instructions inside and fill in the required options. Also make a copy of the 'activate.ini' file, follow the instructions inside, and fill in the required options.
* Run './install -inputFile installer_input_yourcopy.txt'.
* I found that after running this, the activation still had not succeeded. So I changed to the folder where I installed MATLAB (e.g. Tools/MATLAB). Under the 'bin' folder, I ran 'activate_matlab.sh -propertiesFile activate_yourcopy.ini'. Then I created a folder 'Tools/MATLAB/licenses' and copied my license file into it.
* Now you are probably ready to go: 'Tools/MATLAB/bin/matlab -nodisplay'
* You can also make an alias for this (e.g. in your .bash_profile), which makes life easier:
alias matlab='your-pathto-matlab/bin/matlab -nodisplay'

Trying out the Part-Models <http://people.cs.uchicago.edu/~rbg/latent/>

– Follow the instructions in the Readme file and compile the code.

– A simple case to try out is to load the car model and detect the car in an image.

Categories: Computer Vision

JavaCV Example

September 17, 2012

First, try out JavaCV, which is a Java wrapper for OpenCV. The first thing you should do is follow the instructions to install OpenCV. You will need to know where your library files are located; in this example, I have them all in 'opencv24lib'.

Download the JavaCV zip file and unzip it; you will get several jar files (the wrappers) needed for your project. Below is a simple warm-up example of JavaCV:

//static imports
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
//non-static imports
import com.googlecode.javacv.cpp.opencv_core.CvScalar;
import com.googlecode.javacv.cpp.opencv_core.IplImage;

public class ColorDetect {
    //color range of red like color
    static CvScalar min = cvScalar(0, 0, 130, 0);//BGR-A
    static CvScalar max = cvScalar(140, 110, 255, 0);//BGR-A

    public static void main(String[] args) {
        //read image
        IplImage orgImg = cvLoadImage("colordetectimage.jpg");
        //create binary image of original size
        IplImage imgThreshold = cvCreateImage(cvGetSize(orgImg), 8, 1);
        //apply thresholding
        cvInRangeS(orgImg, min, max, imgThreshold);
        //smooth filter- median
        cvSmooth(imgThreshold, imgThreshold, CV_MEDIAN, 13);
        //save
        cvSaveImage("threshold.jpg", imgThreshold);
    }
}

To compile and run the code (inside the folder of your code):

mkdir classes
javac -classpath .:/yourpathto/javacpp.jar:/yourpathto/javacv.jar:/yourpathto/javacv-linux-x86_64.jar:/yourpathto/javacv-linux-x86.jar *.java -d ./classes
jar cf JOpencvTest.jar -C classes .
java -Djava.library.path=/yourpathto/opencv24lib -cp .:/yourpathto/javacpp.jar:/yourpathto/javacv.jar:/yourpathto/javacv-linux-x86_64.jar:/yourpathto/javacv-linux-x86.jar:JOpencvTest.jar ColorDetect

Input image:

Output image:

 

Here's a link on how to configure it on Win7; I didn't give it a try since my Windows machine has OpenCV 2.3 installed, not 2.4.

 

Categories: OpenCV

MapReduce in C++

September 13, 2012

Since I can't ask the administrator to configure any particular settings on the cluster nodes, I found the simplest way is to compile your code locally, use Hadoop Streaming, and ship the shared-object files you need together with your mapper (the compiled executable) to the nodes. You can write a bash script as the mapper, which sets the library path and wraps the executable inside it. Below are some websites on MapReduce in C++, which are actually very nice but may need more configuration.

Boost: http://www.craighenderson.co.uk/mapreduce/

JavaCV (nice for using OpenCV): http://code.google.com/p/javacv/

MapReduce Lite: http://code.google.com/p/mapreduce-lite/

Very nice example (mapper with secondary key): http://code.google.com/p/hadoop-stream-mapreduce/

Categories: Hadoop, OpenCV

Hadoop Notes 9/10/12

September 10, 2012

* Hadoop compression

There are several tools, such as:
DEFLATE (the file extension will be .deflate); this is what we often see
gzip
zip
LZO (optimized for speed)

There are also different options: -1 means optimize for speed and -9 means optimize for space, e.g. 'gzip -1 file'.

Another way of doing compression is to use 'codecs'.
A codec is the implementation of a compression-decompression algorithm. In Hadoop, a codec is represented by an implementation of the CompressionCodec interface.

For performance, it is recommended to use the native library for compression and decompression: DEFLATE, gzip, LZO.

If you are using a native library and doing a lot of compression or decompression in your application, consider using 'CodecPool'.
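For example, here is a minimal sketch (my own illustration, not from the original notes; the class and file names are placeholders) of compressing a local file with a codec obtained by reflection, borrowing the compressor from CodecPool:

import java.io.FileInputStream;
import java.io.FileOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.util.ReflectionUtils;

public class CodecPoolExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // instantiate the codec by class name (GzipCodec here; any CompressionCodec works)
        Class<?> codecClass = Class.forName("org.apache.hadoop.io.compress.GzipCodec");
        CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(codecClass, conf);

        // borrow a compressor from the pool instead of allocating a new one every time
        Compressor compressor = CodecPool.getCompressor(codec);
        try {
            FileInputStream in = new FileInputStream(args[0]);
            CompressionOutputStream out =
                    codec.createOutputStream(new FileOutputStream(args[0] + ".gz"), compressor);
            IOUtils.copyBytes(in, out, 4096, false);
            out.finish();
            out.close();
            in.close();
        } finally {
            // return the compressor so it can be reused
            CodecPool.returnCompressor(compressor);
        }
    }
}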

To compress only the mapper output, an example (which also chooses the codec type):
conf.set("mapred.compress.map.output", "true");
conf.set("mapred.output.compression.type", "BLOCK");
conf.set("mapred.map.output.compression.codec", "org.apache.hadoop.io.compress.GzipCodec");
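For compressing the final job output (as opposed to just the intermediate map output), the FileOutputFormat helpers of the new API can be used instead. A minimal sketch, with a hypothetical helper method of my own:

import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class OutputCompressionConfig {
    // hypothetical helper: enable gzip compression for the final job output
    public static void enableOutputCompression(Job job) {
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        // for SequenceFile output, BLOCK compression usually gives the best ratio
        SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK);
    }
}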
* Writable

In Hadoop, there are two basic data types:

WritableComparable   (base interface for keys)

Writable (base interface for values); there are Writable wrappers for the following: Java primitives, String, enum, null, or arrays of any of these types, e.g.

  • Text (strings)
  • BytesWritable (raw bytes)
  • IntWritable (int)
  • LongWritable (long)

Text is a Writable for UTF-8 sequences. It can be thought of as the Writable equivalent of java.lang.String. Text is a replacement for the UTF8 class. The length of a String is the number of char code units it contains; the length of a Text object is the number of bytes in its UTF-8 encoding. You could basically treat everything as Text and parse it yourself if it actually has more complicated content. But note that if you are working with int values, using Text will cost you more time than IntWritable.

NullWritable is a special type of Writable, as it has a zero-length serialization. No bytes are written to, or read from, the stream.
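As a quick illustration (a sketch of my own, not from the original notes) of the String-vs-Text length difference and of reusing Writable objects:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;

public class WritableDemo {
    public static void main(String[] args) {
        String s = "\u00e9l\u00e8ve";      // 5 chars, but the two accented chars take 2 bytes each in UTF-8
        Text t = new Text(s);
        System.out.println(s.length());    // 5 (char code units)
        System.out.println(t.getLength()); // 7 (bytes of the UTF-8 encoding)

        IntWritable count = new IntWritable();
        count.set(42);                     // Writables are mutable and meant to be reused
        System.out.println(count.get());

        NullWritable nothing = NullWritable.get(); // singleton with a zero-length serialization
        System.out.println(nothing);
    }
}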

* Writable collections

There are four Writable collection types in the org.apache.hadoop.io package: ArrayWritable, TwoDArrayWritable, MapWritable, and SortedMapWritable. ArrayWritable and TwoDArrayWritable are Writable implementations for arrays and two-dimensional arrays (arrays of arrays) of Writable instances. All the elements of an ArrayWritable or a TwoDArrayWritable must be instances of the same class, which is specified at construction.
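A tiny sketch of my own to illustrate the "same class, specified at construction" point with ArrayWritable:

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

public class ArrayWritableDemo {
    public static void main(String[] args) {
        // the value class is fixed at construction; every element must be an IntWritable
        ArrayWritable arr = new ArrayWritable(IntWritable.class);
        arr.set(new Writable[] { new IntWritable(1), new IntWritable(2), new IntWritable(3) });

        for (Writable w : arr.get()) {
            System.out.println(((IntWritable) w).get());
        }
    }
}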

The hadoop fs command has a -text option to display sequence files in textual form, e.g.:
%hadoop fs -text number.seq | head

Categories: Hadoop, MapReduce

New to Hbase (Note 1)

September 10, 2012

* TIME OUT PROBLEM

I am very new to HBase. I just ran into the timeout exception problem, since I am looping through a 300×300 image inside the mapper. The exception means that the mapper took too long between calls to the scanner's 'next':

org.apache.hadoop.hbase.client.ScannerTimeoutException: 556054ms passed since the last invocation, timeout is currently set to 300000

One solution would be to increase the timeout limit (hbase.rpc.timeout):

conf.setLong("hbase.rpc.timeout", 6000000);

* Understand HADOOP_CLASSPATH

In the very beginning, I thought it took a folder path. However, it turns out to need the exact jar file paths. What I ended up doing is to have a few bash commands in my .bash_profile to automatically add each one:

jfs=$(ls /home/username/mylib/*.jar)
for jf in $jfs ;do
  # echo "$jf"
  export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:"$jf"
done

* DIFFERENCE of a typical HADOOP job vs. HBASE-HADOOP job

For a Hadoop job, to set the mapper and reducer classes (input from HDFS, output to HDFS):

job.setMapperClass(Mapper.class);
job.setReducerClass(Reducer.class);

The input/output HDFS paths will also need to be set, e.g. 'FileOutputFormat.setOutputPath'.

The Mapper and Reducer classes extend 'org.apache.hadoop.mapreduce.Mapper' and 'org.apache.hadoop.mapreduce.Reducer'.
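A minimal driver sketch with those pieces put together (the job name and the input/output paths here are placeholders of my own; the identity Mapper/Reducer are used just as in the snippet above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "my job");   // 2012-era constructor; newer code uses Job.getInstance
        job.setJarByClass(MyJobDriver.class);

        job.setMapperClass(Mapper.class);    // identity mapper
        job.setReducerClass(Reducer.class);  // identity reducer

        // with the default TextInputFormat, the identity mapper emits (LongWritable, Text)
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // input from HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output to HDFS

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}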

For an HBase-Hadoop job, to set the mapper and reducer classes (input from an HBase table, output to an HBase table):

TableMapReduceUtil.initTableMapperJob("hbase-input-table-name", scan, hMapper.class,
        OneKindWritable.class, OneKindWritable.class, job);
TableMapReduceUtil.initTableReducerJob("hbase-output-table-name", hReducer.class, job);

The Mapper and Reducer classes extend 'org.apache.hadoop.hbase.mapreduce.TableMapper' and 'org.apache.hadoop.hbase.mapreduce.TableReducer'.

The output table should be created before launching the job, with the column family names and qualifiers that your code expects. For the input part, you can set up certain filters on the 'scan' and add input columns by family name and qualifier.
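A minimal sketch of such a job (the table name, the column family 'cf' and qualifier 'q', and the mapper are placeholders of my own; this variant reads from HBase and writes plain text to HDFS):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HBaseScanJob {

    // emits (row key, 1) for every row of the input table that has column cf:q
    static class CountMapper extends TableMapper<ImmutableBytesWritable, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(ImmutableBytesWritable rowKey, Result value, Context context)
                throws IOException, InterruptedException {
            if (value.containsColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"))) {
                context.write(rowKey, ONE);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "hbase scan job");
        job.setJarByClass(HBaseScanJob.class);

        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q")); // only read the column we need
        scan.setCaching(500);       // rows fetched per RPC; a larger value helps full-table scans
        scan.setCacheBlocks(false); // don't pollute the region server block cache

        TableMapReduceUtil.initTableMapperJob("hbase-input-table-name", scan,
                CountMapper.class, ImmutableBytesWritable.class, IntWritable.class, job);

        // map-only job writing to HDFS; a table-to-table job would call initTableReducerJob instead
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(ImmutableBytesWritable.class);
        job.setOutputValueClass(IntWritable.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}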

A nice thing is that you can mix these settings, so you can read data from HDFS and output to HBase, or read data from HBase and output to HDFS.

Nice tips:

Efficient Hadoop: http://www.cloudera.com/blog/2009/05/10-mapreduce-tips/

http://hbase.apache.org/book/mapreduce.example.html

Categories: Hadoop, Hbase