Archive for June, 2012

Interesting evolution

June 26, 2012

Below is the reading list for advanced computer vision in 2010. It is interesting to compare it with the lists from previous years (2003, 2006, 2007, and 2009 below).

2003:

2006:

Estimating scene geometry and discovering objects using a soup of segments.
Level set segmentation
Video Visualization
Recognition
Combining Detection, Recognition and Segmentation
Video Object Segmentation
Background cut
Hashing, kNN in High Dimensions
Recent Progress in Optical Flow Computation
Learning Optical Flow
Matting
Illumination
Optimization

2007:

From Local to Global Visual Similarity in Space and in Time.
Fast Image Search.
Sound and motion, in harmony.
Standard Brain Model for Vision.
Multiclass SVM and Applications.
CRF/DRF and Application to Human Pose.
Direct visibility of point sets.
Color Image Understanding.
Globally Optimal Estimates for Geometric Reconstruction Problems.
Image Parsing.
Integral Shape Matching, Inner Distance, Diffusion Distance.
Motion Blur.

2009:

Human vision
Sequence to sequence alignment
Lightfield and natural image matting
Visibility constraints on features of 3D objects
Image and video descriptors (image descriptors; video descriptors)
Survey / comparison papers for different applications (recognition / matching)
Efficient search in large image databases
Exploiting wealth of huge image libraries
Dictionaries for sparse representation modeling
Statistics of natural images
Blind deconvolution
Action recognition
Graph cuts

2010:

1. Denoising
2. Compressed Sensing
3. Super-Resolution (in images)
4. Shape from Illumination
5. Deep Learning
6. Random forests
7. Pascal Grand Challenge

Categories: Computer Vision

Notes on CVPR 2012

June 23, 2012

Well, I’m back from this year’s CVPR and haven’t had a chance to write anything down yet. I guess I’ll just take this weekend to jot down some short notes on a few interesting papers.

* Tutorial

There were several nice tutorials this year. More people than expected were interested in ‘deep learning’, so it had to be moved to a bigger room. I liked the graph-cut slides, though unfortunately I didn’t get a chance to attend that tutorial. Since mobile applications are a trending topic recently, there was also a short tutorial on OpenCV for mobile applications. They provided some example code/projects as well as pre-compiled OpenCV builds for both iOS and Android. Qualcomm also gave a lunch talk on Tuesday (6/19) about their FastCV package, which is specifically for mobile computer vision.

* The Open Source Award went to “FREAK: Fast Retina Keypoint” by Alexandre Alahi, Raphael Ortiz, and Pierre Vandergheynst. Their abstract says:

the deployment of vision algorithms on smart phones and embedded devices with low memory and computation complexity has even upped the ante: the goal is to make descriptors faster to compute, more compact while remaining robust to scale, rotation and noise. To best address the current requirements, we propose a novel keypoint descriptor inspired by the human visual system and more precisely the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern.

The source code is here.  Detailed information can be found at <http://www.ivpe.com/freak.htm>
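
If you just want to play with the descriptor from Python, here is a minimal sketch using the OpenCV contrib bindings, where FREAK also ended up (this is my own example, not the authors’ code: the package opencv-contrib-python, the file name scene.jpg, and the choice of FAST as detector are all assumptions). FREAK only describes keypoints, so a separate detector has to supply them.

import cv2

# Hypothetical test image; any image readable as grayscale works.
img = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)

# FREAK is a descriptor, not a detector, so detect keypoints first (FAST here).
detector = cv2.FastFeatureDetector_create()
keypoints = detector.detect(img, None)

# Compute the binary FREAK descriptors over the retinal sampling pattern.
freak = cv2.xfeatures2d.FREAK_create()
keypoints, descriptors = freak.compute(img, keypoints)

print(len(keypoints), 'keypoints,', descriptors.shape, 'descriptor matrix')

# Since the descriptors are binary strings, match them with Hamming distance,
# e.g. cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).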

to be continued…. 🙂

 

Categories: Computer Vision

Random Forest in Python

June 21, 2012

milk is a machine learning package written in Python. It comes with a companion data package called milksets, which includes several UCI machine learning datasets.

from milksets import wine

features,labels = wine.load()
features will be a 2-d numpy.ndarray of shape (n_samples, n_features) and labels will be a 1-d numpy.ndarray of labels running from 0 to N-1 (independently of how the labels were coded in the original data).

Below is an example of using milk’s random forest to predict the labels for the wine data: three classes, with a feature matrix of shape (178, 13). Samples plotted with marker ‘o’ are correct predictions, and those with marker ‘x’ are incorrect ones. The prediction takes some time; the cross-validation accuracy comes out to 0.9438.
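
A minimal sketch of such a run, assuming milk’s documented rf_learner / one_against_one / nfoldcrossvalidation interfaces (rf_learner is a binary learner, so it is wrapped for the three wine classes):

import milk
import milk.supervised.randomforest
import milk.supervised.multi
from milksets import wine

features, labels = wine.load()          # features: (178, 13), labels: 3 classes

# rf_learner is a binary classifier; wrap it for the multi-class wine problem.
rf = milk.supervised.randomforest.rf_learner()
learner = milk.supervised.multi.one_against_one(rf)

# n-fold cross-validation returns a confusion matrix plus the label names.
cmatrix, names = milk.nfoldcrossvalidation(features, labels, learner=learner)
accuracy = cmatrix.trace() / float(cmatrix.sum())
print('cross-validation accuracy:', accuracy)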

Categories: Machine Learning, Python

Mean Shift Segmentation

June 13, 2012

There are actually two steps in mean-shift image segmentation: mean-shift filtering, and then some merging and elimination of regions for the segmentation. Here’s a paper that states the process well; I found it quite clear and easy to understand. Below are some notes on mean-shift segmentation.

1. It is based on non-parametric density estimation: no assumptions about probability distributions, and no restriction on the spatial window size (which is different from bilateral filtering).

2. It works in the joint spatial-range domain (x, y, f(x, y)): the spatial domain refers to the image coordinates, while the range domain refers to the image values, e.g. 1 dimension for a gray image, 3 for an RGB color image, etc.

3. It finds the maxima in the (x, y, f) space; clusters that are close in both space and range correspond to classes.

4. The three parameters (e.g. in EDISON) are:

sigmaS: normalization parameter for the spatial domain

sigmaR: normalization parameter for the range domain

minRegionSize: minimum size (lower bound) for a region to be declared a class

To understand the two normalization parameters, sigmaS and sigmaR, think of the window size of the kernel in kernel density estimation. It controls the ‘range’, or smoothness, of the kernel, i.e. how fast the kernel decays. The larger the normalization parameter, the smoother the result in the corresponding domain (spatial or range), or the slower the decay. A value of zero corresponds to a delta function concentrated on the center, so the filtering output is the same as the input and every detail (each pixel) is retained. A larger sigmaS smooths the spatial domain, while a larger sigmaR smooths the range (color) domain. From the results I obtained, sigmaS is much more sensitive than sigmaR.

The most well-known open-source implementation of mean shift, which is also very fast, is EDISON. If you want to use it in MATLAB, there are also some wrappers. A rough Python sketch of the joint-domain idea follows below the example results.

Left: original; middle: (sigmaS, sigmaR, minRegionSize) = (4, 4, 5); right: (10, 4, 5)
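
Just to illustrate the joint spatial-range idea in Python (this is not EDISON: there is no region merging / minRegionSize step, the bandwidth and scale values are arbitrary, and scikit-learn’s MeanShift is slow on full-size images, so try it on a small thumbnail first):

import numpy as np
from sklearn.cluster import MeanShift

def meanshift_labels(img, spatial_scale=10.0, range_scale=10.0):
    """Cluster pixels in the joint (x, y, r, g, b) domain.

    img is assumed to be an HxWx3 RGB array; the two scales play the role
    of sigmaS / sigmaR: larger values flatten that dimension, i.e. smooth
    more in that domain.
    """
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    features = np.column_stack([
        xs.ravel() / spatial_scale,            # spatial domain
        ys.ravel() / spatial_scale,
        img.reshape(-1, 3) / range_scale,      # range (color) domain
    ])
    ms = MeanShift(bandwidth=1.0, bin_seeding=True)
    return ms.fit_predict(features).reshape(h, w)   # one label per segment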

 

Select and Crop Region of the Figure in Python

June 13, 2012

I wanted to be able to interactively select and crop a region of a figure in Python. Here are some approaches that I found quite useful; you can modify the code and adapt it to your needs.

* http://stackoverflow.com/questions/6136588/image-cropping-using-python
* http://stackoverflow.com/questions/6916054/how-to-crop-a-region-selected-with-mouse-click-using-python
* http://kitchingroup.cheme.cmu.edu/software/python/matplotlib/interacting-with-data-sets
* http://scienceoss.com/interactively-select-points-from-a-plot-in-matplotlib

And surprisingly, matplotlib has a ginput function just like the one in MATLAB.
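
For example, here is a small sketch of an interactive crop with ginput (the file name ‘figure.png’ is just a placeholder for whatever image you want to crop):

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread('figure.png')     # hypothetical file name

fig, ax = plt.subplots()
ax.imshow(img)
ax.set_title('Click two opposite corners of the crop region')

# ginput blocks until two points are clicked, just like MATLAB's ginput.
(x0, y0), (x1, y1) = plt.ginput(2)
plt.close(fig)

# Rows correspond to y and columns to x in the image array.
r0, r1 = sorted([int(y0), int(y1)])
c0, c1 = sorted([int(x0), int(x1)])
cropped = img[r0:r1, c0:c1]

plt.imshow(cropped)
plt.show()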

Categories: Python

ImageJ

June 12, 2012

Here’s the wonderful ImageJ <http://rsbweb.nih.gov/ij/plugins/index.html>

Sometimes one can really learn something just by reading these fundamental algorithms.

And here’s a version of mean shift, and a scikit-learn Python version.

Here’s a nice comparison of different clustering methods.

Categories: Image Processing, Java

How to map a network drive from a Mac to a PC

June 6, 2012

This is something I found recently that is very useful.

 

1. In the Finder, click on the Go menu and select Connect to Server.

2. Enter the address of the resource you wish to map, e.g.:

smb://www.domain.com/foldername

or, if you have the IP address:

smb://100.250.95.98/YourSharedFolderName

3. Enter your network password when prompted.

4. A new icon should appear on the desktop. That is your mapped network drive.

Categories: MacOS