Random notes

July 1, 2014

If you type relentless.com into a browser, you will be re-routed to Amazon. Amazon is introducing its new Fire phone, which includes a recognition technology called Firefly that can identify movies, songs, etc. Certainly interesting, but I look forward to seeing how well and how fast it performs when it finally comes out.

Categories: MISC

Installing Mercurial on Mac

May 29, 2014
$ brew install mercurial

If you see errors like:

clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future] clang: note: this will be a hard error (cannot be downgraded to a warning) in the future

you can disable the 'warning' (which now shows up as an error) by:

$ ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future \
  brew install mercurial

Again, after the install succeeds, if you see a linking error like:

Error: Could not symlink file: /usr/local/Cellar/mercurial/2.9/share/man/man5/hgrc.5
/usr/local/share/man/man5 is not writable. You should change its permissions.

You can change the permissions. It is said to be safe to change the permissions for the whole of /usr/local; if you don't want to do that, just fix this one directory:

$ sudo chown -R 'your-user-name' /usr/local/share/man/man5
$ brew link mercurial
Categories: Python, Software

Deep learning on visual recognition task

May 13, 2014

The current benchmark results on visual recognition tasks:

http://www.csc.kth.se/cvap/cvg/DL/ots/

Categories: Uncategorized

Install Deepnet on Mac

November 15, 2013

This may help you get Nitish's deepnet working on your Mac. The code is very clean; the most important thing is to follow the instructions here: https://github.com/nitishsrivastava/deepnet/blob/master/INSTALL.txt

(1) DEPENDENCIES

a) You will need NumPy and SciPy installed first, because the tool is largely Python. A simple way is to use 'brew'.

b) CUDA Toolkit and SDK.
Follow the instructions (CUDA 5.5): http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-mac-os-x/
The NVIDIA CUDA Toolkit is available at http://developer.nvidia.com/cuda-downloads

I followed both the instructions at http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-mac-os-x/ and the instructions from deepnet to set the system paths:

export PATH=/Developer/NVIDIA/CUDA-5.5/bin:$PATH
export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-5.5/lib:$DYLD_LIBRARY_PATH

Then follow the deepnet instructions: on a Mac the file is '~/.profile'; edit/add the lines below (they reference CUDA 5.0, so adjust the version to match your install):

export CUDA_BIN=/usr/local/cuda-5.0/bin
export CUDA_LIB=/usr/local/cuda-5.0/lib
export PATH=${CUDA_BIN}:$PATH
export LD_LIBRARY_PATH=${CUDA_LIB}:$LD_LIBRARY_PATH

First make sure CUDA is installed correctly. Install the samples:

$ cuda-install-samples-5.5.sh <dir>

Then go to /Developer/NVIDIA/CUDA-5.5/samples, choose any simple example subfolder, go into it and run 'make'; after make completes, you can run the resulting binary as a simple test.

c) Protocol Buffers.

Download the file: http://code.google.com/p/protobuf/

Follow the instructions to compile/install it. It will generally be installed as /usr/local/bin/protoc. You only need to include the directory that contains 'protoc', so add it to the path:

export PATH=$PATH:/usr/local/bin

(2) COMPILING CUDAMAT AND CUDAMAT_CONV

To make the CUDA part compile, change all the 'uint' to 'unsigned' in the file cudamat_conv_kernels.cuh (or add a '#define uint unsigned'), then run 'make' in the cudamat folder.

(3, 4) STEPS 3 AND 4

Continue with steps 3 and 4 at https://github.com/nitishsrivastava/deepnet/blob/master/INSTALL.txt and you will get there.

Note (1): I did not separately install the cudamat library by Vlad Mnih or the cuda-convnet library by Alex Krizhevsky.

Note (2): If you do NOT have a GPU: most recent Macs come with an NVIDIA 650, but some older models use an Intel graphics card. In that case you can still do the deep learning part, just using eigenmat instead. The drawback is that it will be very slow.

Install Eigen from here: http://eigen.tuxfamily.org/index.php?title=Main_Page
If you get an error that <Eigen/...> cannot be found, change the includes to "Eigen/...".
You also need to change the Python path to include the directory where 'libeigenmat.dylib' is located. If it still fails to find libeigenmat.dylib, it may not hurt to give it a direct path by editing the file eigenmat/eigenmat.py:
_eigenmat = ct.cdll.LoadLibrary('the-path-to/libeigenmat.dylib')
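
For example, a slightly more defensive version of that line (the path below is a placeholder; point it at wherever your build put the dylib):

import ctypes as ct
import os

# hypothetical location of the compiled library; adjust to your checkout
_lib_path = os.path.expanduser('~/deepnet/eigenmat/libeigenmat.dylib')
if not os.path.exists(_lib_path):
    raise RuntimeError('libeigenmat.dylib not found at ' + _lib_path)
_eigenmat = ct.cdll.LoadLibrary(_lib_path)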

Rectifier Nonlinearities

November 6, 2013

There are multiple choices of activation function for a NN. Much work has shown that using rectified linear units (ReLU) helps improve discriminative performance.

The figure below shows a few popular activation functions, including sigmoid and tanh.

[Figure: popular activation functions]

sigmoid: g(x) = 1 / (1 + exp(-x)). The derivative of the sigmoid function is g'(x) = (1 - g(x)) g(x).

tanh: g(x) = sinh(x)/cosh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x))

The rectifier (hard ReLU) is really a max function:

g(x) = max(0, x)

Another version is the noisy ReLU, max(0, x + N(0, σ(x))). ReLU can be approximated by the so-called softplus function (whose derivative is the logistic function):

g(x) = log(1+exp(x))

The derivative of the hard ReLU is constant over each of the two ranges x < 0 and x > 0: for x > 0, g' = 1, and for x < 0, g' = 0 (at x = 0 it is not differentiable).
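
For concreteness, here is a minimal NumPy sketch of these functions and their derivatives (the function names are mine):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    g = sigmoid(x)
    return (1.0 - g) * g              # g'(x) = (1 - g(x)) g(x)

def relu(x):
    return np.maximum(0.0, x)         # hard ReLU: g(x) = max(0, x)

def d_relu(x):
    return np.where(x > 0, 1.0, 0.0)  # piecewise-constant derivative

def softplus(x):
    return np.log(1.0 + np.exp(x))    # smooth ReLU; its derivative is sigmoid(x)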

This recent ICML paper discusses possible reasons why ReLU sometimes outperforms the sigmoid function:

  • Hard ReLU naturally enforces sparsity.
  • The derivative of ReLU is constant, whereas the derivative of the sigmoid dies out if we either increase or decrease x.
Categories: Machine Learning

Exercising Sparse Autoencoder

November 5, 2013

Deep learning has recently become such a hot topic across both academia and industry. I guess the best way to learn this stuff is to implement it. So I checked the recent tutorial posted at

ACL 2012 + NAACL 2013 Tutorial: Deep Learning for NLP (without Magic)

and they have a nice 'assignment' for whoever wants to learn about sparse autoencoders. So I got my hands on it, and the final code is here.

There are two main parts to an autoencoder: feedforward and backpropagation. The essential thing to calculate is the "error term", because it determines the partial derivatives for the parameters, both W and the bias term b.
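
Roughly, for a single hidden layer and a squared-error cost, the feedforward pass and the error terms look like this (a NumPy sketch with my own variable names; columns of X are training examples):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_backward(X, W1, b1, W2, b2):
    m = X.shape[1]                    # number of training examples
    # feedforward
    a2 = sigmoid(W1.dot(X) + b1)      # hidden-layer activations
    a3 = sigmoid(W2.dot(a2) + b2)     # reconstruction of the input
    # error terms ("deltas") for the squared-error cost
    delta3 = -(X - a3) * a3 * (1.0 - a3)
    delta2 = W2.T.dot(delta3) * a2 * (1.0 - a2)
    # partial derivatives for W and b, averaged over the batch
    grad_W2 = delta3.dot(a2.T) / m
    grad_b2 = delta3.sum(axis=1, keepdims=True) / m
    grad_W1 = delta2.dot(X.T) / m
    grad_b1 = delta2.sum(axis=1, keepdims=True) / m
    return a2, a3, (grad_W1, grad_b1, grad_W2, grad_b2)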

You can think of an autoencoder as an unsupervised learning algorithm that sets the target values equal to the inputs. But why, or rather, why bother to reconstruct the signal? The trick is actually in the hidden layer, where a small number of nodes is used (smaller than the dimension of the input data: this is the sparsity enforced on the hidden layer). So you may see that an autoencoder has this 'vase' shape.

[Figure: autoencoder with a small hidden layer]

Thus, the network will be forced to learn a compressed representation of the input. You can think of it as learning some intrinsic structure of the data that is concise, analogous to a PCA representation, where the data can be represented by a few axes. To enforce such sparsity, the average activation value (averaged across all training samples) of each node in the hidden layer is forced to equal a small value close to zero (the sparsity parameter). For every hidden node, a KL divergence between the 'target sparsity' and the 'average activation over the training data' is computed and added to both the cost function and the derivatives, which in turn update the parameters (W & b).
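
A sketch of that sparsity penalty (rho is the sparsity parameter, beta weights the penalty; both names are mine):

import numpy as np

def sparsity_terms(a2, rho=0.05, beta=3.0):
    # a2: hidden activations, shape (n_hidden, n_examples)
    rho_hat = a2.mean(axis=1, keepdims=True)   # average activation per hidden node
    # KL divergence between the target sparsity rho and the observed rho_hat
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    sparsity_cost = beta * kl
    # extra term for the hidden-layer error term
    sparsity_delta = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    return sparsity_cost, sparsity_delta

Here sparsity_cost is added to the reconstruction cost, and sparsity_delta is added to W2.T.dot(delta3) before the elementwise multiplication by a2 * (1 - a2) in the sketch above, so the penalty flows into the W1 and b1 gradients.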

After learning completes, the weights represent the signals (think of certain abstractions, or atoms) learned from the data without supervision, like below:

[Figure: learned weights]

Andrew Ng’s talk @ TechXploration

September 4, 2013

Here's the video recording of Andrew Ng's talk @ TechXploration.

Abstract: How “Deep Learning” is Helping Machines Learn Faster

What deep learning is and how its algorithms are shaping the future of machine learning; computational challenges in working with these algorithms; Google’s “artificial neural network,” which learns by loosely simulating computations of the brain.

1. http://youtu.be/UfKHi8cFWBQ

2. http://youtu.be/pAjwoDEOTzU

3. http://youtu.be/sREaRN0uY1A

4. http://youtu.be/NvMCM82dDlc

5. http://youtu.be/Rrq0-xtbGcE

6. http://youtu.be/qGkEgL_Tye4

 

Categories: Computer Vision