Deep learning has recently become a hot topic in both academia and industry. I figure the best way to learn something is to implement it, so I went through the recent tutorial posted at
ACL 2012 + NAACL 2013 Tutorial: Deep Learning for NLP (without Magic)
An autoencoder has two main computational parts: the feedforward pass and backpropagation. The essential quantity to calculate is the “error term”, because it determines the partial derivatives for the parameters, both the weights W and the bias term b.
You can think of an autoencoder as an unsupervised learning algorithm that sets the target values equal to the inputs. But why do that — or rather, why bother reconstructing the signal at all? The trick is in the hidden layer, which uses a small number of nodes (fewer than the dimension of the input data, enforcing a bottleneck on the hidden layer). This is why an autoencoder has its ‘vase’ shape.
Thus, the network is forced to learn a compressed representation of the input. You can think of it as learning some concise, intrinsic structure of the data, analogous to a PCA representation, where the data can be described by a few axes. To further enforce sparsity, the average activation value (averaged across all training samples) of each hidden node is forced to stay close to a small value near zero (the sparsity parameter). For every hidden node, the KL divergence between the desired average activation and the average activation observed on the training data is computed and added to both the cost function and the derivatives that update the parameters (W and b).
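To make this concrete, here is a minimal NumPy sketch of the feedforward pass, the sparsity penalty, and how the KL term enters the hidden layer’s error term. All names, shapes, and hyperparameter values here are hypothetical placeholders, not from the tutorial:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical shapes: X is (n_samples, n_inputs); W1 maps the input to a
# smaller hidden layer, W2 reconstructs the input from the hidden layer.
rng = np.random.default_rng(0)
n_samples, n_inputs, n_hidden = 100, 64, 25   # the 'vase': 64 -> 25 -> 64
X = rng.random((n_samples, n_inputs))
W1 = rng.normal(0, 0.01, (n_inputs, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.01, (n_hidden, n_inputs)); b2 = np.zeros(n_inputs)

rho = 0.05   # sparsity parameter: desired average activation
beta = 3.0   # weight of the sparsity penalty

# Feedforward
A1 = sigmoid(X @ W1 + b1)          # hidden activations
X_hat = sigmoid(A1 @ W2 + b2)      # reconstruction of the input

rho_hat = A1.mean(axis=0)          # average activation of each hidden node

# KL divergence between desired and observed average activations
kl = np.sum(rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# Cost = reconstruction error + sparsity penalty (weight decay omitted)
cost = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1)) + beta * kl

# Backpropagation: the sparsity term adds into the hidden layer's error term
delta2 = (X_hat - X) * X_hat * (1 - X_hat)
sparsity_grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
delta1 = (delta2 @ W2.T + sparsity_grad) * A1 * (1 - A1)
```

The gradients for W1 and W2 then follow from `delta1` and `delta2` as usual; the only change sparsity makes to plain backpropagation is that extra `sparsity_grad` term.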
After learning is complete, the weights represent the signals (think of them as abstractions, or atoms) learned from the data in an unsupervised way, like below:
Here’s a video recording of Andrew Ng’s talk @ TechXploration
Abstract: How “Deep Learning” is Helping Machines Learn Faster
What deep learning is and how its algorithms are shaping the future of machine learning; computational challenges in working with these algorithms; Google’s “artificial neural network,” which learns by loosely simulating computations of the brain.
Interesting work on “language memorability” by Jon Kleinberg
* You had me at hello: How phrasing affects memorability, Cristian Danescu-Niculescu-Mizil, Justin Cheng, Jon Kleinberg, Lillian Lee
Its counterpart on “memorability of visual data“:
* Making Personas Memorable, CHI 2007, extended abstracts on Human Factors in Computing Systems.
* Konkle, T., Brady, T. F., Alvarez, G. A., & Oliva, A. (2010). Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. Journal of Experimental Psychology: General, 139(3), 558-78. <Data http://cvcl.mit.edu/MM/objectCategories.html >
Another dataset: <http://visualrecall.org/datasets.html >
More publications: <http://visualrecall.org/publications.html >
* Understanding the intrinsic memorability of images, NIPS 2011, Phillip Isola, Devi Parikh, Antonio Torralba, Aude Oliva
A very data-driven, really data-centric talk on “big visual data” by Alexei A. Efros.
Installing MATLAB on Linux (e.g. 2012b)
* Go to the MATLAB website and download the version you are going to install
* Get the license and installation key, etc.
– Log in to your MATLAB online account and go to license management, tab “Activation and Installation”. Click ‘Activate’; a pop-up window will ask for certain information. For the host ID, run ‘/sbin/ifconfig eth0’ on the command line; the number after ‘HWaddr’ is the host ID.
– After filling in the information, you will be able to get an activation key and download the license file.
* With all the files downloaded, unzip the package and make a copy of ‘installer_input.txt’; follow the instructions inside and fill in the required options. Also make a copy of the ‘activate.ini’ file and fill in its required options the same way.
* Run ‘install -inputFile installer_input_yourcopy.txt’
* I found that after running this, the activation still did not succeed. So I changed to the folder where I installed MATLAB (e.g. Tools/MATLAB). Under the ‘bin’ folder, run ‘activate_matlab.sh -propertiesFile activate_yourcopy.ini’. Then create a folder ‘Tools/MATLAB/licenses’ and copy your license file into it.
* Now you are probably ready to go: ‘Tools/MATLAB/bin/matlab -nodisplay’
* You can also make an alias for this (e.g. in your .bash_profile), which makes life easier:
alias matlab='your-pathto-matlab/bin/matlab -nodisplay'
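Putting the steps above together, the whole procedure looks roughly like this. The archive name and install path below are placeholders from my setup and will differ on yours:

```shell
# Get the host ID needed for license activation (number after 'HWaddr')
/sbin/ifconfig eth0 | grep HWaddr

# Unpack the installer and fill in copies of the two template files
unzip matlab_installer.zip -d matlab_installer
cd matlab_installer
cp installer_input.txt installer_input_yourcopy.txt   # edit: key, install path
cp activate.ini activate_yourcopy.ini                 # edit: license file path

# Silent install, then activate from the install folder
./install -inputFile installer_input_yourcopy.txt
cd ~/Tools/MATLAB/bin
./activate_matlab.sh -propertiesFile ~/matlab_installer/activate_yourcopy.ini

# If activation still fails, copy the downloaded license in by hand
mkdir -p ~/Tools/MATLAB/licenses
cp ~/Downloads/license.lic ~/Tools/MATLAB/licenses/
```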
Trying out the Part-Models <http://people.cs.uchicago.edu/~rbg/latent/>
– Follow the instructions in the README file to compile the code.
– A simple case to try out: load the car model to detect cars, with results as:
Here are some interesting links I found from the ‘IMAGE AND VISUAL REPRESENTATION GROUP (IVRG)‘:
1. The free book ‘Joy of Visual Perception‘
2. CVonline: Vision Related Books including Online Books and Book Support Sites
Their algorithm — Radhakrishna Achanta and Sabine Susstrunk, Saliency Detection using Maximum Symmetric Surround, International Conference on Image Processing (ICIP), Hong Kong, September 2010 —
deals with the problem that when “salient regions comprise more than half the pixels of the image, or if the background is complex, the background gets highlighted instead of the salient object.“ I tested it on several images; it does not work in all cases, but it is quite a nice saliency detector, since it does avoid a problem common to most saliency detection methods: focusing mostly on high-frequency, dense-edge regions.
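To get a feel for the idea, here is a single-channel NumPy sketch of the maximum-symmetric-surround principle. Note this is only an illustration of the concept: the actual algorithm works on a blurred CIELab image, not raw grayscale, and this helper function is my own naming, not from their code:

```python
import numpy as np

def mss_saliency(img):
    """Sketch of Maximum Symmetric Surround saliency on a grayscale image.

    For each pixel, the 'surround' is the largest rectangle centered on that
    pixel which still fits inside the image; saliency is the absolute
    difference between the pixel value and the mean of that surround.
    Pixels near the border get small surrounds, which is what keeps large
    salient regions highlighted instead of the background.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    # Integral image with a zero border, so any box sum is four lookups
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    sal = np.zeros_like(img)
    for y in range(h):
        dy = min(y, h - 1 - y)          # largest symmetric vertical offset
        for x in range(w):
            dx = min(x, w - 1 - x)      # largest symmetric horizontal offset
            y0, y1 = y - dy, y + dy + 1
            x0, x1 = x - dx, x + dx + 1
            area = (y1 - y0) * (x1 - x0)
            box = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            sal[y, x] = abs(img[y, x] - box / area)
    return sal
```

On a toy image with a bright square on a dark background, the square’s pixels score well above the background, even though a plain center-surround filter would mostly fire on the square’s edges.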
Also, it seems a website has been built on top of their research that automatically crops images to give the best composition.