Resource Introduction
This is a small library that can train Restricted Boltzmann Machines (RBMs), and also Deep Belief Networks of stacked RBMs.
Train RBMs:
%train an RBM with binary visible units and 500 binary hidden units
model= rbmBB(data, 500);
%visualize the learned weights
visualize(model.W);
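Training also accepts optional name/value arguments, and a trained model can map data up to the hidden layer and back. A minimal sketch, based on the calls that appear in the example code further down (the 'verbose' flag and the rbmVtoH/rbmHtoV helpers are taken from that file, not from separate documentation):
%train with progress printouts, then reconstruct the training data
model= rbmBB(data, 500, 'verbose', true);
hidden= rbmVtoH(model, data);   %visible -> hidden
recon= rbmHtoV(model, hidden);  %hidden -> visible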
Do classification:
model= rbmFit(data, 500, labels);
prediction= rbmPredict(model, testdata);
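prediction is a vector of predicted labels, so the test error can be computed directly; a minimal sketch, assuming testlabels holds the ground-truth labels for testdata (as in the example code below):
%fraction of misclassified test cases
err= sum(prediction ~= testlabels) / length(prediction);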
Train a Deep Belief Network with a 500-500-2000 architecture for classification:
models= dbnFit(data, [500 500 2000], labels);
prediction= dbnPredict(models, testdata);
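models is a cell array with one trained RBM per layer, and training options can be supplied per layer. A minimal sketch, assuming one option struct per layer as in the example code below (which passes two structs for a two-layer DBN); the three-struct call here is an extrapolation from that pattern, not a documented signature:
%train each layer with progress printouts
op.verbose= true;
models= dbnFit(data, [500 500 2000], labels, op, op, op);
prediction= dbnPredict(models, testdata);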
See the included example code for more.
I can be contacted at andrej.karpathy@ gmail.
NOTE: This was a class project that I worked on for 1 month and then abandoned almost 4 years ago. Please do not send me specific questions about issues with the code or questions on how to do something. I only put this code online in the hope that it can be useful to others, but I cannot fully support it.
If you would like pointers to more actively maintained implementations, have a look here (https://github.com/rasmusbergpalm/DeepLearnToolbox) or maybe here (https://github.com/lisa-lab/DeepLearningTutorials)
Sorry and best of luck!
Source: http://code.google.com/p/matrbm/
Code Snippet and File Information
load mnist_classify;
%% Train RBM for classification
%train rbm with 100 hidden units
m= rbmFit(data, 100, labels, 'verbose', true);
yhat= rbmPredict(m, testdata);
%print error
fprintf('Classification error using RBM with 100 hiddens is %f\n', ...
    sum(yhat~=testlabels)/length(yhat));
%visualize weights
figure(1)
visualize(m.W);
title('learned weights');
%visualize the mislabeled cases. Note the transpose. Visualize assumes DxN
%as is the case for weights
figure(2)
visualize(testdata(yhat~=testlabels,:)');
title('classification mistakes for RBM with 100 hiddens');
drawnow;
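%(added sketch, not part of the original examplecode.m) break the error down
%per class, assuming testlabels holds integer class ids as used by rbmFit
for c= unique(testlabels(:))'
    idx= (testlabels == c);
    fprintf('class %d error: %f\n', c, sum(yhat(idx) ~= c) / sum(idx));
end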
%% Train model and denoise images
m2= rbmBB(data, 100, 'verbose', true);
%distort 100 images by setting 95% of the pixels to random noise
imgs= testdata(1:100,:);
b= rand(size(imgs)) > 0.05;   %mask selecting ~95% of the pixels
noised=imgs;
r=rand(size(imgs));
noised(b)=r(b);
%reconstruct the images by going up to the hidden layer and back down using the learned model
up= rbmVtoH(m2, noised);
down= rbmHtoV(m2, up);
%figure
z1= visualize(noised');
z2= visualize(down');
figure(3)
imshow([z1 z2])
title('denoising 95% noise with RBM with 100 hidden units');
drawnow;
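%(added sketch, not part of the original examplecode.m) quantify the denoising
%by the mean squared error between the clean images and the reconstructions
fprintf('mean squared reconstruction error: %f\n', mean((imgs(:) - down(:)).^2));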
%% Train a DBN
op.verbose=true;
models= dbnFit(data, [100 100], labels, op, op);
yhat2= dbnPredict(models, testdata);
%print error
fprintf('Classification error using DBN with 100-100 hiddens is %f\n', ...
    sum(yhat2~=testlabels)/length(yhat2));
%visualize weights
figure(4)
subplot(121)
visualize(models{1}.W);
title('learned weights on DBN layer 1');
subplot(122)
visualize(models{2}.W);
title('learned weights on DBN layer 2');
%visualize the mislabeled cases. Note the transpose. Visualize assumes DxN
%as is the case for weights
figure(5)
visualize(testdata(yhat2~=testlabels,:)');
title('classification mistakes for DBN with 100-100 hiddens');
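%(added sketch, not part of the original examplecode.m) the first DBN layer can
%also be used as a feature extractor; assumes models{1} has the same fields as
%an rbmBB model, so rbmVtoH accepts it
feats= rbmVtoH(models{1}, testdata);  %one row of hidden activations per test case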
Attribute    Size      Date       Time   Name
-----------  --------  ---------- -----  ----
File         1796      2010-10-31 13:00  RBMLIB\examplecode.m
File         2974337   2010-10-31 13:00  RBMLIB\mnist_classify.mat
File         1577      2010-10-31 13:01  RBMLIB\RBM\dbnFit.m
File         495       2010-10-31 13:01  RBMLIB\RBM\dbnPredict.m
File         409       2010-10-31 13:01  RBMLIB\RBM\interweave.m
File         65        2010-10-31 13:01  RBMLIB\RBM\logistic.m
File         977       2010-10-31 13:01  RBMLIB\RBM\nunique.m
File         690       2010-10-31 13:01  RBMLIB\RBM\prepareArgs.m
File         3819      2010-10-31 13:01  RBMLIB\RBM\process_options.m
File         5217      2010-10-31 13:01  RBMLIB\RBM\rbmBB.m
File         6147      2010-10-31 13:16  RBMLIB\RBM\rbmFit.m
File         358       2010-10-31 13:01  RBMLIB\RBM\rbmHtoV.m
File         877       2010-10-31 13:22  RBMLIB\RBM\rbmPredict.m
File         355       2010-10-31 13:01  RBMLIB\RBM\rbmVtoH.m
File         286       2010-10-31 13:01  RBMLIB\RBM\softmaxPmtk.m
File         371       2010-10-31 13:01  RBMLIB\RBM\softmax_sample.m
File         750       2010-10-31 13:01  RBMLIB\RBM\visualize.m
File         13554     2015-03-23 21:12  RBMLIB\readme.docx
Directory    0         2015-03-23 21:10  RBMLIB\RBM
Directory    0         2015-03-23 21:12  RBMLIB
-----------  --------  ---------- -----  ----
             3012080   20