Resource Description
I am currently studying the unconstrained conjugate gradient method and hope to discuss it with everyone!
Code Snippet and File Information
function [s err_mse iter_time]=block_gp(x,A,group,varargin)
% block_gp: Block Gradient Pursuit algorithm (modification from [1])
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Usage
% [s err_mse iter_time]=block_gp(x,P,m,'option_name','option_value')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Input
% Mandatory:
% x Observation vector to be decomposed
% P Either:
% 1) An nxm matrix (n must be dimension of x)
% 2) A function handle (type "help function_format"
% for more information)
% Also requires specification of P_trans option.
% 3) An object handle (type "help object_format" for
% more information)
% m length of s
%
% Possible additional options:
% (specify as many as you want using 'option_name','option_value' pairs)
% See below for explanation of options:
%__________________________________________________________________________
% option_name | available option_values | default
%--------------------------------------------------------------------------
% stopCrit | M corr mse mse_change | M
% stopTol | number (see below) | n/4
% P_trans | function_handle (see below) |
% maxIter | positive integer (see below) | n
% verbose | true false | false
% start_val | vector of length m | zeros
% GradSteps | 'auto' or integer | 'auto'
%
% Available stopping criteria :
% M - Extracts exactly M = stopTol elements.
% corr - Stops when maximum correlation between
% residual and atoms is below stopTol value.
% mse - Stops when mean squared error of residual
% is below stopTol value.
% mse_change - Stops when the change in the mean squared
% error falls below stopTol value.
%
% stopTol: Value for stopping criterion.
%
% P_trans: If P is a function handle then P_trans has to be specified and
% must be a function handle.
%
% maxIter: Maximum number of allowed iterations.
%
% verbose: Logical value to allow algorithm progress to be displayed.
%
% start_val: Allows the algorithm to start from a partial solution.
%
% GradSteps: Number of gradient optimisation steps per iteration.
% 'auto' uses inner products to decide if more gradient steps
% are required.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Outputs
% s Solution
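The header above is cut off part-way through the output description, but the documented interface is enough to sketch a call. The following is a minimal usage sketch, not taken from the toolbox's demo.m: the random test problem (A, x, s_true) and every option value are illustrative assumptions, and the third argument is passed as m (the length of s) as documented in the Usage line.

% Minimal usage sketch (assumed problem setup; block_gp is the toolbox function above)
n = 128; m = 256;                                  % dimensions of x and s (assumed)
A = randn(n, m);
A = A ./ repmat(sqrt(sum(A.^2, 1)), n, 1);         % unit-norm columns, as pursuit algorithms expect
s_true = zeros(m, 1);
s_true(9:16) = randn(8, 1);                        % one active block of 8 coefficients (assumed)
x = A * s_true;                                    % observation vector to be decomposed

[s, err_mse, iter_time] = block_gp(x, A, m, ...    % third input documented as "length of s"
    'stopCrit', 'mse', ...                         % stop on residual mean squared error
    'stopTol',  1e-6, ...
    'maxIter',  n, ...
    'verbose',  true);

With the default 'M' criterion instead, stopTol would specify the exact number of elements to extract.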
Attributes   Size      Date       Time  Name
-----------  --------- ---------- ----- ----
Directory    0         2009-06-07 17:44 GroupSparseBox\
File         14588     2009-01-30 15:47 GroupSparseBox\block_gp.m
File         17851     2009-01-30 15:47 GroupSparseBox\block_nomp.m
File         15806     2009-01-30 15:48 GroupSparseBox\block_pcgp.m
File         856       2009-01-30 16:07 GroupSparseBox\BMP.m
File         919       2009-01-24 22:42 GroupSparseBox\BOMP.m
File         1218      2009-01-30 16:14 GroupSparseBox\demo.m
File         549       2009-01-24 22:18 GroupSparseBox\fdrthresh.m
File         878       2009-01-24 22:18 GroupSparseBox\GenGroupSparseProblem.m
File         872       2009-01-30 16:07 GroupSparseBox\GMP.m
File         863       2009-01-24 22:17 GroupSparseBox\GOMP.m
File         15088     2009-01-30 16:11 GroupSparseBox\group_gp.m
File         18126     2009-01-30 15:46 GroupSparseBox\group_nomp.m
File         16090     2009-01-30 15:46 GroupSparseBox\group_pcgp.m
File         106       2009-01-24 22:18 GroupSparseBox\matrix_normalizer.m
File         1683      2009-01-24 22:18 GroupSparseBox\ReGOMP.m
File         1354      2009-01-24 22:17 GroupSparseBox\StGOMP.m
File         1559      2009-06-07 17:44 license.txt
Related Resources
- Optimization methods: answers to end-of-chapter exercises
- An Introduction to Optimization Solutions_Manua answers.pdf.zip
- Distributed Optimization and Statistical Learn
- Numerical Optimization, 2nd edition
- [W. Sun Y. Yuan] Optimization Theory and Metho
- Optimization theory code
- Comparison of different search methods (steepest descent, conjugate
- Optimization methods - Powell's method, using the 0.618 method for
- Conjugate gradient method for finding the minimum of an equation; program
- Algorithms for optimization problems
- Preconditioned conjugate gradient method for the linear system Ax=b
- Optimization methods exam paper.docx
- "An Introduction to Optimization" exercise answers
- Optimization methods exam paper
- Optimal design schemes and queueing theory
- Optimization methods course project
- GA genetic algorithm Fortran source code
- Operations research and optimization methods - lecture slides
- Optimization methods: multiplier method program