Neural Networks: Neuron and Network MATLAB Code

    xiaoxiao, 2021-03-25

    %% Perceptron neuron
    % The function newp creates a perceptron network:
    %   net = newp(PR,S)
    % PR is an R-by-2 matrix of input ranges; S is the number of neurons.
    net = newp([-2,+2;-2,+2],2);
    W = net.IW{1,1};
    b = net.b{1};

    % The initial weights and biases of the perceptron are zero because newp
    % applies the zero-initialization function initzero by default when the
    % object is created. The network can be re-initialized with the init
    % function:
    clear all; close all; clc;
    net = newp([-2,+2;-2,+2],2);
    net.inputweights{1,1}.InitFcn = 'rands';
    net.biases{1}.InitFcn = 'rands';
    net = init(net);
    W = net.IW{1,1};
    b = net.b{1};

    %% The function learnp is the most basic rule function for computing the
    %  weight and bias corrections during perceptron learning. Its help text
    %  in the MATLAB Neural Network Toolbox reads:
    %
    % function [out1,out2] = learnp(varargin)
    %LEARNP Perceptron weight/bias learning function.
    %
    %  learnp is the perceptron weight/bias learning function.
    %
    %  learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
    %    W  - SxR weight matrix (or b, an Sx1 bias vector).
    %    P  - RxQ input vectors (or ones(1,Q)).
    %    Z  - SxQ weighted input vectors.
    %    N  - SxQ net input vectors.
    %    A  - SxQ output vectors.
    %    T  - SxQ layer target vectors.
    %    E  - SxQ layer error vectors.
    %    gW - SxR gradient with respect to performance.
    %    gA - SxQ output gradient with respect to performance.
    %    D  - SxS neuron distances.
    %    LP - Learning parameters, none, LP = [].
    %    LS - Learning state, initially should be = [].
    %  and returns,
    %    dW - SxR weight (or bias) change matrix.
    %    LS - New learning state.
    %
    %  learnp(CODE) returns useful information for each CODE string:
    %    'pnames'    - Returns names of learning parameters.
    %    'pdefaults' - Returns default learning parameters.
    %    'needg'     - Returns 1 if this function uses gW or gA.
    %
    %  Here we define a random input P and error E to a layer
    %  with a 2-element input and 3 neurons.
    %
    %    p = rand(2,1);
    %    e = rand(3,1);
    %
    %  LEARNP only needs these values to calculate a weight change.
    %
    %    dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])
    %
    %  See also LEARNPN, NEWP, ADAPT, TRAIN.
    %
    %  Mark Beale, 1-31-92
    %  Revised 11-31-97, MB
    %  Copyright 1992-2010 The MathWorks, Inc.
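The `learnp` help text above boils down to a single rule: the weight change is `dW = e*p'` and the bias change is `db = e`, where `e = t - a` and `a` is the hard-limit output. As a cross-check of that rule outside MATLAB, here is a minimal pure-Python sketch; the function names and the AND-style toy data are illustrative assumptions, not toolbox code:

```python
# A pure-Python sketch of the learnp rule: dW = e * p', db = e,
# with e = t - a. Names here are illustrative, not toolbox identifiers.

def hardlim(n):
    """MATLAB-style hardlim transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def learnp_step(W, b, p, t):
    """One learnp update for a single neuron, input vector p, target t."""
    a = hardlim(sum(wi * pi for wi, pi in zip(W, p)) + b)
    e = t - a
    W = [wi + e * pi for wi, pi in zip(W, p)]   # dW = e * p'
    b = b + e                                   # db = e
    return W, b

# Linearly separable toy data (AND-like; an assumed example).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
W, b = [0.0, 0.0], 0.0
for _ in range(10):                  # a few passes suffice here
    for p, t in samples:
        W, b = learnp_step(W, b, p, t)

print(W, b)
print([hardlim(sum(wi * pi for wi, pi in zip(W, p)) + b) for p, _ in samples])
```

Because the data are linearly separable, the perceptron convergence theorem guarantees the loop settles on weights that classify every sample correctly.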
    % $Revision: 1.1.6.8 $ $Date: 2010/04/24 18:09:27 $
    %% =======================================================
    %  BOILERPLATE_START
    %  This code is the same for all Learning Functions.
    %
    %  persistent INFO;
    %  if isempty(INFO), INFO = get_info; end
    %  if (nargin < 1), nnerr.throw('Not enough arguments.'); end
    %  in1 = varargin{1};
    %  if ischar(in1)
    %    switch in1
    %      case 'info'
    %        out1 = INFO;
    %      case 'check_param'
    %        out1 = check_param(varargin{2});
    %      otherwise,
    %        try
    %          out1 = eval(['INFO.' in1]);
    %        catch me
    %          nnerr.throw(['Unrecognized first argument: ''' in1 ''''])
    %        end
    %    end
    %  else
    %    [out1,out2] = apply(varargin{:});
    %  end
    %  end
    %
    %  sf.apply = @apply;
    %  end
    %  function v = fcnversion
    %    v = 7;
    %  end
    %  BOILERPLATE_END
    %  =======================================================
    %
    %  function info = get_info
    %    info = nnfcnLearning(mfilename,'Perceptron',...
    %      fcnversion,subfunctions,false,true,true,false,[]);
    %    % TODO - Indicate that it requires error
    %  end
    %  function err = check_param(param)
    %    err = '';
    %  end
    %  function [dw,ls] = apply(w,p,z,n,a,t,e,gW,gA,d,lp,ls)
    %    dw = e*p';
    %  end

    %% p is the input vector; the learning error e is the difference between
    %  the target vector t and the network's actual output vector a:
    % clear all; close all; clc;
    % e = t - a;
    % dW = learnp(W,p,[],[],[],[],e,[],[],[],[],[]);
    % db = e;
    % W = W + dW;
    % b = b + db;

    %% Training a linear neuron with two-dimensional inputs
    clear all; close all; clc;
    P = [1,2,-1,0;2,2,0,-1];
    t = [0,0,1,1];
    net = newlin([-2,2;-2,2],1);
    net.trainParam.goal = 0.01;
    [net,tr] = train(net,P,t);  % train is a very powerful function, worth studying carefully
    W = net.IW{1,1};
    b = net.b{1};
    A = sim(net,P);
    err = t - A;
    perf = mse(err)  % use perf/err as names so the mse and error functions are not shadowed

    %% Linear neural network
    % A linear network can be designed directly from the input and target
    % vectors. The function newlind designs a linear network without any
    % training, minimizing the sum-squared error (SSE) between the network's
    % actual output and the target output. Usage:
    clear all; close all; clc;
    P = [1,2,-1,0;2,2,0,-1];
    t = [0,0,1,1];
    net = newlind(P,t);
    W = net.IW{1,1};
    b = net.b{1};
    A = sim(net,P);
    err = t - A;
    perf = mse(err)
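As described above, `newlind` designs the linear network directly, minimizing the squared error, rather than training iteratively. To illustrate what that direct design amounts to, here is a pure-Python sketch that solves the least-squares normal equations for the same `P` and `t` as the MATLAB example; the helper `solve3` is an assumed utility, not a toolbox function:

```python
# A pure-Python sketch of newlind-style direct design: fit w1, w2, b
# by least squares via the normal equations X'X theta = X't.

def solve3(A, y):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [y[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

P = [(1, 2), (2, 2), (-1, 0), (0, -1)]  # input vectors (columns of MATLAB's P)
t = [0, 0, 1, 1]                        # targets

# Design matrix rows [p1, p2, 1]; the trailing 1 absorbs the bias b.
X = [[p[0], p[1], 1.0] for p in P]
XtX = [[sum(X[q][i] * X[q][j] for q in range(4)) for j in range(3)] for i in range(3)]
Xtt = [sum(X[q][i] * t[q] for q in range(4)) for i in range(3)]
w1, w2, b = solve3(XtX, Xtt)

A = [w1 * p[0] + w2 * p[1] + b for p in P]
mse = sum((ti - ai) ** 2 for ti, ai in zip(t, A)) / len(t)
print(w1, w2, b, mse)
```

For this data set the closed-form solution is exact (w1 = -1/6, w2 = -7/27, b = 7/9), and no gradient-based training loop is needed, which is the whole point of `newlind` versus `newlin` + `train`.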
    When reposting, please credit the original source: https://ju.6miu.com/read-18386.html
