RBF Neural Network Approximation of y = sin(t) — by nuaazdh

【Principle】

The RBF neural network is a common type of locally-responsive neural network. An RBF network has a single hidden layer; the structure of a multi-input, multi-output RBF network is shown in Figure 1, where m, p and n denote the numbers of units in the input, hidden and output layers respectively.

[Figure 1: Structure of a multi-input, multi-output RBF network (inputs x_1…x_m; hidden layer with widths b_1…b_p; outputs y_1…y_n; hidden-to-output weights w_11…w_np)]

The radial basis function is a Gaussian:

    f(x) = \exp\left( -\frac{\|x - c\|^2}{2\sigma^2} \right)

where c is the center vector of the basis function and σ is its width (spread) parameter vector; both have the same dimension as the input vector.

Given an input vector x = [x_1, x_2, ..., x_m] and an output vector y = [y_1, y_2, ..., y_n], the output of the j-th hidden unit is

    h_j = \exp\left( -\frac{\|x - c_j\|^2}{2\sigma_j^2} \right), \quad j = 1, 2, \ldots, p

The weights between the input and hidden layers are fixed at 1 (an all-ones matrix). The weight matrix between the hidden and output layers is n×p and is adjusted continually as the network is trained. The i-th output of the RBF network is

    y_i = g_i(x) = \sum_{j=1}^{p} w_{ij} h_j(x), \quad i = 1, 2, \ldots, n

where g is the equivalent input-to-output mapping. The error of the i-th output at iteration k is

    e_i[k] = y_i^*[k] - y_i[k] = y_i^*[k] - \sum_{j=1}^{p} w_{ij} h_j(x), \quad i = 1, 2, \ldots, n

where y_i^*[k] is the expected value of the i-th output. Using the stochastic gradient method with a momentum correction term, the update rules for the parameters are:

    w_{ij}[k+1] = w_{ij}[k] + \eta_w e_i[k] h_j[k] + \alpha_w \left( w_{ij}[k-1] - w_{ij}[k-2] \right)

    \Delta\sigma_j[k] = e_i[k]\, w_{ij}[k]\, h_j[k]\, \frac{\|x - c_j[k]\|^2}{\sigma_j^3[k]}
    \sigma_j[k+1] = \sigma_j[k] + \eta_\sigma \Delta\sigma_j[k] + \alpha_\sigma \left( \sigma_j[k-1] - \sigma_j[k-2] \right)

    \Delta c_j[k] = e_i[k]\, w_{ij}[k]\, h_j[k]\, \frac{x - c_j[k]}{\sigma_j^2[k]}
    c_j[k+1] = c_j[k] + \eta_c \Delta c_j[k] + \alpha_c \left( c_j[k-1] - c_j[k-2] \right)

for j = 1, 2, …, p and i = 1, 2, …, n, where η_w, η_σ and η_c are the learning rates for the weights w, the width parameters σ and the centers c (they may take different values), and α_w, α_σ and α_c are the corresponding momentum factors.

【Results】

Twenty points in [0, π] are taken as training samples of y = sin(t), the maximum number of training generations is set to 200, and the RBF network is trained. The trained network is then checked on points spaced 0.1 apart in [0, π] to observe the training effect. The minimum mean square error over the training samples is shown in Figure 2, and the network's training output (actual output) is compared with the expected output (the y = sin(t) curve) in Figure 3.

[Figure 2: Minimum mean square error vs. iteration (x-axis: Generation, y-axis: Mean Square Error)]

[Figure 3: Expected output y and actual network output yp vs. t]

Checking the same network on y = cos(t) over [0, 2π] yields the training-sample mean square error curve and the actual/expected output comparison shown in Figures 4 and 5.

[Figure 4: Mean square error curve for approximating y = cos(t)]

[Figure 5: Approximation of y = cos(t): expected output y and actual output yp]

【Code】

% RBF network approximation of y = sin(t)
% Author: nuaazdh
% Date: 2012-03-12 09:38:05
clear all; close all; clc;

t = [0:0.1:pi]';                     % independent variable t, one input per row
size_n = size(t, 1);                 % number of input samples
y = sin(t);                          % dependent variable y
yp = 0 * y;                          % network output
train_num = 10;                      % number of training samples
% output_dim = 3;
train_in  = zeros(train_num, 1);     % training sample inputs
train_out = zeros(train_num, 1);     % training sample outputs
hid_num = 5;                         % number of hidden units
% w_in_hid = zeros(hid_num, input_dim);  % input-to-hidden weights
w = 0.3 * zeros(1, hid_num);         % hidden-to-output weights
b = 0.8 * ones(1, hid_num);          % basis widths
center = 1.6 * rand(1, hid_num);     % basis centers
h = zeros(1, hid_num);               % radial basis vector of the network
eta_w = 0.1 * ones(1, hid_num);      % learning rates
eta_b = 0.1 * ones(1, hid_num);
eta_c = 0.15 * ones(1, hid_num);
alpha = 0.5;                         % momentum factor
maxgen = 800;                        % maximum number of iterations
error_goal = 1e-3;                   % required error tolerance
mse = zeros(1, maxgen);              % mean square error per generation
wk_1 = w; wk_2 = w;                  % weights at the previous two steps
bk_1 = b; bk_2 = b;                  % widths at the previous two steps
ck_1 = center;
ck_2 = center;                       % centers at the previous two steps
delta_c = zeros(1, hid_num);
B = zeros(maxgen, hid_num);
W = zeros(maxgen, hid_num);
C = zeros(maxgen, hid_num);

% Extract the training samples
for i = 1:train_num
    seq = floor(i / train_num * size_n);
    train_in(i)  = t(seq, :);
    train_out(i) = y(seq, :);
end

gen = 0;
% while 1
for i = 1:maxgen
    gen = gen + 1;                           % current generation
    ye = zeros(1, train_num);                % network output during training
    E  = zeros(1, train_num);                % performance index during training
    B(gen, :) = b;
    W(gen, :) = w;
    C(gen, :) = center;
    % Loop over the training samples
    for j = 1:train_num
        for k = 1:hid_num
            h(k) = exp(-norm(train_in(j) - center(k))^2 / (b(k)^2));
        end
        ye(j) = w * h';
        E(j) = 0.5 * (train_out(j) - ye(j))^2;   % performance index value
        % Update the parameters
        for k = 1:hid_num
            e = train_out(j) - ye(j);            % current output error
            w(k) = wk_1(k) + eta_w(k) * e * h(k) + alpha * (wk_1(k) - wk_2(k));
            % gradient step for the width: e * w * h * ||x - c||^2 / b^3
            delta_b = e * w(k) * h(k) * norm(train_in(j) - center(k))^2 / b(k)^3;
            b(k) = bk_1(k) + eta_b(k) * delta_b + alpha * (bk_1(k) - bk_2(k));
            delta_c(k) = e * w(k) * h(k) * (train_in(j) - center(k)) / (b(k))^2;
            center(k) = ck_1(k) + eta_c(k) * delta_c(k) + alpha * (ck_1(k) - ck_2(k));
        end
        % Save the parameter values from the previous two steps
        wk_2 = wk_1; wk_1 = w;
        bk_2 = bk_1; bk_1 = b;
        ck_2 = ck_1; ck_1 = center;
    end
    error = 0;                               % accumulated squared error
    for j = 1:train_num
        for k = 1:hid_num
            h(k) = exp(-norm(train_in(j) - center(k))^2 / (b(k)^2));
        end
        ye(j) = w * h';
        error = error + (train_out(j) - ye(j))^2;
    end
    mse(gen) = error;                        % record the error for this generation
    if error < error_goal                    % required accuracy reached
        break;                               % stop training
    end
end

% Run the trained network on the full input range
for i = 1:size_n
    for k = 1:hid_num
        h(k) = exp(-norm(t(i) - center(k))^2 / (b(k)^2));
    end
    yp(i) = w * h';
end

% Plot the curves
figure(1);
plot(train_in, train_out, 'linewidth', 2);
title('train\_out vs. train\_in');
%{
figure(2);
plot(t, y, 'linewidth', 2);
%}
figure(2);
plot([1:gen], mse(1:gen), 'linewidth', 2);
title('Minimum mean square error vs. iteration');
xlabel('Generation');
ylabel('Mean Square Error');
figure(3);
plot(t, y, t, yp, 'r--', 'linewidth', 2);
title('Expected and actual network output');
xlabel('t');
ylabel('y & yp');
legend('y', 'yp');
figure(4);
plot([1:gen], B(1:gen, 1), [1:gen], B(1:gen, 2), 'r--', [1:gen], B(1:gen, 3), 'k:', 'linewidth', 2);
title('Adjustment of the basis widths B');
figure(5);
plot([1:gen], W(1:gen, 1), [1:gen], W(1:gen, 2), 'r--', [1:gen], W(1:gen, 3), 'k:', 'linewidth', 2);
title('Adjustment of the weights W');
figure(6);
plot([1:gen], C(1:gen, 1), [1:gen], C(1:gen, 2), 'r--', [1:gen], C(1:gen, 3), 'k:', 'linewidth', 2);
title('Adjustment of the centers C');
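The momentum correction term in the update rules simply adds a fraction α of the most recent parameter change to the plain gradient step, which smooths the trajectory of the parameter. A tiny scalar sketch of this bookkeeping, using the η = 0.1 and α = 0.5 values from above; the error-times-activation products are made-up numbers, purely for illustration:

```python
eta, alpha = 0.1, 0.5        # learning rate and momentum factor (values from the text)
w_prev, w_curr = 0.0, 0.0    # parameter at the previous two steps

for e_h in [0.8, 0.6, 0.4]:  # hypothetical e[k] * h[k] products
    # plain gradient step plus a fraction of the last change
    w_next = w_curr + eta * e_h + alpha * (w_curr - w_prev)
    w_prev, w_curr = w_curr, w_next

print(w_curr)  # 0.08 -> 0.18 -> 0.27
```

With α = 0, this reduces to ordinary stochastic gradient descent; larger α carries more of the previous step forward, which is what the wk_1/wk_2 (and bk_1/bk_2, ck_1/ck_2) pairs in the MATLAB code keep track of.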
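For readers more comfortable with Python, the same training scheme can be sketched in NumPy. This is a minimal single-input, single-output illustration under the conventions of the text (Gaussian basis with 2σ² in the exponent, learning rates 0.1 / 0.1 / 0.15); the momentum terms and the early-stopping check are omitted for brevity, a small clamp keeps the widths positive (a safeguard not in the original), and all names here are illustrative rather than taken from the MATLAB code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: 10 samples of y = sin(t) on [0, pi], mirroring the MATLAB setup
t_train = np.linspace(0.0, np.pi, 10)
y_train = np.sin(t_train)

p = 5                                 # number of hidden units
c = rng.uniform(0.0, np.pi, p)        # basis centers
sigma = np.full(p, 0.8)               # basis widths
w = np.zeros(p)                       # hidden-to-output weights

eta_w, eta_s, eta_c = 0.1, 0.1, 0.15  # learning rates from the text

def hidden(x):
    # Gaussian basis: h_j = exp(-(x - c_j)^2 / (2 sigma_j^2))
    return np.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

def train_mse():
    # mean squared error over the training set
    return np.mean([(yt - w @ hidden(xt)) ** 2 for xt, yt in zip(t_train, y_train)])

mse_before = train_mse()
for epoch in range(500):
    for x, yt in zip(t_train, y_train):
        h = hidden(x)
        e = yt - w @ h                                        # output error e[k]
        w = w + eta_w * e * h                                 # weight update
        d2 = (x - c) ** 2
        sigma = sigma + eta_s * e * w * h * d2 / sigma ** 3   # width update
        sigma = np.clip(sigma, 0.1, None)                     # keep widths positive
        c = c + eta_c * e * w * h * (x - c) / sigma ** 2      # center update
mse_after = train_mse()

print(f"MSE before: {mse_before:.4f}, after: {mse_after:.6f}")
```

As in the MATLAB loop, each width/center update uses the weight value that was just updated for the current sample; the error should drop by a few orders of magnitude over the 500 epochs.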