{"id":267,"date":"2015-07-29T02:43:03","date_gmt":"2015-07-29T02:43:03","guid":{"rendered":"https:\/\/mgric.wordpress.com\/?p=267"},"modified":"2015-11-05T01:06:57","modified_gmt":"2015-11-05T01:06:57","slug":"advanced-optimization-methods-artificial-neural-networks-part-3","status":"publish","type":"post","link":"https:\/\/marcelloricottone.com\/blog\/2015\/07\/29\/advanced-optimization-methods-artificial-neural-networks-part-3\/","title":{"rendered":"Advanced Optimization Methods: Artificial Neural Networks Part 3"},"content":{"rendered":"<p>In our last part we went over the mathematical design of the neurons and the network itself. Now we are going to build our network in MATLAB and test it out on a real-world problem.<\/p>Let&#8217;s say that we work in a chemical plant. We are creating some compound and we want to anticipate and optimize our production. The compound is synthesized in a fluidized bed reactor. For those of you without a chemical engineering background, think of a tube packed with tons of pellets; fluid runs over these pellets and reacts to form a new compound. Your boss comes to you and tells you that there is too much impurity in the output stream. There are two things you can change to reduce the impurity: the amount of catalyst (the pellets in our tube) and the amount of stabilizer.<\/p>In the pilot-scale facility, you run a few tests varying the amount of catalyst and stabilizer. 
You come up with the following table of your results.<\/p>\n<table class=\" aligncenter\" style=\"border-collapse: collapse; width: 259px; height: 260px;\" border=\"0\" width=\"195\" cellspacing=\"0\" cellpadding=\"0\">\n<colgroup>\n<col style=\"width: 65pt;\" span=\"3\" width=\"65\" \/> <\/colgroup>\n<tbody>\n<tr style=\"height: 15pt;\">\n<td style=\"height: 15pt; width: 65pt;\" width=\"65\" height=\"15\">Catalyst<\/td>\n<td style=\"width: 65pt;\" width=\"65\">Stabilizer<\/td>\n<td style=\"width: 65pt;\" width=\"65\">Impurities %<\/td>\n<\/tr>\n<tr style=\"height: 15pt;\">\n<td style=\"height: 15pt;\" align=\"right\" height=\"15\">0.57<\/td>\n<td align=\"right\">3.41<\/td>\n<td align=\"right\">3.7<\/td>\n<\/tr>\n<tr style=\"height: 15pt;\">\n<td style=\"height: 15pt;\" align=\"right\" height=\"15\">3.41<\/td>\n<td align=\"right\">3.41<\/td>\n<td align=\"right\">4.8<\/td>\n<\/tr>\n<tr style=\"height: 15pt;\">\n<td style=\"height: 15pt;\" align=\"right\" height=\"15\">0<\/td>\n<td align=\"right\">2<\/td>\n<td align=\"right\">3.7<\/td>\n<\/tr>\n<tr style=\"height: 15pt;\">\n<td style=\"height: 15pt;\" align=\"right\" height=\"15\">4<\/td>\n<td align=\"right\">2<\/td>\n<td align=\"right\">8.9<\/td>\n<\/tr>\n<tr style=\"height: 15pt;\">\n<td style=\"height: 15pt;\" align=\"right\" height=\"15\">2<\/td>\n<td align=\"right\">0<\/td>\n<td align=\"right\">6.6<\/td>\n<\/tr>\n<tr style=\"height: 15pt;\">\n<td style=\"height: 15pt;\" align=\"right\" height=\"15\">2<\/td>\n<td align=\"right\">4<\/td>\n<td align=\"right\">3.6<\/td>\n<\/tr>\n<tr style=\"height: 15pt;\">\n<td style=\"height: 15pt;\" align=\"right\" height=\"15\">2<\/td>\n<td align=\"right\">2<\/td>\n<td align=\"right\">4.2<\/td>\n<\/tr>\n<\/tbody>\n<\/table><p>&nbsp;<\/p>After looking at the results, you decide to create a neural network to predict and optimize these values. As we know, we have two inputs, catalyst and stabilizer, and one output, impurity percent. 
From our last part on neural network structures, we know that we need two neurons in our input layer (one for catalyst and one for stabilizer) and one neuron in our output layer (impurity percent). That leaves only our hidden layer; since we do not expect a problem complex enough to require deep learning, we choose a single hidden layer. As for neurons, we will use three to make the problem a little more interesting. The structure is shown below.<\/p><a href=\"http:\/\/marcelloricottone.com\/blog\/wp-content\/uploads\/2015\/07\/screen-shot-2015-07-28-at-9-26-26-pm.png\"><img loading=\"lazy\" decoding=\"async\" class=\" size-full wp-image-274 aligncenter\" src=\"http:\/\/marcelloricottone.com\/blog\/wp-content\/uploads\/2015\/07\/screen-shot-2015-07-28-at-9-26-26-pm.png\" alt=\"Screen Shot 2015-07-28 at 9.26.26 PM\" width=\"572\" height=\"426\" srcset=\"https:\/\/marcelloricottone.com\/blog\/wp-content\/uploads\/2015\/07\/screen-shot-2015-07-28-at-9-26-26-pm.png 572w, https:\/\/marcelloricottone.com\/blog\/wp-content\/uploads\/2015\/07\/screen-shot-2015-07-28-at-9-26-26-pm-300x223.png 300w\" sizes=\"auto, (max-width: 572px) 100vw, 572px\" \/><\/a><\/p>Now that we have the structure, let us build our network in MATLAB.\u00a0The code is actually quite simple for this part. First, we arrange our two input variables in an n-by-2 matrix, one row per experiment. We then multiply this matrix by the hidden-layer weights and pass the result through our sigmoid function. These values are then multiplied by the output-layer weights and passed through the sigmoid function again; the result is our output, impurity %. 
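<p>The steps above can be written compactly in vectorized MATLAB. This is just a sketch using the same variable names as the full listing at the end of the post; the elementwise operators let us skip the explicit loops:<\/p>\n<pre class=\"brush: matlabkey; title: ; notranslate\" title=\"\">\r\n% vectorized forward pass (equivalent to the looped version in the full listing)\r\nsig = @(z) 1.\/(1+exp(-z));              % elementwise sigmoid\r\nactHidSig = sig(input_t0*weight_in);    % 7x3 hidden-layer activations\r\nactOutSig = sig(actHidSig*weight_out);  % 7x1 predicted impurity (scaled)\r\n<\/pre>\n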
\u00a0So let&#8217;s see how our network performs. The vector on the left holds our actual values (scaled to the maximum), and on the right is what our network predicted.<\/p><a href=\"http:\/\/marcelloricottone.com\/blog\/wp-content\/uploads\/2015\/07\/screen-shot-2015-07-28-at-10-31-56-pm.png\"><img loading=\"lazy\" decoding=\"async\" class=\" size-full wp-image-277 alignnone\" src=\"http:\/\/marcelloricottone.com\/blog\/wp-content\/uploads\/2015\/07\/screen-shot-2015-07-28-at-10-31-56-pm.png\" alt=\"Screen Shot 2015-07-28 at 10.31.56 PM\" width=\"114\" height=\"158\" \/><img loading=\"lazy\" decoding=\"async\" class=\" size-full wp-image-276 alignnone\" src=\"http:\/\/marcelloricottone.com\/blog\/wp-content\/uploads\/2015\/07\/screen-shot-2015-07-28-at-10-31-21-pm.png\" alt=\"Screen Shot 2015-07-28 at 10.31.21 PM\" width=\"100\" height=\"159\" \/><\/a><\/p>As you can see, the network&#8217;s predictions are not even remotely correct. That is because we are missing the most important part of a neural network: training. We must train our network to get the right predictions. 
In order to do this we need to do our favorite thing: optimize.<\/p>-Marcello<\/p>Here&#8217;s the code:<\/p>\n<pre class=\"brush: matlabkey; title: ; notranslate\" title=\"\">\r\n\r\n% ANN code\r\n% structure:\r\n% 2 inputs\r\n% 3 hidden nodes\r\n% 1 output\r\n\r\n%initial input data [catalyst, stabilizer]\r\ninput_t0 = [0.57 3.41; 3.41 3.41; 0 2; 4 2; 2 0; 2 4; 2 2];\r\n%normalize each input column by its maximum\r\ninput_t0(:,1) = input_t0(:,1)\/max(input_t0(:,1));\r\ninput_t0(:,2) = input_t0(:,2)\/max(input_t0(:,2));\r\n\r\n%normalize output data\r\noutput_t0 = [3.7 4.8 3.7 8.9 6.6 3.6 4.2];\r\noutput_t0 = output_t0\/max(output_t0);\r\n\r\n%randomly assigned weights\r\nweight_in = [.3 .6 .7;.2 .8 .5];\r\nweight_out = [.4 .6 .7]';\r\n\r\n%initialize matrices\r\nactHidSig = zeros(7,3);\r\nactOutSig = zeros(7,1);\r\n\r\n%find activation for hidden layer\r\nact_hid = input_t0*weight_in;\r\n\r\n%apply sigmoid to hidden activation\r\nfor i = 1:7\r\n    for j = 1:3\r\n        actHidSig(i,j) = 1\/(1+exp(-act_hid(i,j)));\r\n    end\r\nend\r\n\r\n%find activation for output layer\r\nact_out = actHidSig*weight_out;\r\n\r\n%apply sigmoid to output activation\r\nfor i = 1:7\r\n    actOutSig(i) = 1\/(1+exp(-act_out(i)));\r\nend\r\n\r\n%show results: actual (scaled) vs. predicted\r\noutput_t0'\r\nactOutSig\r\n\r\n<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>In our last part we went over the mathematical design of the neurons and the network itself. Now we are going to build our network in MATLAB and test it out on a real-world problem. Let&#8217;s say that we work in a chemical plant. 
We are creating some compound and we want to anticipate and&hellip; <\/p>\n<p class=\"toivo-read-more\"><a href=\"https:\/\/marcelloricottone.com\/blog\/2015\/07\/29\/advanced-optimization-methods-artificial-neural-networks-part-3\/\" class=\"more-link\">Read more <span class=\"screen-reader-text\">Advanced Optimization Methods: Artificial Neural Networks Part 3<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,1,2],"tags":[],"class_list":{"0":"post-267","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-advopt","7":"category-allposts","8":"category-opt","9":"entry"},"_links":{"self":[{"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/posts\/267","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/comments?post=267"}],"version-history":[{"count":3,"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/posts\/267\/revisions"}],"predecessor-version":[{"id":305,"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/posts\/267\/revisions\/305"}],"wp:attachment":[{"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/media?parent=267"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/categories?post=267"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/marcelloricottone.com\/blog\/wp-json\/wp\/v2\/tags?post=267"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}