So, I've come up with some pseudocode for determining the highest and lowest intervals. It compares the ranges of the intervals and also compares their respective upper and lower bounds to see which is higher or lower. To do this properly, the logic is broken into two functions as shown:
intervalH (n)
h = 1
for t = 2 to 8 ++1;
if (b_(n,t) - a_(n,t) > b_(n,h) - a_(n,h)) and (a_(n,t) > a_(n,h))
then h = t;
next t;
return h;
intervalL (n)
h = 8
for t = 7 to 1 --1;
if (b_(n,t) - a_(n,t) < b_(n,h) - a_(n,h)) and (b_(n,t) < b_(n,h))
then h = t;
next t;
return h;
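To make the idea concrete, here is a minimal Python sketch of the two routines above. It assumes the bounds are stored as 0-indexed lists `a[n][t]` and `b[n][t]` (so the pseudocode's intervals 1 through 8 become indices 0 through 7); the function and variable names are illustrative, not part of the original design.

```python
def interval_h(a, b, n):
    """Return the index of the 'highest' interval for neuron n:
    the one with the widest range and the highest lower bound."""
    h = 0  # running best, starting from the first interval
    for t in range(1, len(a[n])):
        # candidate t wins if its range is wider AND its lower bound is higher
        if (b[n][t] - a[n][t] > b[n][h] - a[n][h]) and (a[n][t] > a[n][h]):
            h = t
    return h

def interval_l(a, b, n):
    """Return the index of the 'lowest' interval for neuron n:
    the one with the narrowest range and the lowest upper bound."""
    l = len(a[n]) - 1  # running best, starting from the last interval
    for t in range(len(a[n]) - 2, -1, -1):
        # candidate t wins if its range is narrower AND its upper bound is lower
        if (b[n][t] - a[n][t] < b[n][l] - a[n][l]) and (b[n][t] < b[n][l]):
            l = t
    return l
```

The key point is that each candidate is compared against the running best rather than only its neighbor, so the result reflects all eight intervals and not just the last comparison.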
With so many loops, it's no wonder most designers of neural networks go for the single-weight system. As simple as this all seems, I can already tell that the computation will eat up a lot of processing power just to do one iteration of forward propagation, not to mention what it will take for the many iterations the network needs to learn or do anything. This will not discourage me, though; I will still press on. As I said before, this is only an abstract model that can, with time, be optimized to overcome such challenges. But if the first steps are not taken, no one gets anywhere.