A Three-Layer Perceptron in Modula-2

A blast from the past - 31 years ago (in 1994) I implemented my first neural network

Imagine that - there was a time before TensorFlow and PyTorch, when folks actually had to implement neural networks from scratch! This is how we did it in 1994.

This TopSpeed Modula-2 program features an interactive pattern editor for training-data setup and inference, backpropagation learning, and online loss-function visualization during training - think TensorBoard :-)

You even had to create your own GUI without libraries - DOS text mode would do just fine, and for graphics, you could switch into VGA mode for some basic plotting :-)

This was developed in January 1994 using TopSpeed Modula-2 Version 1.17 for DOS (text mode and VGA graphics).

TopSpeed Modula-2 a

TopSpeed Modula-2 b

To run the program, you can download DOSBox.
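
For example, assuming you have unpacked the downloads below into a local folder (the path here is just a placeholder), a minimal DOSBox session looks like this:

  Z:\> MOUNT C /path/to/neuronal
  Z:\> C:
  C:\> NEURONAL

MOUNT maps a host directory to drive C: inside DOSBox; NEURONAL then starts the DOS executable.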

Background

At that time, I was taking an introductory AI class at the University of Hamburg (Prof. Peter Schefe, R.I.P.), and implemented this as a demonstration of "MNIST"-like pattern recognition using a three-layer perceptron with backpropagation learning for the AI lab class.

The three-layer perceptron and the backpropagation learning algorithm were described in English pseudo-code in the well-known AI book: Rich, E. and Knight, K.: "Artificial Intelligence", 2nd edition, McGraw-Hill, 1991.
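
For reference, here is a minimal sketch of one backpropagation step as described there. This is my reconstruction, not the program's exact code; it assumes the "net" record and the MathLib0 "exp" import from the source listing below, a learning rate "eta", and unit 0 of the input and hidden layers acting as the bias unit (fixed at 1.0):

PROCEDURE Sigmoid(x : REAL) : REAL;
BEGIN
  RETURN 1.0 / (1.0 + exp(-x));
END Sigmoid;

PROCEDURE BackpropStep(VAR n : net; eta : REAL);
VAR i, j, k : CARDINAL;
    dOut    : ARRAY [1..10] OF REAL;  (* output-layer deltas *)
    dHid    : ARRAY [1..20] OF REAL;  (* hidden-layer deltas *)
    s       : REAL;
BEGIN
  (* output deltas: o * (1 - o) * (goal - o) *)
  FOR j := 1 TO 10 DO
    dOut[j] := n.Output[j] * (1.0 - n.Output[j]) * (n.Goal[j] - n.Output[j]);
  END;
  (* hidden deltas: h * (1 - h) * SUM over j of w2[i,j] * dOut[j] *)
  FOR i := 1 TO 20 DO
    s := 0.0;
    FOR j := 1 TO 10 DO s := s + n.w2[i,j] * dOut[j]; END;
    dHid[i] := n.Hidden[i] * (1.0 - n.Hidden[i]) * s;
  END;
  (* gradient-descent weight updates *)
  FOR i := 0 TO 20 DO
    FOR j := 1 TO 10 DO
      n.w2[i,j] := n.w2[i,j] + eta * dOut[j] * n.Hidden[i];
    END;
  END;
  FOR k := 0 TO 64 DO
    FOR i := 1 TO 20 DO
      n.w1[k,i] := n.w1[k,i] + eta * dHid[i] * n.Input[k];
    END;
  END;
END BackpropStep;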

Program

The workflow with this interactive program is as follows:

  1. Determine the topology of the three-layer perceptron, and other hyper-parameters such as the learning rate and the number of training epochs:

    Network Topology

  2. Use the pattern editor to create the "training data", i.e., the "MNIST"-like one- or two-dimensional patterns that the perceptron shall learn to recognize. Use the keypad number keys (4, 6, 8, 2) for cursor movement, and the 5 key to toggle a bit in the pattern. Use + and - to switch between patterns, or completely clear the current pattern with the l key:

    Training Data Editor 1

    Training Data Editor 2

  3. With the patterns (= training data) specified, start the backpropagation learning process by leaving the editor with the e key.

    The learning process starts, and the progress and convergence are visualized - the loss function for each pattern is shown graphically, epoch by epoch (a sketch of this loss computation follows this list). Usually, it converges quickly for each pattern; i.e., 100 epochs are typically more than enough with a learning rate of ~1. The graphs require a VGA graphics card (DOSBox emulates it):

    Loss Function

  4. With the three-layer perceptron fully trained, we can now use it for inference. After training for the requested number of epochs, the program returns to the pattern editor.

    We can now recall the individual training patterns and send them to the perceptron with the s key; the editor then shows the ground-truth training label as well as the output computed by the network under Netz: (German for "network"). This simply shows the levels of the output units, binarized via a > 0.5 threshold for on vs. off (see the forward-pass sketch after this list). Compare the Netz: classification result with the ---> training label. If the net was trained successfully, the training and computed labels should match for each pattern. Toggle through the different patterns with the + and - keys, and repeatedly send them to the network via s.

    Check the predefined patterns for correct classification, and also modify them a bit (or drastically), i.e., change them with the editor and feed the modified patterns into the perceptron using the s key.

    Sometimes, the perceptron learned to focus on just a few characteristic "bits" in the training patterns; it is interesting to remove as many bits as possible from a pattern without changing the classification result. This "robustness" - tolerating noise and large changes in the input pattern without affecting the classification - was (and still is) a selling point of perceptrons and neural networks in general.

    Runtime Inference 1

    Runtime Inference 2

    Runtime Inference 3

    Runtime Inference 4
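
To make steps 3 and 4 above concrete, here is a minimal sketch of the forward (inference) pass, a per-pattern loss of the kind plotted epoch by epoch (sum of squared errors is the standard choice; the program's exact scaling may differ), and the > 0.5 binarization behind the Netz: display. Again, this is my reconstruction, under the same assumptions as the backpropagation sketch above (including the Sigmoid procedure defined there):

PROCEDURE FeedForward(VAR n : net);
VAR i, j, k : CARDINAL;
    s       : REAL;
BEGIN
  n.Input[0]  := 1.0;  (* bias units *)
  n.Hidden[0] := 1.0;
  FOR i := 1 TO 20 DO  (* input -> hidden *)
    s := 0.0;
    FOR k := 0 TO 64 DO s := s + n.w1[k,i] * n.Input[k]; END;
    n.Hidden[i] := Sigmoid(s);
  END;
  FOR j := 1 TO 10 DO  (* hidden -> output *)
    s := 0.0;
    FOR i := 0 TO 20 DO s := s + n.w2[i,j] * n.Hidden[i]; END;
    n.Output[j] := Sigmoid(s);
  END;
END FeedForward;

(* per-pattern loss: sum of squared errors over the output units *)
PROCEDURE PatternLoss(VAR n : net) : REAL;
VAR j    : CARDINAL;
    e, s : REAL;
BEGIN
  s := 0.0;
  FOR j := 1 TO 10 DO
    e := n.Goal[j] - n.Output[j];
    s := s + e * e;
  END;
  RETURN s;
END PatternLoss;

(* an output unit counts as "on" iff its level exceeds 0.5 *)
PROCEDURE UnitOn(level : REAL) : BOOLEAN;
BEGIN
  RETURN level > 0.5;
END UnitOn;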

Source & Executable

You can find a DOS executable as well as the TopSpeed Modula-2 source code here.

Enjoy!

Source Code

MODULE Neuronal;

(* Algorithm from "E. Rich / K. Knight: Artificial Intelligence"   *)
(* Implementation and environment by Michael Wessel, January 1994  *)

FROM InOut     IMPORT Read, ReadInt, WriteInt, WriteString, Write, WriteLn;
FROM RealInOut IMPORT ReadReal, WriteReal;
FROM MathLib0  IMPORT exp;
FROM Lib       IMPORT RAND;
FROM Graph     IMPORT InitVGA, TextMode, GraphMode, Plot, Line;

(* at most 10 output units, 20 hidden units, and 8x8 = 64 input units *)

TYPE net = RECORD
             Input  : ARRAY[0..64] OF REAL;         (* levels of the input units  *)
             Hidden : ARRAY[0..20] OF REAL;         (* levels of the hidden units *)
             Output : ARRAY[1..10] OF REAL;         (* levels of the output units *)
             Goal   : ARRAY[1..10] OF REAL;         (* desired (target) output    *)
             w1     : ARRAY[0..64],[1..20] OF REAL; (* weights input -> hidden    *)
             w2     : ARRAY[0..20],[1..10] OF REAL; (* weights hidden -> output   *)
             ...
(Listing truncated - the full source code is in NEURONAL.MOD, downloadable below.)

Files

NEURONAL.EXE - DOS executable; run with DOSBox. (25.06 kB, 01/29/2025)

NEURONAL.zip - ZIP archive. (19.75 kB, 01/29/2025)

NEURONAL.MOD - TopSpeed Modula-2 source code. (11.03 kB, 01/29/2025)

  • Running it on a real XT PC @ 9.54 MHz

    Michael Wessel, 11 hours ago

    Sometimes I really forget how *insanely slow* XT machines were back in the day... I just ran it on my Schneider Euro PC. The machine is slightly beefed up: it has an XT CF IDE adapter, 640 KB of RAM (instead of the stock 512 KB), a VGA graphics card, and an MPU-401 and an AdLib-clone sound card in a neat little external DIY industrial ISA backplane, powered by my custom power supply. Needless to say, none of these cards makes the machine any faster, but they make it far more practical and fun to use (e.g., file exchange is just a matter of putting files on the CF card).

    And what can I tell you... with DOSBox it takes only seconds to train the perceptron with 20 hidden units on 4 patterns for 100 epochs, but on the Euro PC it takes more than 5 minutes for 2 patterns with 5 hidden units for 50 epochs. Insanely slow. I had forgotten HOW slow these machines were (and I didn't even run DOSBox at a higher "frame rate" - just default settings). Wow. I guess an 8087 co-processor would have helped a lot! And indeed, the second revision of the Euro PC came with one, as well as the full 640 KB (and who would ever need more than that...).

    Anyhow, it's definitely nice to see this 31-year-old program running on an even older piece of hardware (the Schneider Euro PC dates to ~1988), and to have my Modula-2 compiler back! Nothing better than the real deal.



Discussions

Michael Wessel wrote 2 hours ago:

No no, you got it all wrong - ceci n'est pas une labler! 


Ken Yap wrote 8 hours ago:

THIS IS A LABLER. 🙂 Just went to a Magritte exhibition yesterday. Anyway, those were the days. We still got work done without GHz CPUs.

