Richard Russell wrote: ↑Tue 11 Feb 2025, 15:01
I asked DeepSeek to write BBC BASIC code for a two-input, two-layer Perceptron and this is what it produced.

The tabular output from the program doesn't give any indication of what the Perceptron does with input values other than the 'ideal' 0.0 and 1.0, so I got it to plot the entire 'landscape' of results, with each input varying over the full range:
Interestingly, although this particular output represents a near-perfect result, the learning process (even with 10,000 iterations) doesn't reliably produce it. Some runs give very different results which, whilst still solving the exclusive-or problem, are far more 'marginal'. This is presumably because, with training data consisting of only four different states and nine weights to adjust, the model is under-constrained.
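For anyone wanting to experiment with the run-to-run variation, here is a minimal Python sketch (not the BBC BASIC program itself) of the same idea: a 2-2-1 sigmoid network with nine adjustable weights (including biases) trained on the four XOR states, then sampled over the full input range. The architecture, learning rate and iteration count are assumptions chosen to match the description above, not taken from DeepSeek's code; different random seeds will show how differently the 'landscape' can come out.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(seed, iterations=10000, lr=0.5):
    """Train a 2-2-1 sigmoid network (9 weights incl. biases) on XOR
    by plain stochastic gradient descent; returns the trained network."""
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(9)]   # random initialisation
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(iterations):
        for (x1, x2), t in data:
            # forward pass: two hidden neurons, one output neuron
            h1 = sigmoid(w[0]*x1 + w[1]*x2 + w[2])
            h2 = sigmoid(w[3]*x1 + w[4]*x2 + w[5])
            o  = sigmoid(w[6]*h1 + w[7]*h2 + w[8])
            # backward pass: squared-error deltas through the sigmoids
            do  = (o - t) * o * (1 - o)
            dh1 = do * w[6] * h1 * (1 - h1)
            dh2 = do * w[7] * h2 * (1 - h2)
            w[6] -= lr * do * h1;  w[7] -= lr * do * h2;  w[8] -= lr * do
            w[0] -= lr * dh1 * x1; w[1] -= lr * dh1 * x2; w[2] -= lr * dh1
            w[3] -= lr * dh2 * x1; w[4] -= lr * dh2 * x2; w[5] -= lr * dh2
    def net(x1, x2):
        h1 = sigmoid(w[0]*x1 + w[1]*x2 + w[2])
        h2 = sigmoid(w[3]*x1 + w[4]*x2 + w[5])
        return sigmoid(w[6]*h1 + w[7]*h2 + w[8])
    return net

# Train with two different seeds and print a coarse 5x5 'landscape'
# over the full 0..1 input range, to see how the solutions differ.
for seed in (1, 2):
    net = train_xor(seed)
    print(f"seed {seed}:")
    for i in range(5):
        print("  " + " ".join(f"{net(j/4, i/4):.2f}" for j in range(5)))
```

Because only the four corner states constrain the fit, two seeds that both classify XOR correctly can still disagree substantially about the interior of the grid, which is the under-constraint described above.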