What Is Connectionism?
Connectionism is a perspective in cognitive science and artificial intelligence that explains human cognition and mental processes through networks of simple interconnected units, often inspired by the human brain. Instead of viewing the mind as a step-by-step computer program, connectionist models emphasize parallel distributed processing, where many units work together to represent knowledge and perform tasks.
This approach builds on the pioneering work of Warren McCulloch and Walter Pitts (1943), who showed that networks of simple threshold units can compute logical functions. In these models, information flows as inputs are transformed through activation functions, passed along connections, and finally expressed in output units. Learning occurs through adjustments in the strength of these connections, a principle famously summarized as Hebbian learning (Hebb, 1949): "cells that fire together, wire together."
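The Hebbian principle can be written as a one-line update, Δw = η · x · y: a connection strengthens when its input and output are active at the same time. A minimal sketch in Python (the learning rate η and the repeated all-active pattern are illustrative choices, not part of Hebb's original formulation):

```python
# Minimal sketch of the Hebbian learning rule: delta_w = eta * x * y.
# A connection's weight grows when pre- and postsynaptic activity co-occur.

def hebbian_update(weight, x, y, eta=0.1):
    """Return the weight after one Hebbian step (eta is the learning rate)."""
    return weight + eta * x * y

w = 0.0
# Present the pattern "both units active" five times: the cells fire
# together, so the connection between them wires together.
for _ in range(5):
    w = hebbian_update(w, x=1.0, y=1.0)
print(round(w, 2))  # -> 0.5
```

Note that if either unit is silent (x = 0 or y = 0), the product is zero and the weight is left unchanged, which is exactly the "fire together" condition.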
By modeling distributed representation, information processing, and adaptive changes in connection weights, connectionist networks, or artificial neural nets, have become foundational to modern artificial intelligence.
Why Puzzles?
At Connectionism.online, we believe there is a natural bridge between connectionist learning and the experience of solving puzzle games. Each puzzle is like a miniature neural net:
🧠 Input Processing
You receive an input (the puzzle's challenge), just like how neural networks process initial data.
⚡ Activation Functions
Your brain activates possible strategies, spreading signals like an activation function in neural nets.
🔄 Weight Adjustment
Through trial and error, you search for the right pathway—similar to how neural nets adjust their weights.
🎯 Output Achievement
Finally, you reach the correct output, a sign that the right connection has been made.
In this way, every puzzle solved mirrors how a neural network discovers the best connections to reach its goal.
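The steps above map directly onto the forward pass of a single network unit. A minimal sketch (the sigmoid activation and the specific inputs, weights, and bias are made-up illustrations, not a model used on this site):

```python
import math

def sigmoid(z):
    """An activation function: squashes the summed signal into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, weights, bias):
    # Input processing: each input is scaled by its connection weight.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation: the unit's graded response to the combined signal.
    return sigmoid(z)

# Output: a single activation between 0 and 1 for this made-up input.
activation = forward([0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(activation, 3))  # -> 0.535
```

Weight adjustment is the remaining step: a learning rule (Hebbian or error-driven) would nudge `weights` and `bias` until the output matches the goal.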
Our Vision
Connectionism.online is more than a game portal. It is a space where:
Players train their own "neural connections" while enjoying escape games and logic puzzles.
Articles explore the intersections of artificial intelligence, connectionist networks, and human cognition.
Future original games will illustrate how concepts like activation functions and parallel distributed processing can be experienced interactively.
By combining puzzle play with insights from AI research, we hope to make both more engaging—and more connected.
Connectionism: A Deeper Background
Connectionism has deep historical roots:
McCulloch & Pitts (1943)
Introduced the idea that networks of simple "on/off" units could model mental processes, showing how logical functions could emerge from neural-like structures.
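The idea can be illustrated with a binary threshold unit (the specific weights and thresholds below are chosen for the example, not taken from the 1943 paper): with all weights equal to 1, the threshold alone selects which logical function emerges.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """A binary 'on/off' unit in the spirit of McCulloch & Pitts (1943):
    fires (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# With unit weights, the threshold determines the logical function:
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # -> [0, 0, 0, 1]
print([OR(a, b) for a in (0, 1) for b in (0, 1)])   # -> [0, 1, 1, 1]
```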
Donald Hebb (1949)
Proposed a biologically inspired learning rule—Hebbian learning—which explained how the strength of connections between neurons adapts through experience.
Rosenblatt (1958)
Developed the Perceptron, an early model of a learning machine that could classify inputs into categories, paving the way for modern artificial intelligence.
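Rosenblatt's learning rule fits in a few lines: whenever the perceptron misclassifies, each weight is nudged by the error times the input. A hedged sketch (the toy OR task, learning rate, and epoch count are invented for illustration):

```python
def predict(weights, bias, inputs):
    """Perceptron output: 1 if the weighted sum is non-negative, else 0."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if z >= 0 else 0

def train(samples, epochs=10, eta=0.1):
    """Perceptron rule: w <- w + eta * (target - output) * input."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + eta * error * x for w, x in zip(weights, inputs)]
            bias += eta * error
    return weights, bias

# A linearly separable toy task: learn logical OR from labelled examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # -> [0, 1, 1, 1]
```

Because OR is linearly separable, the weights converge after a few passes; a task like XOR would not converge, which is precisely the limitation that later multi-layer networks overcame.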
Rumelhart & McClelland (1986)
Advanced the field with the framework of Parallel Distributed Processing, demonstrating how large-scale connectionist networks could model language, memory, and other higher-level cognitive functions.
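One reason multi-layer networks mattered: a layer of units working in parallel can compute functions, such as XOR, that no single unit can. In this sketch the weights are hand-wired for illustration; real PDP networks learn them (e.g. via backpropagation):

```python
def step(z):
    """Threshold activation: fire (1) when the signal is non-negative."""
    return 1 if z >= 0 else 0

def layer(inputs, weight_rows, biases):
    """One layer of units, each processing all inputs in parallel."""
    return [step(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weight_rows, biases)]

def xor_net(a, b):
    # Hidden layer: one unit detects OR, the other detects AND.
    hidden = layer([a, b], [[1, 1], [1, 1]], biases=[-1, -2])
    # Output unit: "OR but not AND", i.e. exclusive or.
    return layer(hidden, [[1, -2]], biases=[-0.5])[0]

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # -> [0, 1, 1, 0]
```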
Together, these milestones shaped the foundation of today's neural nets—systems capable of learning patterns, representing information processing at scale, and mimicking certain aspects of human cognition.
References
- McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133.
- Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. Wiley.
- Rosenblatt, F. (1958). The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408.
- Rumelhart, D. E., & McClelland, J. L. (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press.