
2 posts tagged with "turing"


Rivan Chanian · 5 min read

Neural networks have shaped the way we interact with the world. From the deep learning technologies behind self-driving cars to the Natural Language Processing enhancements that power intelligent systems, neural networks are at the forefront of modern AI. But to fully understand the deep learning applications we use today, it’s important to examine the foundational theories that laid the groundwork for the field. By first looking at what a neural network is and then exploring the concepts underlying McCulloch and Pitts’ theoretical neural network design, we can better appreciate the ingenuity of the technology that has transformed modern AI.

What is a Neural Network?

A neural network is a computational model inspired by the structure of the brain. Neural networks typically consist of layers of nodes, or artificial neurons—an input layer, one or more hidden layers, and an output layer—connected to each other in a way that mimics the interconnected nature of neurons in the brain. Each connection between nodes carries a weight, and each node has a threshold: if the weighted sum of a node’s inputs exceeds its threshold, the node activates and passes information on to the next layer. The network “learns” by adjusting the weights of these connections through a process called backpropagation, which minimizes errors over multiple training iterations.
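
To make this concrete, here is a minimal sketch of a forward pass through a tiny network with hard-threshold activations. The layer sizes and random weights are arbitrary choices for illustration, and a real network would use differentiable activations so that backpropagation can adjust the weights:

```python
import numpy as np

def step(x, threshold=0.0):
    # A node "activates" (outputs 1) only when its weighted input exceeds the threshold.
    return (x > threshold).astype(float)

# A tiny network: 3 inputs -> 4 hidden nodes -> 1 output (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # weights on the input-to-hidden connections
W_output = rng.normal(size=(4, 1))   # weights on the hidden-to-output connections

def forward(x):
    hidden = step(x @ W_hidden)      # each hidden node fires if its weighted sum passes the threshold
    return step(hidden @ W_output)   # the output node does the same with the hidden activations

print(forward(np.array([1.0, 0.0, 1.0])))  # 0 or 1, depending on the random weights
```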

Modern neural networks are complex multi-layered networks capable of solving intricate tasks like image recognition, natural language processing, and autonomous driving. They have had a profound impact on modern technology, revolutionizing and enriching people's lives through their application in solutions ranging from large language models like GPT-4 to advancements in healthcare, such as disease detection and drug discovery.

How an artificial neural network works: input layer, hidden layers, output layers. (Image source: Facundo Bre)

The Turing Machine

To truly appreciate modern neural networks, it’s important to look at the story of their first theoretical inception. The origins of neural networks are intertwined with the origins of artificial intelligence itself, beginning in Cambridge in 1936, where a mathematician named Alan Turing was quietly laying the foundation for modern AI.

In 1936, Turing took on the Entscheidungsproblem, the question of whether there exists a general algorithm that can decide the truth or falsity of any statement within a given formal system. To prove that no such algorithm can exist, Turing invented a theoretical problem-solving machine called a Turing Machine. A Turing Machine consists of an infinite tape divided into cells, a head that can read and write symbols on the tape, and a finite set of rules. The machine operates by moving the head along the tape, reading symbols, and following the rules to write new symbols and move left or right, allowing it to simulate any algorithm given to it.
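
As a rough illustration of how such a machine operates, here is a toy simulator (an illustrative sketch, not Turing’s own construction). Its rule table defines one trivial machine that flips every bit on its tape and halts at the first blank:

```python
# Rules map (state, symbol) -> (new symbol, head move, new state).
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # "_" is the blank symbol
}

def run(tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"   # read the cell under the head
        new_symbol, move, state = rules[(state, symbol)]    # look up the rule
        if head < len(tape):
            tape[head] = new_symbol                          # write the new symbol
        else:
            tape.append(new_symbol)
        head += move                                         # move the head left or right
    return "".join(tape)

print(run("1011_"))  # -> "0100_"
```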

Using this, he answered the Entscheidungsproblem by proving that no algorithm can universally decide whether an arbitrary Turing machine will halt or run forever on a given input. This became known as the Halting Problem, which he detailed in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” Turing’s insight—that any computable function could be broken down into simple operations through reading and writing symbols on an infinite tape—was a revolutionary idea that sparked the development of all artificial intelligence fields that followed.

The Universal Turing machine: complete with Turing Machine descriptions, tape, and transitions. (Image source: MIT)

The First Neural Network

Inspired by Turing’s 1936 paper, Warren McCulloch, a neuroscientist, and Walter Pitts, a logician, published their influential 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity," in which they explored how the brain might perform computations. Turing’s paper provided a theoretical basis for thinking about computation in strictly formal terms and had shown that any computable function could be realized by a Turing machine. McCulloch and Pitts saw a parallel between Turing’s machine and the way groups of neurons might process and transmit information.

They proposed that neurons could be modeled as binary on-off units that fire when their inputs exceed a certain threshold (akin to receiving enough excitatory signals). By connecting these idealized neurons in various configurations, they demonstrated that such systems could implement basic logical operators like AND, OR, and NOT, raising the possibility that they might carry out arbitrarily complex computations. They were the first to describe what later researchers would call a neural network.
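
A rough sketch of the idea is below. It is simplified: the original model used a separate inhibitory input, whereas the NOT unit here uses a negative weight for brevity, and the particular weights and thresholds are just one common choice:

```python
def mcp_neuron(inputs, weights, threshold):
    # A McCulloch-Pitts style unit: fires (returns 1) only if the weighted sum
    # of its binary inputs reaches the threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Basic logical operators, each realised by a single threshold unit.
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```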

The McCulloch-Pitts Neuron Model. (Image source: available under fair use, Creative Commons)

Although their model was only theoretical and faced several limitations, their mathematical approach to neural functioning inspired subsequent generations of researchers—paving the way for cybernetics and later the field of artificial intelligence. Their work ultimately shaped the path for modern AI and deep learning, which are now deeply embedded in our everyday lives.

References

A. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” 1936. Available: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf.

W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943. Available: https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf.

Sofiya Flenova · 4 min read

New “modern intelligence” technologies such as signalling, telegraphy, and radio emerged at the beginning of the 20th century. They quickly became indispensable, especially in wartime, where they gave significant strategic advantages to military forces. But until recently, the significance of the once-obscure role intelligence played in WW2 was underappreciated, especially for the “Big Four” Allies (the United States, Soviet Union, United Kingdom, and China).

“Often intelligence is a winning factor.” - John Ferris, The Cambridge History of the Second World War.

However, the most interesting takeaway is that technologies born of the war planted the seed for Artificial Intelligence as a concept. The appearance of AI can be traced back to the work of cryptographers during World War II, who used early computing machines to crack military codes.

The Enigma and the Bombe

Every country involved in World War II had to encode its military communications to protect vital information from enemy forces. The Enigma machine, used by the Germans at the time, was an electromechanical device for encrypting and decrypting messages. Operators would type a plaintext message on the keyboard, and for each keystroke the machine would light up the corresponding ciphertext letter. The encoded message would then be transmitted, typically via radio. Because the machine’s rotors advanced with every keystroke, each letter was enciphered with a different substitution, which made the Enigma especially difficult to crack.
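
As a drastically simplified illustration of that stepping behaviour (a single “rotor”, no plugboard or reflector, so nothing like the real machine’s strength), consider a substitution whose offset advances after every keystroke:

```python
import string

ALPHABET = string.ascii_uppercase

def toy_encipher(message, start_offset=3):
    # Toy "one rotor" model: a Caesar-style substitution whose offset steps after
    # each keystroke, so the same plaintext letter encrypts differently each time.
    out, offset = [], start_offset
    for ch in message:
        out.append(ALPHABET[(ALPHABET.index(ch) + offset) % 26])
        offset = (offset + 1) % 26   # the "rotor" advances after every letter
    return "".join(out)

print(toy_encipher("AAAA"))  # -> "DEFG": one letter, four different ciphertext letters
```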

To counter the Enigma, the brilliant minds at Bletchley Park, including the now-renowned Alan Turing, developed the Bombe. The Bombe was designed to automate the process of testing possible Enigma rotor configurations to deduce the daily settings. It worked by running through millions of possible rotor settings to determine which configuration had been used to encrypt a message.
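
In spirit, the search looked something like the sketch below: guess a fragment of plaintext (a “crib”) and test every setting until one reproduces the intercepted ciphertext. This reuses the toy cipher from above so it stays self-contained; the real Bombe did the work electromechanically and exploited logical contradictions across linked Enigma replicas, which this sketch does not capture:

```python
import string

ALPHABET = string.ascii_uppercase

def toy_encipher(message, start_offset):
    # Same toy stepping cipher as in the previous sketch.
    out, offset = [], start_offset
    for ch in message:
        out.append(ALPHABET[(ALPHABET.index(ch) + offset) % 26])
        offset = (offset + 1) % 26
    return "".join(out)

def recover_setting(ciphertext, crib):
    # Try every possible starting setting; keep the one that reproduces the crib.
    for offset in range(26):
        if toy_encipher(crib, offset) == ciphertext[:len(crib)]:
            return offset
    return None

intercepted = toy_encipher("WEATHERREPORT", start_offset=17)
print(recover_setting(intercepted, crib="WEATHER"))  # -> 17
```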

But how was this code-breaking machine a predecessor to modern AI?

The Enigma machine

Breaking the Enigma cipher involved identifying patterns in ciphertext to deduce the underlying encryption keys. Modern neural networks, particularly in deep learning, excel at recognising patterns in complex datasets. Image and speech recognition, genomics, and medical imaging are all fields where such pattern detection is used today.

Alan Turing’s Bombe is considered an early example of a deterministic finite automaton (DFA), a theoretical model of computation that processes its input (here, ciphertext) step by step, transitioning between predefined states (in this case, rotor positions) according to a fixed set of rules, with exactly one possible next state for each state and input symbol. The Bombe’s operation highlighted the essence of algorithmic problem-solving and showed the potential of automation to minimise human error and speed up repetitive tasks, both now core principles of AI.
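
For readers unfamiliar with the term, here is a minimal DFA (an illustrative example, unrelated to the Bombe’s actual wiring) that reads a binary string one symbol at a time and accepts it if it contains an even number of 1s:

```python
# Transition table: (current state, input symbol) -> next state.
transitions = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd",  "0"): "odd",
    ("odd",  "1"): "even",
}

def accepts(word, start="even", accepting=("even",)):
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]  # exactly one next state per (state, symbol) pair
    return state in accepting

print(accepts("1100"))  # True  (two 1s)
print(accepts("1101"))  # False (three 1s)
```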

The Turing Test

The lessons learnt from Enigma’s cryptanalysis then got Turing thinking about intelligence itself. His well-known 1950 paper, “Computing Machinery and Intelligence,” posed the provocative question, “Can machines think?”. He proposed the Turing Test, a standard for determining whether a machine’s intelligence can be distinguished from a human’s. The original Turing Test is set out this way:

  • Three participants: a human evaluator, a human respondent, and a machine.
  • After a text-based conversation with both, the evaluator must decide whether they are interacting with a machine or a human.
  • If the machine succeeds in deceiving the evaluator, it has passed the Turing Test.

The principles of the Turing Test have sparked divergent evaluations over the past decades. Critics today argue that machines might pass the Turing Test by simply mimicking human-like responses, without demonstrating any genuine thought or consciousness. On this view, passing the Turing Test does not mean a machine has human-like intelligence; the test is overly focused on language, and cognition cannot be fully captured by text-based communication. Numerous alternatives now exist, such as the Coffee Test, the Visual Turing Test, the Lovelace Test, and theory-of-mind tests, which take other modalities into account, such as visual perception, decision-making, or even emotional understanding.

Alan Turing and the work of cryptologists during WWII pioneered the development of modern computing and gave rise to many major ideas that still inspire work in the Artificial Intelligence field today. However, we are still on a journey to understand how machines will evolve. Even current AI models like GPT-4 and Google Bard haven't yet advanced to the point where they can consistently pass the Turing Test; in fact, nothing in AI today is close to meeting the true intent of the test. The pursuit of creating machines that truly emulate human-like intelligence remains an ongoing challenge.

References

Ferris, J. (2015). Intelligence. The Cambridge History of the Second World War, pp.637–663. doi:https://doi.org/10.1017/cho9781139855969.027.

Dolan, J. (2023). Is the Turing Test Outdated? 5 Turing Test Alternatives. [online] MUO. Available at: https://www.makeuseof.com/is-turing-test-outdated-turing-test-alternatives/.