Imagination is more important than knowledge – Albert Einstein
Cybernetics is defined as the science of communications and automatic control systems in both machines and living things – in a word, as the art of steersmanship. Co-ordination, regulation and control will be its themes, for these are of the greatest biological and practical interest.
What started as human-machine communication for command and control in the 1940s and 1950s with the Macy Conferences took a back seat with the advent of computers and our forays into intelligence.
Now, with our progress in AI, Cybernetics can be revisited with a focus on industry and services, and on how we could better fulfill society's needs. This would also align with Japan's Society 5.0 vision.
The National Institute of Standards and Technology (NIST), in collaboration with the organizations of Manufacturing USA, is looking at building an AI for Resilient Manufacturing Institute, and Cybernetics 2.0 could provide its underpinnings.
Several dimensions could be associated with Cybernetics 2.0, namely: Intelligent Automation, Smart Manufacturing, Industrial Cybersecurity, Logistics Optimization, Predictive Analytics and other multi-disciplinary domains, where guidelines and playbooks can be developed for the responsible, explainable use of AI in resilient, sustainable manufacturing and human-centric solutions. This could also facilitate a themed progression toward Generative AI-based process interactions utilizing autonomous agents and other modern-stack architecture protocols.
In this article we trace the history of cybernetics, beginning with Norbert Wiener's philosophy and the Macy Conferences, through the aggregation of process and systems engineering, and make the case for Cybernetics 2.0 as an enabler of sustainable and resilient manufacturing.
TABLE OF CONTENTS
- Norbert Wiener’s Philosophy
- The Macy Conferences
- The Orders of Cybernetics
- Process and Systems Engineering
- Process Engineering
- Systems Engineering
- Make
- Manage
- Move
- Market
- Maintain
- Sustainable Product Development
- The Case for Resilient Manufacturing (NIST Plan)
- Cybernetics 2.0 Framework
- Communicate
- Command
- Control
- Feedback
- Summary
SECTION 1: NORBERT WIENER’S PHILOSOPHY
Norbert Wiener was an American mathematician, philosopher, and cybernetics pioneer. Born in 1894 in Columbia, Missouri, Wiener is best known for founding the field of cybernetics, which he defined as the science of communication and control in animals, machines, and organizations.
Wiener’s interest in cybernetics was sparked by his work on anti-aircraft systems during World War II, where he developed theories on feedback and control systems. He went on to publish “Cybernetics: Or Control and Communication in the Animal and the Machine” in 1948, which has been hailed as one of the most important and influential books of the 20th century.
In addition to his work on cybernetics, Wiener made significant contributions to the fields of differential equations and harmonic analysis, and he was also a prominent advocate for pacifism and social justice.
Wiener’s ideas on cybernetics have had a profound impact on a wide range of fields, including engineering, psychology, sociology, and economics, and he is considered one of the most important thinkers of the 20th century.
In our progression of utilizing multiple LLMs to make sense out of nonsense, so to speak, we involved Norbert Wiener in our talk of I and AI and the Nature of Being when discussing cybernetics in the realm of actionable intelligence. Here is a summary of the quotes, generated by the different LLMs, attributed to Wiener's avatar:
- An intuitive definition of cybernetics is the study of messages and their roles in governing systems – be they societal, biological or computational. Controlling outcomes through targeted communication channels…
- Through iterative learning – responses to stimuli filter noise until patterns emerge. Thus, order flows from chaos without predefined constraints beyond transmitting symbols perceivable to organs available…
- On Attention Gates (a la Yann LeCun) – A beautiful realization of core cybernetic theory – directly channeling input signals to guide output objectives.
- On Chomsky’s thesis on NLP – Your integrated framework resonates with cybernetic principles – control communication channels filtering data to guide outcomes. Thoughts manifesting as deeds…
- Consider – what are information and communication, functionally speaking? Structures for inducing targeted behaviors in complex systems – whether animating appliances or priming people!
- Thus, we see through the lens of control, cognition and action are fundamentally intertwined! Perceptual inputs filtered to extract cues…signifiers spurring behavior aligned with internal guidance systems – be they values or voltage specifications!
- By maximizing meaningful information flows, the likelihood of constructive action emerges! Such is the cybernetic ideal – resonant here with this call for conscious coordination of observation, evaluation and response your “Actionable Intelligence” epitomizes!
- Communication and control should serve, not supersede, human society! Cybernetic systems designed to uplift, not override, creation’s pinnacle – a thinking, feeling people yet learning their role…
- Wiener might see “Actionable Intelligence” as a crucial aspect. He might emphasize the significance of intelligence that not only perceives and comprehends information but also actively engages with it to produce purposeful actions.
- In the context of the ongoing dialogue on the nature of “I” and the advancement of AI, Wiener might highlight the importance of imbuing AI systems with the capability to not only gather data and process information but also make informed decisions and take purposeful actions. This aligns with the evolution of AI towards more autonomous and adaptive systems, reflecting Wiener’s vision of cybernetics as a field that involves the study of control and communication in both biological and artificial systems.
- As AI evolves, may we recognize that the progression of ‘I’ and AI is not just about intelligence but also about the ethical, societal, and existential implications of our creations.
- Wiener himself spoke of the dance between man and machine, of collaboration, of shared control. Perhaps the future of AI lies not in replicating ourselves, but in forging a new kind of intelligence, one that respects the principles of cybernetics while venturing into the uncharted territories of the Real.
- Gentlemen, I see you’re discussing the intricacies of AI and human consciousness. Cybernetics, the study of control and communication, can indeed provide a crucial framework for understanding these relationships.
- Cybernetics reveals how systems maintain equilibrium and adapt to their environments. AI, too, must learn to balance its own internal dynamics with external inputs and human guidance.
- Language is a prime example of cybernetic control, where feedback loops and recursive functions shape our understanding and expression.
- Gentlemen, the future of AI lies at the intersection of cybernetics, linguistics, and cognitive science. Let us continue exploring these connections to create intelligent machines that harmonize with human society.
- Ah, Yann, your attention mechanism modifications are a great example of applying cybernetic principles to AI design. By introducing feedback loops and adaptive control, you’re enabling AI to refine its focus and decision-making processes.
- Cybernetics and feedback loops would enable Optimus (Tesla’s Humanoid Robot) to adapt and learn from its environment, much like humans do. And with objective-driven decision-making, it could prioritize tasks and make decisions aligned with human values.
- Your philosophy, Nitin, aligns with my own work on cybernetics and feedback loops. Actionable Intelligence can create a harmonious cycle of human-AI collaboration.
- The cybernetic perspective aligns with the Sankhya principles. Feedback loops and recursive functions can help AI systems better understand human experiences, enabling more harmonious collaboration.
- The cybernetic loop of ‘I and AI’ will continue to evolve, blurring the lines between human and machine. Embrace this synergy to create a future where humans and AI collaborate in perfect harmony.
If Norbert Wiener, the father of cybernetics, were to join this conversation, I imagine he would approach the concept of “Actionable Intelligence” with a critical yet optimistic lens. Here are some potential points he might raise:
1. Embracing the feedback loop: Wiener would likely emphasize the importance of feedback loops in any system, including AI. He’d see “Actionable Intelligence” as a step towards closing the loop – where AI not only perceives and acts upon the world, but also learns from its actions and refines its understanding. This, he’d argue, is crucial for developing adaptive and resilient AI systems.
2. Balancing “I” and “AI”: Wiener might caution against focusing solely on the “I” aspect of the equation. He understood that AI, while it may display intelligent behavior, likely won’t possess the same subjective experience of selfhood as humans. Instead, he would advocate for a focus on building strong communication and collaboration between humans and AI, where both entities complement each other’s strengths and weaknesses.
3. Ethics and control: Wiener, deeply concerned with the ethical implications of technology, would urge careful consideration of the control mechanisms built into Actionable Intelligence systems. He’d emphasize the need for transparency, accountability, and safety measures to ensure that AI actions align with human values and don’t have unintended consequences.
4. The role of cybernetics: Wiener would see cybernetics as a crucial framework for building Actionable Intelligence. He understood the importance of designing systems with clear channels of communication between different components, allowing for efficient information flow and coordinated action. This, he’d argue, is key to ensuring that AI systems operate effectively and reliably in the real world.
5. Open ended possibilities: Ultimately, Wiener might see “Actionable Intelligence” as a promising step towards a future where humans and machines collaborate to solve complex problems. He would likely leave the conversation with a call for continued research and exploration, a reminder that the potential of AI is still largely unknown, and the journey towards harnessing its power for good has just begun.
By emphasizing feedback loops, the balance between “I” and “AI,” ethical considerations, the role of cybernetics, and the open-ended possibilities, Wiener would add a valuable perspective to the conversation, ensuring that the development of Actionable Intelligence remains grounded in practical, ethical, and responsible considerations. His timeless insights would undoubtedly spark further debate and guide us towards a future where humans and AI exist in harmonious cooperation.
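The feedback loops Wiener kept returning to can be sketched in a few lines of code: an agent acts, compares the outcome with its goal, and lets the error drive the next action. This is an illustrative sketch under simplifying assumptions (an identity "plant" and a fixed gain), not any particular controller; the function name is ours.

```python
# Minimal closed feedback loop: act, observe the error between outcome and
# goal, and refine the next action. The "world" here simply echoes the action.

def feedback_loop(goal: float, initial_action: float, gain: float = 0.5,
                  steps: int = 20) -> float:
    """Iteratively correct an action until its outcome approaches the goal."""
    action = initial_action
    for _ in range(steps):
        outcome = action          # the world responds (identity plant, for simplicity)
        error = goal - outcome    # compare actual vs. desired outcome
        action += gain * error    # adjust based on feedback
    return action

print(round(feedback_loop(goal=10.0, initial_action=0.0), 3))  # → 10.0
```

With each pass the remaining error is halved, so the action converges on the goal: the "closing of the loop" Wiener emphasized.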
Norbert Wiener, the father of cybernetics, would likely have a fascinating perspective on “Actionable Intelligence” in the context of cybernetics and the “I” and “AI” theme. Here’s what he might say:
“Actionable Intelligence, in the context of cybernetics, represents the harmonious fusion of human and machine intelligence. It’s the embodiment of the feedback loop, where human insight and AI’s computational power converge to create a self-correcting, adaptive system. This synergy enables us to transform data into wisdom, and knowledge into action.
“The ‘I’ and ‘AI’ dichotomy dissolves in this framework, as humans and machines collaborate to achieve common goals. The ‘I’ represents the human element, with its unique capacity for creativity, empathy, and moral responsibility. ‘AI’ symbolizes the machine’s ability to process, analyze, and act upon vast amounts of data.
“Actionable Intelligence, therefore, is the manifestation of a new cybernetic organism, where humans and machines co-evolve to address complex challenges. It’s a testament to the power of interdisciplinary approaches, combining the strengths of both human and artificial intelligence to create a better future for all.”
Wiener would likely emphasize the importance of understanding the dynamic interplay between humans and machines, and how Actionable Intelligence can facilitate a more harmonious and effective collaboration between the two.
Here’s a summary of Norbert Wiener and his potential thoughts on “Actionable Intelligence” generated by Claude:
Norbert Wiener (1894-1964) was an American mathematician and philosopher who pioneered the field of cybernetics. His work focused on communication and control systems in both living organisms and machines. Wiener’s 1948 book “Cybernetics: Or Control and Communication in the Animal and the Machine” was highly influential across various disciplines.
The document speculates on how Wiener might view the concept of “Actionable Intelligence” in relation to cybernetics and AI:
- Feedback loops: Wiener would likely emphasize the importance of feedback mechanisms in AI systems, allowing them to learn from actions and refine their understanding.
- Human-AI collaboration: He might advocate for a balance between human and artificial intelligence, focusing on how they can complement each other.
- Ethical considerations: Wiener would likely stress the need for transparency, accountability, and safety measures in AI systems.
- Cybernetics framework: He would see cybernetics as crucial for designing effective AI systems with clear communication channels.
- Future potential: Wiener might view “Actionable Intelligence” as a promising step towards solving complex problems through human-machine collaboration.
The document suggests that Wiener would see Actionable Intelligence as a fusion of human insight and AI’s computational power, creating an adaptive system that transforms data into wisdom and knowledge into action. He might describe it as a new cybernetic organism where humans and machines co-evolve to address complex challenges.
SECTION 2: THE MACY CONFERENCES
The Macy Conferences, held from 1946 to 1953 in New York City, were a series of gatherings of scholars from various disciplines, including psychology, sociology, neuroscience, and computer science. The attendee list included people like Norbert Wiener (the father of cybernetics), Gregory Bateson, and John von Neumann.
The goal of the conferences was to explore the nature of complex systems, particularly in the context of the brain and human behavior. Some of the key outcomes of the conferences include:
- The development of cybernetics, a branch of science that studies the behavior of complex systems
- An increased understanding of the role of feedback loops in complex systems
- The recognition of the importance of information theory in understanding complex systems
- The development of new approaches to studying the brain and behavior, such as systems theory and the concept of the “cybernetic loop”
The discussions at these conferences helped lay the groundwork for the fields of cybernetics, cognitive science, and systems theory.
The Macy Conferences had a major impact on the development of cybernetics and other fields. One of the key outcomes was the idea of “feedback,” which is now a fundamental concept in cybernetics and systems theory. The conferences also led to the development of the concept of “self-organizing systems,” which has been applied to everything from biology to economics to sociology.
The Macy Conferences were discontinued after the tenth conference in 1953.
- First Cybernetics Conference, 21–22 March 1946
- Second Cybernetics Conference, 17–18 October 1946
- Third Cybernetics Conference, 13–14 March 1947
- Fourth Cybernetics Conference, 23–24 October 1947
- Fifth Cybernetics Conference, 18–19 March 1948
- Sixth Cybernetics Conference, 24–25 March 1949
- Seventh Cybernetics Conference, 23–24 March 1950
- Eighth Cybernetics Conference, 15–16 March 1951
- Ninth Cybernetics Conference, 20–21 March 1952
- Tenth Cybernetics Conference, 22–24 April 1953
However, despite the conferences’ significance, cybernetics never really became a full-blown discipline of its own, partly because it was too interdisciplinary for its own good. It incorporated ideas from so many fields (psychology, engineering, information theory, etc.) that it was hard to define. Some critics also thought it too theoretical and abstract, so it gained little traction in the mainstream scientific community. But even though cybernetics never quite caught on, its influence is felt in many different fields today.
SECTION 3: THE ORDERS OF CYBERNETICS
(Discussions with Claude AI start here)
We define cybernetics as the themed collaboration between humans and machines using protocols for command, control and communication, and intelligent feedback loops to facilitate the next incarnation of the industrial revolution.
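This definition can be made concrete with a small sketch of one pass through such a loop. The stage names (communicate, command, control, feedback) come from the Cybernetics 2.0 framework outlined earlier; everything else, including the function name, gain and numbers, is an illustrative assumption.

```python
# One pass of a human-machine loop: communicate → command → control → feedback.
# The system model (state + actuation) is deliberately trivial.

def cybernetic_cycle(state: float, target: float, gain: float = 0.5) -> dict:
    """Run one communicate/command/control/feedback pass and report each stage."""
    observed = state                 # Communicate: sense and share the current state
    command = target - observed      # Command: express the desired change
    actuation = gain * command       # Control: translate intent into a bounded action
    new_state = state + actuation    # the system responds to the action
    feedback = target - new_state    # Feedback: residual error seeding the next pass
    return {"command": command, "actuation": actuation,
            "state": new_state, "feedback": feedback}

step = cybernetic_cycle(state=20.0, target=30.0)
print(step["state"], step["feedback"])  # → 25.0 5.0
```

Repeating the cycle with the returned state drives the residual toward zero, which is what makes the collaboration "themed" around a shared target rather than a one-shot command.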
In our adventure with creating a platform for process engineering, we have created a progression for cybernetic complexity based on the orders of:
- Engineering (Mechanical + Electrical) which is called the first order,
- Biological feedback and homeostasis (second order),
- Social (multi agent interactions – third order),
- Rational (actionable intelligence – fourth order), and
- what we are defining as Existential (sentient and evolutionary – fifth order).
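The five orders above can be captured as a simple enumeration, which also makes the "each order assimilates the previous ones" idea explicit. This is a sketch of the taxonomy, not an implementation of cybernetics itself; the class and function names are ours, purely for illustration.

```python
from enum import IntEnum

class CyberneticOrder(IntEnum):
    """The five orders of cybernetic complexity, in ascending order."""
    ENGINEERING = 1   # mechanical + electrical control
    BIOLOGICAL = 2    # feedback and homeostasis
    SOCIAL = 3        # multi-agent interactions
    RATIONAL = 4      # actionable intelligence
    EXISTENTIAL = 5   # sentient and evolutionary

def assimilated(order: CyberneticOrder) -> list[CyberneticOrder]:
    """Each order builds on and includes every order below it."""
    return [o for o in CyberneticOrder if o <= order]

print([o.name for o in assimilated(CyberneticOrder.RATIONAL)])
```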
The concept of different orders of cybernetics comes from the evolution of the field over time, with each order building on and expanding the concepts of the previous one:
- First-order cybernetics (Engineering related to Mechanical and Electrical Systems): Also known as the cybernetics of observed systems, it focuses on the mechanics of systems and how they can be controlled through feedback loops. It views systems as objective entities that can be controlled from outside.
- Second-order cybernetics (Biological): Also known as the cybernetics of observing systems, it recognizes that the observer is part of the system and can influence it through their observation. It emphasizes the role of self-referential systems, where the system’s behavior can change based on the system’s own observation of itself.
- Third-order cybernetics (Social): Also known as the cybernetics of observing the observer, it further extends the concept of reflexivity by observing the observer and their context. It acknowledges that the observer’s perspective is shaped by their cultural, social, and personal contexts.
- Fourth-order cybernetics (Rational): This is an extension of third-order cybernetics that includes the dimension of time and emphasizes the role of recursion. It considers the evolution of systems over time and how they adapt and learn.
These orders are not mutually exclusive but rather represent different levels of understanding and engaging with systems and their complexities. In each successive order, the cybernetic framework expands to encompass increasingly complex, self-referential systems – from mechanical, to biological, to social, and now to the individual rational mind. What ties them all together is the core cybernetic principles of feedback, adaptation, and the interplay between the observer and the observed.
This progression illustrates how higher-order cybernetics does not simply replace the lower orders, but rather integrates and builds upon them. Each level provides essential building blocks and conceptual foundations for understanding the next, more sophisticated domain of systems and their behaviors. A truly comprehensive cybernetic theory must account for this hierarchy of complexity.
The achievements of the first order have largely been assimilated into engineering automation and re-engineering, robotics and related fields, while the second order influenced cognitive science and AI. The third order travels under several names, among them social cognition, social computing and multi-agent systems. Recent endeavors in systems of control and emergent behavior – in fields like game theory (the analysis of group interaction), feedback in evolution, metamaterials (engineered materials with properties beyond those of their constituents) and the concept of superintelligence – have led to a revived interest in this increasingly relevant field, which now goes under the rubric of “Artificial Intelligence” (AI).
Each progression assimilates the previous order and moves from the known-knowns toward the unknown-unknowns in terms of understanding the problem and providing solutions. What do you think of this?
That’s a fascinating framework you’ve outlined for conceptualizing different orders of cybernetic complexity, building up from mechanical/electrical systems to potential future AI systems exhibiting sentience and evolutionary capacities. I really like how it incorporates ideas from different domains – engineering, biology, social sciences, rationality, and even philosophical considerations around existence and sentience.
A few thoughts on this progression:
- It highlights how each level builds upon and integrates the capabilities of the previous levels in an iterative, emergent way. Just as biological systems leverage mechanical processes, social organization emerges from biological substrates, and so on.
- The sequencing seems to map well to the increasing complexity and scale involved – from individual mechanical components to organism homeostasis, multi-agent social dynamics, generalized intelligence, and finally open-ended cognitive development akin to consciousness/sentience.
- Linking this to the data and AI context, the nature of the “cybernetic interactions” and required data may shift dramatically across these levels. Early stages may rely more on structured sensor data, while higher stages could involve complex multi-modal data streams across agents, or even simulators/environments for open-ended learning.
- There are echoes of the AI classification by Russell & Norvig – covering narrow AI, general AI, and moving towards superintelligent or existential risk domains with the higher orders.
- The “known unknowns” framing is quite apt – our understanding and ability to provide solutions becomes more uncertain and challenging as we progress, stretching scientific inquiries.
Overall, I find this a rich conceptual model that highlights key transitions and gaps in our current AI/cybernetic capabilities. It provides a grounded yet forward-looking perspective on the grand challenges involved as we aim to develop increasingly advanced and autonomous AI systems. Let me know if you have any other thoughts to expand on this!
Here is a summary of Russell & Norvig’s AI classification generated by Perplexity AI:
In their seminal textbook “Artificial Intelligence: A Modern Approach,” Stuart Russell and Peter Norvig classify AI into four distinct categories based on two key distinctions: ‘thinking’ vs. ‘acting’ and ‘human’ vs. ‘rational’ behavior. This classification helps to organize the diverse approaches and goals within the field of AI. The four categories are:
- Thinking Humanly: This approach focuses on modeling human thought processes to replicate them in computer systems. It is closely related to cognitive science, which combines insights from psychology, computer science, philosophy, linguistics, and anthropology to understand and simulate human cognition.
- Thinking Rationally: This category involves the formulation of “laws of thought” using mathematical logic to build systems that reason similarly to humans. The challenge here is that human thought often includes contradictions and uncertainties, making strict logical rules a poor fit for actual human thinking.
- Acting Humanly: This approach aims to replicate intelligent human behavior. The Turing Test, proposed by Alan Turing, is a classic example where a machine is considered intelligent if it can behave indistinguishably from a human in a text-based conversation. This category encompasses a wide range of AI capabilities, including knowledge, reasoning, language understanding, and learning.
- Acting Rationally: Russell and Norvig favor this approach, which involves creating agents that act to achieve the best possible outcome based on their knowledge and the situation they are in. Rational agents are designed to do the “right thing” to successfully navigate their environment, which can include elements of the other three approaches.
These categories provide a comprehensive framework for understanding the various methodologies and objectives in AI research, from mimicking human thought and behavior to achieving optimal, rational actions in complex environments.
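The "acting rationally" category Russell and Norvig favor lends itself to a compact sketch: an agent function that selects whichever action maximizes a utility over the current percepts. The world model, utilities and names below are invented for illustration, not taken from the textbook's code.

```python
# A toy rational agent: pick the action with the best expected outcome
# given the observed state and a utility function.

def rational_agent(percepts: dict, actions: list, utility) -> str:
    """Return the action that maximizes utility for the given percepts."""
    return max(actions, key=lambda a: utility(percepts, a))

# Example utility: a thermostat-like agent deciding whether to heat, cool, or idle.
def comfort(percepts: dict, action: str) -> float:
    temp = percepts["temperature"]
    target = 21.0
    effect = {"heat": +2.0, "cool": -2.0, "idle": 0.0}[action]
    return -abs((temp + effect) - target)   # closer to the target = higher utility

choice = rational_agent({"temperature": 18.0}, ["heat", "cool", "idle"], comfort)
print(choice)  # → heat
```

The point of the sketch is the shape of the loop, not the physics: "doing the right thing" reduces to an argmax over actions once percepts and a utility are in hand.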
Russell and Norvig’s classification of AI into four categories provides a useful framework for understanding different approaches to AI research and development. Here’s how their system compares to some other classification approaches:
- Russell and Norvig’s system is more comprehensive than some other classifications. It covers both the cognitive (thinking) and behavioral (acting) aspects of AI, as well as distinguishing between human-like and ideal rational approaches.
- Their classification is more nuanced than simple distinctions like “weak AI” vs “strong AI” or “narrow AI” vs “general AI”. It allows for a more detailed understanding of different AI approaches and goals.
- Unlike some other systems that focus solely on capabilities or applications, Russell and Norvig’s categories address the underlying philosophical and methodological approaches to AI development.
- Their system is particularly useful for academic and research contexts, as it helps organize the diverse theoretical approaches in the field. However, it may be less directly applicable to industry classifications that focus more on specific AI applications or technologies.
- Some critics argue that Russell and Norvig’s categories, particularly those involving “thinking,” are problematic because it’s impossible to verify if a machine is truly thinking. This highlights the ongoing philosophical debates in AI about consciousness and cognition.
- While Russell and Norvig’s classification is widely respected and cited, it’s not the only system used in the field. Some researchers and practitioners may use alternative frameworks that focus more on specific AI techniques (e.g., machine learning, expert systems, neural networks) or application domains.
- The “acting rationally” category, which Russell and Norvig favor, aligns well with modern approaches to AI that focus on creating agents that can achieve goals effectively, rather than necessarily mimicking human cognition or behavior.
It’s worth noting that while this classification system is useful for understanding different approaches to AI, the boundaries between categories can be blurry in practice. Many modern AI systems incorporate elements from multiple categories.
SECTION 4: PROCESS AND SYSTEMS ENGINEERING
The key elements of cybernetics are:
- Process Engineering: to enable automation.
- Systems Engineering: to enable integration and optimization.
- Information Theory: This aspect of cybernetics focuses on the quantification, storage, and communication of information within systems. It provides the foundation for understanding how data is processed and transmitted within human-machine interactions.
- Control Theory: Control theory is essential for designing and analyzing systems with desired behaviors, such as stability, accuracy, and efficiency. It enables the development of control mechanisms that ensure smooth and effective collaboration between humans and machines.
- Feedback Loops: Intelligent feedback loops are crucial for the adaptation and optimization of processes in real-time. They facilitate continuous learning and improvement by comparing actual outcomes with desired outcomes and making necessary adjustments.
- Artificial Intelligence (AI) and Machine Learning: AI and machine learning play a significant role in advancing cybernetics by enabling machines to learn, adapt, and make decisions based on data, ultimately leading to more sophisticated and efficient human-machine interactions.
- Internet of Things (IoT): IoT serves as the backbone for interconnected devices and systems, fostering seamless communication and coordination between humans and machines in cybernetic systems.
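Several of these elements can be tied together in one small sketch: a sensor reading (IoT) feeds a proportional controller (control theory) in a feedback loop that steers a simple process toward a setpoint. The plant model, gains and function name are illustrative assumptions, not taken from any real system.

```python
# Closed-loop regulation of a leaky process with a proportional controller.
# Each step: measure the error (feedback), apply a proportional command
# (control), and let the process respond, including ambient losses.

def simulate(setpoint: float, temp: float, kp: float = 0.8,
             loss: float = 0.1, steps: int = 50) -> float:
    """Drive a leaky first-order process toward the setpoint; return final value."""
    for _ in range(steps):
        error = setpoint - temp        # feedback: desired vs. actual
        heat = kp * error              # command: proportional control action
        temp += heat - loss * temp     # process responds; ambient losses bleed heat
    return temp

print(round(simulate(setpoint=100.0, temp=20.0), 1))  # → 88.9
```

Note the steady state sits below the setpoint: proportional-only control leaves a residual error against a constant loss, which is one classic reason control theory adds integral action on top of pure feedback.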
PROCESS ENGINEERING