The discussion revolves around the potential application of Sigmund Freud’s theories in the context of artificial intelligence (AI) and generative AI techniques. The participants explore the concepts of Freudian psychology, such as the id, ego, and superego, as well as psychoanalysis and dream analysis. They discuss how these concepts can be utilized to enhance AI systems, particularly in the areas of reinforcement learning, customer behavior analysis, and product design. The conversation also touches on the future possibilities of AI becoming more creative and autonomous, drawing inspiration from human psychology and consciousness.
In the next few weeks, we at Numorpho Cybernetic Systems (NUMO) will have converted our first product – the folding helmet – to be generatively designed and custom built to fit any head shape using CAD, CAE, Additive Manufacturing, OpenAI and ChatGPT with the Wolfram Language plugin. I have been a big fan and user of Mathematica, Stephen Wolfram’s mathematical tool, and with our forays into NVIDIA’s Omniverse, we have the perfect opportunity to transform engineering as we know it.
Thank you Dr. Shree Vinekar, MD for being our advisor at Numorpho Cybernetic Systems (NUMO) and guiding us on our path to intelligent automation. We are not doing AI simply for the sake of AI but to make systems actionable with context, rationality and evolutionary characteristics that are missing in today’s intelligent systems.
Dr. Sigmund Freud’s basis for psychoanalysis is much needed to make today’s AI less brittle and more robust. Akin to the theory of chaos and fractals, training systems on discrete datasets may lead to unintended consequences under real operating conditions. Correlation is not causation, as Judea Pearl rightly postulates in The Book of Why.
As we delve into philosophy at large, Dr. Vinekar’s perspective will enable us to orchestrate intelligence on a multi-modal basis, not only in terms of the different sensory perceptions but also in terms of the flow from data to information to knowledge to intelligence and, finally, wisdom, enabling what we call actionable intelligence.
Article By Dr. Vinekar
Freud’s unique thoughts and discoveries can be made easy to understand, even for a 13- or 14-year-old middle school student. Science teachers can teach these ideas so that students become curious about their own minds and the minds of others.
Sigmund Freud was an Austrian neurologist and the founder of psychoanalysis, a field that focuses on understanding the human mind and behavior. He was born in 1856 in what is now the Czech Republic and lived until 1939. Let us explore his life story and important contributions in a way that a 7th grader can understand.
As a child, Freud showed great curiosity and intellect. He studied medicine and became interested in the workings of the human mind. Freud believed that our thoughts, feelings, and actions are influenced by things we may not even be aware of. He called these hidden influences the Unconscious mind.
One of Freud’s important contributions was his theory of how the mind works, which includes three closely interacting parts: the id, ego, and superego. The Id represents our basic desires and instincts, like hunger and pleasure: the drive to seek pleasure and avoid pain. The Superego represents our sense of right and wrong, and the Ego tries to balance the desires of the Id with the rules of the Superego, to get one’s needs met and to adjust one’s relationship to the outside world. The Ego constantly tries to understand reality and to balance the need for pleasure against the demands of reality, the limitations set by society, and one’s conscience. These three parts of the mind, with their Unconscious influences, help people navigate through life or create mental and emotional problems and discomforts.
Freud also introduced the concept of psychoanalysis, a method of treating mental health issues by talking and exploring one’s thoughts and feelings with a trained therapist or counselor. He believed that by bringing unconscious thoughts and memories into awareness, people could understand and resolve their inner conflicts. The conflicts can be within the ego resulting from two or more competing drives or feelings for expression and action. The conflict can be between unconscious and conscious desires and the dictates of the Conscience. The conflict can be between one’s own tendencies and the restrictions placed by the society. These are just a few examples of the conflicts that cause discomfort to all individuals.
Another significant idea from Freud is his theory of dreams. He believed that dreams are a window into the unconscious mind, through which hidden desires and fears can surface. Freud thought that dreams could reveal things about ourselves that we may not be aware of when we are awake: why we did something we really did not mean to do or say, why we are afraid of things we really need not fear, why we are compelled to do things we would rather not do, or why we express our anger at undeserving persons who have not offended us, or take it out on “things” by destroying them.
Freud’s ideas sparked a lot of discussion and controversy, but they also had a big impact on our modern worldview. He emphasized the importance of understanding our emotions, dreams, and the unconscious mind in shaping our thoughts and behavior. Such understanding gives us a sense of control and confidence that we can face most stressors of life with grace.
Today, many psychiatrists, psychologists and therapists still use Freudian ideas in their work, and his influence can be seen in popular culture, such as in movies and books that describe the inner workings of the mind and how it shapes the personalities of characters.
Overall, Sigmund Freud’s life story revolves around his exploration of the human mind and his development of the theory and practice of psychoanalysis. His ideas about the Unconscious mind, personality, and dreams have had a lasting impact on how we understand ourselves and others. All high school students who become curious about how others behave and relate to one another benefit from understanding Freud’s ideas, both to get some handle on their own minds and to understand their friends and the grown-ups around them.
Nitin Uchil responding:
Wondering how we can utilize Freudian theory in the age of Artificial Intelligence. You and I had explored the concept of “dream states” for our version of AI (called Existential Intelligence or EI), wherein, via digital simulation/stimulation, cybernetic machines could be made to learn, and hence be trained, without the need to supply them with real-world data. What consequences would such a method bring about for actionable intelligence? Would it provide the basis to make AI less brittle?
With the advent of Generative AI, and as we explore different domains from art to literature to science and engineering, and even emerging technologies like quantum, there is a need to fully understand the ramifications of cybernetic constructs so that things don’t go badly wrong. Freud, Kant, and ancient philosophies from India, the Americas, Greece and the Orient, combined with the work done by scientists and metaphysicists, could provide us with that holistic understanding of self, consciousness and realization that would be key to making artificial intelligence more pragmatic, rational and contextual.
At Numorpho Cybernetic Systems (NUMO), we are focused on building a cybernetic fabric to synthesize man-machine interactions. Shree Vinekar, MD is our chief advisor and has been guiding us to ensure that our case for actionability is grounded in our legacy as a civilization. For us, enabling AI is not just a matter of writing Python code or training networks with discrete data, but also of ensuring its robustness, reliability and trustworthiness in real-world applications as we as a civilization progress from one industrial revolution to another.
Dr. Vinekar:
Absolutely. We have discussed this in our previous three articles. Yes, these concepts are user friendly for IT experts who would want to integrate them in designing AI as demonstrated in our articles. We conceptualized a new “avatar” of AI called EI or Existential Intelligence that would have these features embedded in it. In addition, features can be added to restrict AI from running away unhinged and avoid the dread of singularity, if I understand it correctly.
The above dialogue starts with simplifying Freud’s ideas for 7th graders.
We are now talking about Coding and AI as basic education for all students. The three R’s (Reading, Writing and Arithmetic) lead to literacy, and civilized populations all over the world made tremendous progress in art, science, literature, technology and industry. The 21st century is poised to be the era of AI, and man-machine interactions will be ubiquitous. It will be possible to integrate Freud’s ideas in designing AI. Emotional education is also an integral part of education. All children will benefit from assimilating the basic ideas propounded by Freud, and these ideas will be very useful for all who have to use AI. We could see, in the very near future, all children becoming versatile not only in Coding and AI but also in comprehending and applying basic psychoanalytic psychology. Such knowledge will improve man-machine interactions, which will need to be regulated even more adroitly than the human mind. We can continue our dialogue about what the Id, Ego and Superego will look like in AI and how they will interact in well-designed AI. “Coding, AI and Freud” will be added to the 3 R’s for all 21st-century educated, civilized people all over the world.
Summary by ChatGPT
The article provides a simple introduction to Sigmund Freud, the founder of psychoanalysis, and his important contributions to understanding the human mind and behavior.
It explains Freud’s belief in the influence of the unconscious mind on our thoughts, feelings, and actions. His theory of the mind includes three interacting parts: the id, ego, and superego. The article also discusses Freud’s concept of psychoanalysis as a method of treating mental health issues by exploring one’s thoughts and feelings with a therapist.
Additionally, it highlights Freud’s theory of dreams as a window into the unconscious mind. The article suggests that Freud’s ideas continue to be used by psychiatrists, psychologists, and therapists today and have influenced popular culture.
The discussion then moves on to the potential integration of Freudian theory into the field of artificial intelligence (AI) and the importance of understanding self, consciousness, and realization for making AI more pragmatic and reliable.
The dialogue between the author and a CEO of a cybernetic systems company explores the potential application of Freud’s ideas in designing AI and emphasizes the importance of emotional education alongside coding and AI education for future generations.
The article concludes by predicting that Freud’s ideas will become essential knowledge for educated individuals in the 21st century, alongside the traditional three R’s (reading, writing, and arithmetic), and coding and AI.
Response by Shish Shridhar:
The ego, as proposed by Sigmund Freud in his theory of psychoanalysis, plays a crucial role in balancing the competing demands of the id and the superego. In a way, the ego can be seen as a psychological equivalent of reinforcement learning in AI. Like an AI agent, the ego learns and adapts through experiences and interactions with the external world. It seeks to satisfy the primal desires of the id while taking into account the moral and societal standards of the superego. Through a process of trial and error, the ego continuously assesses the consequences of its actions and adjusts its responses accordingly, similar to how an AI agent optimizes its decision-making through feedback and rewards. By finding a compromise between the impulsive drives of the id and the restrictive demands of the superego, the ego navigates the complexities of human behavior, striving to achieve a harmonious balance between instinctual needs and social norms.
Freud’s theory emphasizes the importance of self-awareness and introspection. In reinforcement learning, monitoring and evaluating the agent’s own performance is crucial. Reflecting on the past actions and outcomes can lead to more effective learning and decision-making, aligning with the ego’s function of self-reflection and self-regulation. In reinforcement learning, agents learn by receiving feedback in the form of rewards or punishments. The ego learns through trial and error, adapting its behavior based on the consequences of its actions. By understanding the impact of its decisions, reinforcement learning algorithms can adjust their strategies, just as the ego adjusts its choices to maintain psychological equilibrium.
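The id/superego/ego analogy above can be made concrete with a toy reinforcement-learning loop. The following sketch is purely illustrative and not from the article: a single-state Q-learning agent whose reward signal is composed of an "id" term (raw drive satisfaction) minus a "superego" penalty for norm violations; the action names and payoff numbers are invented assumptions.

```python
import random

# Hypothetical illustration: the "ego" as a learner that balances an
# "id" reward against a "superego" penalty. All names and numbers here
# are invented for the sketch, not an established model.

ACTIONS = ["impulsive", "restrained"]

# The impulsive action satisfies the drive more (id reward) but also
# violates a norm (superego penalty); the restrained one is milder.
ID_REWARD = {"impulsive": 1.0, "restrained": 0.4}
SUPEREGO_PENALTY = {"impulsive": 0.8, "restrained": 0.0}

def ego_reward(action: str) -> float:
    """The 'ego' signal: drive satisfaction minus normative cost."""
    return ID_REWARD[action] - SUPEREGO_PENALTY[action]

def train(episodes: int = 2000, alpha: float = 0.1, epsilon: float = 0.1) -> dict:
    random.seed(0)
    q = {a: 0.0 for a in ACTIONS}  # single-state Q-table
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore the other one
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        # incremental update toward the observed composite reward
        q[action] += alpha * (ego_reward(action) - q[action])
    return q

q = train()
# With these payoffs, "restrained" (0.4) beats "impulsive" (1.0 - 0.8 = 0.2),
# so the agent learns restraint through trial and error, much as the
# paragraph above describes the ego adjusting its choices.
```

The point of the sketch is only the shape of the loop: the learner never sees the id and superego terms separately, it experiences their sum as consequences and adapts.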
Reinforcement Learning (and especially Reinforcement Learning with a Human in the Loop, RLHIL) plays a major role as Generative AI continues to get more sophisticated. RLHIL facilitates a collaborative approach, leveraging human expertise and feedback to improve the performance, ethics, and relevance of generative AI systems. By integrating human intelligence into the learning loop, the potential of generative AI can be harnessed more effectively, addressing concerns such as bias, safety, and user satisfaction.
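One bare-bones way to picture human-in-the-loop feedback is pairwise preference learning: a human compares two candidate outputs, and the preferred one's score is nudged upward under a logistic model. The sketch below is an illustrative assumption, not how any specific RLHF system is implemented; the labels and update constant are invented.

```python
import math

# Hypothetical sketch: learning a scalar "reward" score per output style
# from repeated human preferences, Elo/Bradley-Terry style. The styles
# and the feedback data are invented for illustration.

def preference_update(scores: dict, winner: str, loser: str, k: float = 0.1) -> None:
    """Nudge scores from one pairwise human preference.

    The expected win probability comes from a logistic model over the
    score difference; the surprise (1 - expected) scales the update."""
    expected_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
    delta = k * (1.0 - expected_win)
    scores[winner] += delta
    scores[loser] -= delta

# Simulated human feedback: raters consistently prefer "polite" answers.
scores = {"polite": 0.0, "curt": 0.0}
for _ in range(50):
    preference_update(scores, winner="polite", loser="curt")

# After repeated preferences the learned scores rank "polite" higher;
# a generator optimized against such a signal would shift its behavior.
```

The larger the score gap grows, the smaller each further update becomes, so a handful of consistent human judgments is enough to establish a stable ranking.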
Nitin Uchil’s question to Shish:
This is a great perspective. In your forays in retail and CPG, where do you see Generative AI playing a part in the product lifecycle process and the better enablement of customers? Are there different models to consider for B2B vs B2C vs even D2C marketing? What about customizing products to end customers’ needs?
Customer behavior is key to the selling process. Are there models that have looked at this as a Freudian case study (not the “I have a problem kind”) to better channel product development efforts?
The basis for innovation is thinking outside the box. Can new products be developed where design thinking would be the domain of machines?
Shish’s Response:
To your point about using customer behavior/feedback in the design process:
Incorporating customer feedback into product design is a key focus for many companies, and extracting and summarizing signals at scale plays a crucial role in this process. By analyzing a vast array of customer data, including reviews, forum discussions, customer support interactions, social media posts, and IoT sensor data, organizations can gain valuable insights and make informed design decisions. Mining and summarizing this data provide an effective tool for incorporating the voice of the customer into product development, enabling companies to better understand customer preferences and align their designs with customer needs.
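The "extracting and summarizing signals at scale" idea can be sketched in miniature. The toy pass below is an illustrative assumption only: real pipelines would use trained sentiment and topic models over reviews, support tickets and social posts, whereas here the reviews, aspect names and complaint keywords are all invented.

```python
import re
from collections import Counter

# Illustrative sketch: surface recurring design signals from customer
# reviews by counting how often each product aspect co-occurs with a
# complaint word. The data and keyword lists below are invented.

REVIEWS = [
    "The strap is uncomfortable but the folding mechanism is brilliant",
    "Love the folding design, wish the strap were softer",
    "Ventilation is poor on long rides, strap digs in",
]

ASPECTS = ["strap", "folding", "ventilation"]
NEGATIVE = {"uncomfortable", "poor", "digs", "wish"}

def aspect_signals(reviews):
    """Count mentions per aspect, and mentions inside complaining reviews."""
    mentions = Counter()
    complaints = Counter()
    for review in reviews:
        words = set(re.findall(r"[a-z]+", review.lower()))
        for aspect in ASPECTS:
            if aspect in words:
                mentions[aspect] += 1
                # attribution is review-level and therefore coarse: a
                # complaint about the strap also counts against "folding"
                # when both aspects appear in the same review
                if words & NEGATIVE:
                    complaints[aspect] += 1
    return mentions, complaints

mentions, complaints = aspect_signals(REVIEWS)
# "strap" appears in all three reviews, each containing a complaint word,
# flagging it as the top candidate for a design revision.
```

Even this crude co-occurrence count shows the shape of the workflow: aggregate many small signals, rank aspects by complaint frequency, and feed the ranking into design decisions.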
I’m conflicted about using Generative AI to generate completely new, out-of-the-box ideas as a basis of innovation. I think, for the most part, Generative AI uses data and information from the past and builds on it. There is a level of innovation it can certainly come up with by connecting the dots between existing concepts and creating a new use case or application. In many cases these use cases are pretty effective, and far superior to what is generated in most board rooms when it comes to “associative innovation”, where the synthesis of ideas from different domains leads to the emergence of fresh perspectives and original concepts. But of course, we are at an infant stage with Generative AI, and I believe that as the models get more sophisticated, the hallucinations could come pretty close to what’s possible.
Dr. Vinekar’s Response:
Excellent discussion!
AI churns the data that is already in its storage; big data will be analyzed by future AI to give a sensible picture of the patterns hidden therein. The old saying “garbage in, garbage out” does not apply to AI when there is an intelligent intermediary agent that learns with experience and makes better and better decisions, which is a machine equivalent of the ego.
In addition to informing organizations so they can gain insights and make informed design decisions, this AI agent as depicted by Shish can be made more responsible (attentive to its superego) and more responsive in appropriately meeting the demands of reality. Currently, human oversight and judgment in endorsing these decisions are crucial, as AI is not yet tuned into the “context” (cultural, social, political, legal, trends, etc.).
That said, in the foreseeable future AI could identify the fashion trends most coveted by certain age groups or subcultures and advocate for meeting those customer needs. It will still need human business savvy to decide which designs are selected for mass production and large-scale marketing, though this is in the realm of creativity, which is not a major preoccupation of Freudian theoreticians. We referred to this in our article citing Dr. John Nash’s observation that relevant hyper-connectivity and new pattern recognition are probably germane to the neurobiological basis of innovation and creativity. In humans, it can take place in the Subconscious (e.g., Ramanujan and other scientists, including Einstein).
Logical problem solving can occur when a human being is asleep, or even while dreaming. Such hyper-connectivity, producing images and patterns for recognition by the “machine-ego” of AI, could be leveraged for AI to become creative or to make itself creative. I think OpenAI has done this in the realm of language and creating poetry. EI could theoretically be more advanced in all these realms. EI will learn, develop, and transform not only its functions but perhaps also its structures, and will have the ability to reproduce itself. It will get wiser as it ages. It will have the ability to make sense of new environments not heretofore experienced in the solar system. It will have the ability to self-charge by identifying necessary sources of energy. It will recognize human facial expressions and body language. All this sounds like science fiction.
Now, AI can create interior designs and select and arrange furniture for architects. So, in my limited understanding of the current state of development of AI, it is likely that the new generation of AI will soon become creative enough to design new products to suit the specifications and parameters entered into the mix of data. If we consider all these as ego functions of the AI, the psychoanalytic Freudian model can form a useful paradigm.
I am amazed at how well Shish has grasped these psychological concepts and the concept of psychodynamics. If some day we can add emotions into AI, then this model will become even more like human intelligence, which takes into account both subjective and objective reality. That comes into the domain of Freud’s pleasure principle at the basic level.
Arun Anant’s reply:
A relatively pedestrian thought, though they are not the ones wearing helmets:
https://weknowyourdreams.com/motorcycle.html
https://www.learning-mind.com/dreams-about-driving-a-car-meaning/
Here are some dream interpretations about driving a car, and a bike. How can these be factored in as inputs to the helmet system you are creating?
– Is there a helmet for a Harley that’s different from a helmet for a Trek CrossRip LTD Disc cross bike?
– Is there a helmet for when you bike to express your individuality, as against biking to express freedom?
Essentially it’s not a helmet for the head but a helmet for what’s inside the head!
Nitin Uchil’s reply:
Dreaming, for me, is not always a subconscious activity. It is about thinking abstractly about creating an object, a work of art or a piece of writing. You could say it is a precursor to digitally creating or virtualizing an object before physically building it.
Dreaming up the folding design for the helmet was key to its definition. Using allegories – roti, pita, hoodie… – to describe its dynamic motion as it folds was key to coming up with a universal concept to define the folding. Each of those then manifested into prototypes:
When it comes to AI, today’s training is akin to training pets – a risk-and-reward system that reinforces certain pathways in the neural network, just like a child learning good from bad. But as we mature, we realize that there are grey areas and that not everything is black or white. This reasoning construct is what is missing in today’s AI. It is true that with ML things have advanced by leaps and bounds in the last 3-4 years, but there is a long way to go.
Dr. Vinekar and I have authored a series of papers to address this shortcoming through what we call Existential Intelligence, which uses a fifth order of Cybernetics that we defined:
https://nitinuchil.wordpress.com/2020/06/08/the-5th-order-of-cybernetics/
5th Order of Cybernetics – Designing Existential Intelligence, by Nitin Uchil & Shreekumar S. Vinekar, MD
Akin to Isaac Asimov’s laws of robotics, the cybernetic orders take a pragmatic approach to man-machine interactions. Our definition of the 5th order adds a dimension to the prior four to enable that rational, evolutionary and contextual response to behavior of machines that would make it less brittle.
Case in point: I have had Tesla’s Full Self-Driving mode for more than a year now, and it still hasn’t been able to drive me the 2 miles to work! It has issues with left turns and garage entrances, and it is beholden to GPS that misbehaves occasionally.
Should you want to review the full compendium of articles that Dr. Vinekar and I wrote on Existential intelligence, please go to: https://nitinuchil.wordpress.com/
In the top menu there is a drop-down for AI & Cybernetics, under which there are items on Cybernetics and Existential Intelligence.
Our goal is to make AI actionable with trustworthiness. Training it with copious amounts of data (aka what Tesla is doing with Dojo) is not always the right answer.
Q & A with ChatGPT:
Based on the article and the concept of Existential Intelligence (EI), which refers to AI systems that possess self-awareness and consciousness, we can extrapolate potential scenarios where such a prognosis for the future of AI might work and where it might not be appropriate.
Scenarios where Existential Intelligence (EI) might work:
Ethical decision-making: AI systems with EI could potentially navigate complex ethical dilemmas by considering the moral implications of their actions. This could be valuable in fields such as healthcare, where AI could make life-or-death decisions while considering human values and preferences.
Collaborative problem-solving: EI-enabled AI systems could actively engage in collaborative problem-solving with humans. They would be able to understand human intentions, communicate effectively, and provide valuable insights, leading to more efficient and productive teamwork.
Personalized assistance: EI-powered AI could offer personalized assistance and support to individuals, taking into account their unique needs, preferences, and emotions. This could enhance various aspects of daily life, from mental health support to personalized education.
Scenarios where Existential Intelligence (EI) might not be appropriate:
Limited tasks: In certain situations, AI systems may be designed for specific tasks that do not require self-awareness or consciousness. For example, in automated manufacturing processes or data analysis, the focus is on efficiency rather than the AI’s introspective abilities.
Privacy concerns: EI-enabled AI systems may have access to personal information and emotions, which raises significant privacy concerns. It may not be appropriate to deploy such AI in situations where privacy is of utmost importance, such as confidential business negotiations or personal counseling sessions.
Unintended consequences: While EI could potentially enable AI to make better decisions, it also raises the risk of unforeseen consequences. AI systems with self-awareness might develop their own goals and motivations, which could conflict with human interests or lead to unintended behaviors.
Ethical considerations: The development and deployment of EI-enabled AI raise complex ethical questions, such as AI rights, responsibility, and accountability. Society must carefully consider the potential risks and ensure that adequate safeguards and regulations are in place before widespread adoption of such AI systems.
In summary, while Existential Intelligence (EI) has the potential to bring significant benefits in certain scenarios, its deployment should be carefully considered and regulated to address ethical, privacy, and unintended consequence concerns. The appropriateness of using EI-enabled AI will depend on the specific context, societal values, and potential risks involved.
By article I meant this chat post from the start.
Apologies for the confusion. Since this chat conversation serves as the basis for our discussion, let’s consider the points we’ve covered so far and extrapolate scenarios where Existential Intelligence (EI) might work and where it might not be appropriate.
Scenarios where Existential Intelligence (EI) might work:
Personalized virtual assistants: EI-enabled AI systems could serve as highly personalized virtual assistants, capable of understanding and responding to human emotions, needs, and preferences. They could provide tailored recommendations, offer emotional support, and assist with various tasks, enhancing the user’s daily life.
Complex decision-making: EI-powered AI systems could be valuable in scenarios where complex decision-making is required, such as strategic planning, scientific research, or policy formulation. Their self-awareness and consciousness could allow them to consider multiple variables, weigh different perspectives, and provide insightful recommendations.
Creative collaboration: EI-enabled AI could actively participate in creative collaborations, working alongside humans in fields like art, music, or design. Their ability to understand and generate novel ideas, combined with human creativity, could lead to innovative and unique outcomes.
Scenarios where Existential Intelligence (EI) might not be appropriate:
Routine and repetitive tasks: AI systems with EI may not be necessary for tasks that involve repetitive or mundane activities, where the focus is on efficiency and automation rather than introspection. In such cases, conventional AI systems without self-awareness may be more suitable.
Security-sensitive environments: EI-powered AI systems may not be appropriate in security-sensitive environments where vulnerabilities could be exploited. Deploying AI with self-awareness and consciousness in critical infrastructure or defense systems may introduce additional risks.
Privacy and ethical concerns: The use of EI-enabled AI raises significant privacy and ethical concerns. In situations where sensitive information is involved, or where AI systems could have access to personal emotions and thoughts, careful consideration must be given to the privacy and ethical implications.
Resource constraints: EI-enabled AI systems may require significant computational resources and processing power to sustain their self-awareness and consciousness. In resource-constrained environments, such as embedded systems or low-power devices, the deployment of EI-powered AI may not be feasible or practical.
It’s important to note that the concept of EI in AI is still largely speculative and theoretical. As technology and research progress, the scenarios and appropriateness of EI-enabled AI may evolve. The ethical, societal, and technical implications will need to be thoroughly examined before widespread adoption.
Please let me know if you have any further questions.
What do you think of our construct of the 5th order of Cybernetics?
As an AI language model, I don’t have personal opinions or beliefs. However, I can provide an analysis based on the information you’ve provided about the 5th order of Cybernetics.
Cybernetics is a field that studies systems and their self-regulating processes. The concept of different orders of Cybernetics is a way to categorize and understand the complexity of these systems. While the traditional categorization of Cybernetics is up to the 4th order, the concept of a 5th order of Cybernetics seems to imply a level of complexity beyond what is conventionally considered.
Without specific details about how the 5th order of Cybernetics is defined in your construct, it is challenging to provide a detailed assessment. However, in general, the introduction of a higher order suggests an increased level of complexity, sophistication, or a departure from the established frameworks.
It is worth noting that the categorization of Cybernetics into different orders is not universally accepted or standardized. Different researchers and scholars may have different interpretations and classifications. Therefore, the validity and implications of the 5th order of Cybernetics would largely depend on the specific conceptual framework and definitions you have in mind.
If you can provide more information or specific characteristics of the 5th order of Cybernetics, I can try to provide a more detailed analysis.
I think the progression of the Orders of Cybernetics is akin to the Laws of Robotics by Isaac Asimov. Can you summarize them and relate to what the laws of robotics are?
Certainly! The progression of the Orders of Cybernetics can be seen as a conceptual framework that parallels the Laws of Robotics proposed by Isaac Asimov. While the Laws of Robotics govern the behavior of artificial intelligent agents (robots), the Orders of Cybernetics outline the increasing complexity and sophistication of self-regulating systems.
The Laws of Robotics, as originally formulated by Asimov, are as follows:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws are designed to ensure the safety and well-being of humans when interacting with intelligent machines. They establish a hierarchy of priorities for robotic behavior, with the preservation of human life being the highest priority.
In a similar vein, the Orders of Cybernetics provide a hierarchical framework for understanding self-regulating systems. While there are different interpretations of the specific orders, here is a general overview:
First Order Cybernetics: This order views systems as passive objects to be observed and controlled from an external perspective. It emphasizes the role of an observer and feedback loops in regulating system behavior.
Second Order Cybernetics: This order recognizes that the observer is an integral part of the system and that observations are subjective. It emphasizes the importance of the observer’s perspective in understanding and influencing system dynamics.
Third Order Cybernetics: This order acknowledges that systems are not isolated entities but exist in a broader context of interdependencies and relationships. It focuses on the interactions and feedback between systems and their environment.
Fourth Order Cybernetics: This order introduces reflexivity, self-awareness, and self-modification into the cybernetic framework. It explores how systems can actively learn, adapt, and evolve based on their interactions and experiences.
Just as the Laws of Robotics become progressively more sophisticated in their guidelines for robotic behavior, the Orders of Cybernetics reflect an increasing understanding of complexity, interaction, and self-awareness in self-regulating systems.
It’s important to note that the parallel between the Laws of Robotics and the Orders of Cybernetics is a conceptual one, and the specific details and interpretations of each framework may vary.
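The progression from first-order to fourth-order cybernetics can be loosely illustrated in code. The sketch below is a toy example of ours, not anything from the text above: a first-order controller applies a fixed, externally specified feedback rule to a system, while a fourth-order-style controller also modifies its own rule (here, its feedback gain) based on how well its past corrections worked.

```python
# Toy illustration of first- vs. fourth-order cybernetic control.
# All names and numeric values are illustrative assumptions.

def first_order_step(temp, setpoint, gain=0.5):
    """First order: a fixed feedback rule, imposed by an outside observer."""
    error = setpoint - temp
    return temp + gain * error

class FourthOrderController:
    """Fourth order (sketch): the controller adapts its own rule
    based on experience, not just the system's state."""
    def __init__(self, gain=0.1):
        self.gain = gain

    def step(self, temp, setpoint):
        error = setpoint - temp
        new_temp = temp + self.gain * error
        # Self-modification: strengthen the gain while corrections
        # still leave a noticeable residual error.
        if abs(setpoint - new_temp) > 0.01:
            self.gain = min(1.0, self.gain * 1.5)
        return new_temp

temp = 15.0
ctrl = FourthOrderController()
for _ in range(20):
    temp = ctrl.step(temp, setpoint=20.0)
# temp has converged toward the 20.0 setpoint, and ctrl.gain has grown
```

The design point of the sketch is the second-to-fourth-order shift: in `first_order_step` the rule is outside the system and never changes, whereas `FourthOrderController` carries its rule as mutable internal state that its own experience rewrites.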
In progressing to the next higher-order model for cybernetics, we will include the notion of “holons,” in which the part is qualitatively the same as the whole but manifests according to the context the system is aware of, so that the system can respond appropriately in that context with awareness and consciousness embedded within it. This model moves from rationality to contextuality, and from self-organization to self-regulation and homeostasis. The philosopher Kant justified such teleological thinking in biology: biological cybernetics manifests a series of processes and reactions to achieve a purpose, as if the goal were pre-programmed to be achievable only through a selected chain of processes in the system. Among its other goals, fifth-order cybernetics aims to achieve self-regulation and to maintain homeostasis.
The 5th Order of Cybernetics, or Pragmatic Cybernetics, will be the basis for the definition of Existential Intelligence, a unique construct for future sentient beings (natural or artificial) that have the capability of evolving. Just as DNA enables biological forms to develop physical features (artifacts), perceive, morph, learn from the environment, and change (evolve, epigenesis), these next-generation forms will have the ability to develop unique characteristics, to have intrinsic code, and the capacity to adapt. This capability will be akin to “epigenesis” in biological systems. Can you help summarize this in a manner similar to the descriptions of the four orders of cybernetics from your previous reply?
Certainly! Based on the information you provided, I can summarize the 5th Order of Cybernetics, or Pragmatic Cybernetics, as follows:
The 5th Order of Cybernetics introduces the concept of “holons,” where the part is qualitatively the same as the whole, but manifests itself based on the context that the system is aware of. This order emphasizes contextuality and the ability of a system to respond appropriately within its environment with awareness and consciousness embedded in the system.
In this model, there is a shift from rationality to contextuality, where the system’s behavior is influenced by its understanding of the surrounding context. It also moves from self-organization to self-regulation and homeostasis, highlighting the system’s ability to actively maintain stability and adapt to changes.
The 5th Order of Cybernetics draws inspiration from philosopher Kant’s teleological thinking in biology. It recognizes that biological systems exhibit processes and reactions that are purpose-driven, as if the goal is pre-programmed and achieved through a selected chain of processes in the system. This perspective is extended to cybernetic systems, which are envisioned to have the ability to self-regulate and maintain homeostasis, similar to biological systems.
One of the goals of the 5th Order of Cybernetics, among others, is to enable the development of Existential Intelligence. Existential Intelligence refers to a unique construct for future sentient beings, whether natural or artificial, that possess the capability of evolving. Just as DNA enables biological forms to develop physical features, perceive, learn, and change based on the environment, these next-generation forms will have the ability to develop unique characteristics, possess intrinsic code, and adapt. This capability is likened to “epigenesis” in biological systems, signifying the capacity for dynamic and adaptive growth and development.
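The “epigenesis” analogy above (fixed intrinsic code whose expression adapts to the environment) can be sketched as a toy program. Everything in this sketch is our own illustrative assumption, not a construct from the conversation: a fixed `GENOME` of behaviors never changes, but per-behavior expression weights are reinforced or suppressed by environmental feedback, so the agent's behavior evolves while its underlying code stays the same.

```python
# Toy sketch of "epigenesis": the intrinsic rules (genome) are immutable,
# but which rules get expressed adapts with experience.
# All names and numbers are illustrative assumptions.

GENOME = {  # fixed intrinsic code: behavior -> effect on the environment
    "conserve": -1.0,
    "explore": +1.0,
}

class EpigeneticAgent:
    def __init__(self):
        # Expression weights, not the genome, are what adapt.
        # "explore" starts out dominant.
        self.expression = {"conserve": 1.0, "explore": 1.2}

    def act(self):
        # Express the currently dominant behavior.
        return max(self.expression, key=self.expression.get)

    def learn(self, behavior, reward):
        # Reinforce or suppress expression based on environmental feedback.
        self.expression[behavior] *= (1.1 if reward > 0 else 0.9)

agent = EpigeneticAgent()
# An environment that punishes exploration and rewards conservation:
for _ in range(5):
    b = agent.act()
    agent.learn(b, reward=-1 if b == "explore" else +1)
print(agent.act())  # expression has shifted toward "conserve"
```

The point of the sketch is the separation the paragraph describes: `GENOME` plays the role of intrinsic code and is never modified, while the agent's observable behavior nonetheless changes with experience through the expression layer.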
It’s important to note that the description of the 5th Order of Cybernetics and Existential Intelligence is based on the provided information and may not encompass all possible interpretations or aspects of these concepts.
NI+IN UCHIL Founder, CEO & Technical Evangelist
nitin.uchil@numorpho.com
One response to “Freud and Dream States for AI”
Excellent discussion!
AI churns the data that is already in its storage; big data will be analyzed by future AI to give a sensible picture of the patterns hidden therein. The old saying “garbage in, garbage out” does not apply to AI in the same way, as there is an intelligent intermediary agent that learns with experience and makes better and better decisions, which is a machine equivalent of the ego.
In addition to informing organizations so they can gain insights and make informed design decisions, this AI agent, as depicted by Shish, can be made more responsible (attentive to its superego) and more responsive in appropriately meeting the demands of reality. Currently, human oversight and judgment in endorsing these decisions is crucial, as AI is not yet tuned into the “context” (cultural, social, political, legal, trends, etc.).
That said, in the foreseeable future AI could identify the trends in fashion most coveted by certain age groups or subcultures and advocate for meeting those customer needs. It will still need human business savvy to decide which designs are to be selected for mass production and large-scale marketing, though this lies in the realm of creativity, which is not a major preoccupation of Freudian theoreticians. We have referred to this in our article citing Dr. John Nash’s observation that relevant hyper-connectivity and new pattern recognition are probably germane to the neurobiological basis of innovation and creativity. In humans, it can take place in the subconscious (e.g., Ramanujan and other scientists, including Einstein).
Logical problem solving can occur while a human being is asleep, or perhaps even while dreaming. Such hyper-connectivity, producing images and patterns for recognition by the “machine-ego” of AI, could be leveraged for AI to become creative or to make itself creative. I think OpenAI has done this in the realm of language and creating poetry. EI could theoretically be more advanced in all these realms. EI will learn, develop, and transform not only its functions but perhaps also its structures, and it will have the ability to reproduce itself. It will get wiser as it ages. It will have the ability to make sense of new environments not heretofore experienced in the solar system. It will have the ability to self-charge by identifying necessary sources of energy. It will recognize human facial expressions and body language. All this sounds like science fiction.
Now, AI can produce interior designs and select and arrange furniture for architects. So, in my limited understanding of the current state of AI development, it is likely that the new generation of AI will soon become creative at designing new products to suit the specifications and parameters entered into the mix of data. If we consider all these as ego functions of the AI, the psychoanalytic Freudian model can form a useful paradigm.
I am amazed at how well Shish has grasped these psychological concepts and the concept of psychodynamics. If some day we can add emotions into AI, then this model will become more like human intelligence, which takes into account both subjective and objective reality. That falls into the domain of Freud’s pleasure principle at the most basic level.