In a prior article, we discussed the complexity of Knowns and Unknowns with respect to the Johari window, ascertaining awareness, perception, knowledge, and wisdom in terms of behaviors (action) and cognition (understanding). We also detailed how data and information can be effectively managed using six schema constructs – Write, Read, Demand, Automate, Evolve, and Measure – that serve as the basis for modeling complexity and building effective solutions, as we progress toward ever more innovative products in our technological ascension as a civilization.
Much of our apprehension about AI is driven by ominous science-fiction movies. There is no reason why machines should have the same inhibitions as living beings. Besides, we are only scratching the surface of what intelligence really means.
“Data usage does not equal data mindset. Putting data on the map is a big journey of understanding the ramifications and consequences of its veracity and use especially in training AI networks” said Vanessa Eriksson, the Chief Data Officer at Zenseact. “Our AI Edge Labs is not just collecting responses but also looking at swarm learning and federated learning to understand how data can be correctly assessed for use”.
“I am AI” – a palindrome – is NVIDIA’s theme at its GTC summit.
At Numorpho Cybernetic Systems (NUMO), we plan a graded ascension toward utilizing intelligence through appropriate data usage. We will use a multi-modal basis (drawing on multiple streams of perception information) to coordinate a holistic approach to actionable intelligence using our TAU Codex Orchestrator. This will define the convergence of engineering and data science to enable autonomous operations.
Here is our basis for actionable intelligence, detailing how we assess complexity – the knowns and the unknowns – to build a multi-modal basis for understanding causal inferences and arrive at the appropriate behavioral output.
Lacan:
- The one you want it to be – Symbolic
- The one you think it is – Imaginary
- The one it actually is – Real
We posit that for true sentience, we need to move beyond the current definitions of Artificial Intelligence (AI) – supervised, unsupervised, and reinforcement learning – to a construct that evolves both physically and mentally, using current and newly defined cybernetic mechanisms as the basis. Herewith we begin the definition of a new intelligence paradigm – Existential Intelligence – that will be the basis for our pivot to the fifth Industrial Revolution (Industry 5.0) and beyond. Utilizing our construct called the 5th order of Cybernetics, we intend to build products and solutions that are pragmatic, contextual, and relational, reducing brittleness and imparting sentient qualities to them.
Existential Intelligence is our unique construct for future sentient beings (natural or artificial or hybrid) that have the capability of evolving. Just as DNA enables biological forms to develop physical features (“artifacts” in IT terms), perceive, morph and learn from the environment and change (evolve), these next-generation forms will have the ability to develop or evolve unique characteristics, to have intrinsic code and the capacity to adapt. It is possible that our growing scientific understanding of epigenetic processes of adaptation and change could be eventually implemented in the functioning of Existentially Intelligent Beings (EIBs).
The Kardashev scale, a method of measuring a civilization’s level of technological advancement based on the amount of energy a civilization is able to use, was proposed by Soviet astronomer Nikolai Kardashev in 1964. The scale has three designated categories:
- A Type I civilization—also called a planetary civilization—can use and store all of the energy available on its planet.
- A Type II civilization—also called a stellar civilization—can harness the total energy of its planet’s parent star – the most popular hypothetical concept being the Dyson sphere—a device which would encompass the entire star and transfer its energy to the planet(s).
- A Type III civilization—also called a galactic civilization—can control energy on the scale of its entire host galaxy.
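Carl Sagan later proposed a continuous interpolation of the Kardashev scale, K = (log10 P − 6) / 10, where P is a civilization’s power consumption in watts. A minimal sketch of this formula (the 2e13 W figure for current humanity is an approximation):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev scale.
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's current energy use is roughly 2e13 W,
# placing us at about Type 0.73 -- still short of Type I.
print(round(kardashev_rating(2e13), 2))   # 0.73
print(kardashev_rating(1e16))             # 1.0 (Type I)
print(kardashev_rating(1e26))             # 2.0 (Type II)
```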
The construct of EI will be formulated to be consistent with human progress through the Kardashev scale of sociocultural and technological evolution, from planetary to stellar and to galactic levels of civilization.
Existential Intelligence proposes that the final evolution from technological adolescence will enable us to progress through the Kardashev scale by shedding the fears and inhibitions stemming from our guarded evolution, and our fear of gods and superintelligence.
The TAU Codex TransformerTM for Actionable Intelligence enables multi-modal artificial intelligence, making it rational, pragmatic, contextual, and actionable. Herewith we introduce a new concept called Existential Intelligence, based on the 5th order of Cybernetics.
Research Paves the Way for Honey-Based Neuromorphic Computing | Tom’s Hardware (tomshardware.com)
Mercedes Applies Neuromorphic Computing in EV Concept Car – EETimes
We will detail this further as we explore the Lacanian philosophy of the Imaginary, the Symbolic, and the Real realms.
AI is as useful as it is ubiquitous, but AI systems are only as intelligent, rational, thoughtful, and unbiased as their creators. Safe, ethical, and effective AI will be vital for our future. This matters because AI systems increasingly make decisions that affect people’s lives, and we need to understand why they make the decisions they do. Explainable AI (XAI) begins to address these issues, making AI responsible, trusted, and ethical.
For the last fifty years, Artificial Intelligence (AI) has been one of the most fascinating fields in Computer Science. Unfortunately, there is a lack of understanding about what Artificial Intelligence really is. Partly this is because the field has been shrouded in mystery due to the absence of a rigorous mathematical theory – generally regarded by the AI community as an essential component for a proper scientific foundation. Also, partly because certain philosophical problems that are intimately related to AI, such as the nature of intelligence itself, and the relation between mind and body, remain unsolved.
In this section we will detail the different use cases that will form the basis for building the cybernetic ecosystem. All of the use cases are pertinent digital transformation projects that we have delivered for different clients and have re-blueprinted, using Industry 4.0 as the basis to reconstruct the details. Each Digital TwineTM blueprint has a unique ledger number that corresponds to its build sequence and captures its detailed composition and the interactions between systems, forming the basis for articulating the needs of the use case.
Over the last decade, the application and performance of Deep Learning has progressed at an astonishing rate. However, the current state of the field is that the neural network architectures are highly specialized to specific domains of application. An important question remains unanswered: Will a convergence between these domains facilitate a unified model capable of performing well across multiple domains? In keeping with the philosophy of Epicurus (sense) and Euclid (reason), we propose an orchestrator that combines sense with sensibility – multi-modal perceptions with an inference engine that we call the Tau Codex Orchestrator that we will describe in this section.
The TAU is a multi-modal neural network architecture that draws from the success of vision, language, and audio networks to compose solutions for problems spanning multiple domains, including industrial automation, sustainability, and infrastructure development. It will coordinate the convergence of multiple sensory perceptions and the appropriate data from different modalities into a single composition. The Orchestrator will consist of an encoder, aggregator, and decoder that interact with a computational cybernetic fabric composed of a multi-modal database. It will utilize the Microsoft platform and components – namely the Azure Cloud, Dynamics 365, and the Power Platform – along with OpenAI and the Nuance Communications stack to synthesize the solution.
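The encoder–aggregator–decoder pattern can be sketched in plain Python. This is a hypothetical illustration of the data flow only, not the TAU implementation: the modality names, toy features, and threshold rule are all assumptions for the example.

```python
from typing import Dict, List

def text_encoder(text: str) -> List[float]:
    # Toy stand-in for a language network: word count and vocabulary size.
    words = text.split()
    return [float(len(words)), float(len(set(words)))]

def sensor_encoder(readings: List[float]) -> List[float]:
    # Toy stand-in for a signal network: mean and peak of the readings.
    return [sum(readings) / len(readings), max(readings)]

def aggregate(features: Dict[str, List[float]]) -> List[float]:
    # Late fusion by concatenation into one joint representation.
    joint: List[float] = []
    for name in sorted(features):
        joint.extend(features[name])
    return joint

def decode(joint: List[float]) -> str:
    # Toy decoder: map the joint representation to an "action".
    return "escalate" if max(joint) > 10 else "proceed"

inputs = {"text": "pressure rising in line three", "sensor": [9.5, 11.2, 10.8]}
joint = aggregate({"text": text_encoder(inputs["text"]),
                   "sensor": sensor_encoder(inputs["sensor"])})
print(decode(joint))  # "escalate" -- the sensor peak (11.2) exceeds the threshold
```

Each encoder reduces its modality to a fixed-length feature vector, so the aggregator and decoder never need to know which kind of raw input produced it.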
It will also include Engineering and other domain-specific plug-ins for computer-aided simulation, additive manufacturing, GPS and mapping, geo-location, and findability to fulfill the different types of needs that arise in the process of solutioning. Spatial Computing (AR/VR/XR) will be included to provide for the future of the Metaverse, and the progression of IoT via Digital Threads and Digital Twins to visualize and interact with the virtual world, in conjunction with our ideation curriculum (Manthan), the Digital Twine reference architecture, and the integrations protocol (Tendril Connector). Emerging technologies like Distributed Ledger, Genetic Programming, and Quantum Computing will also be included in a themed progression in the future.
NVIDIA Modulus is a deep learning framework that blends the power of physics and partial differential equations (PDEs) with AI to build more robust models for better analysis.
There is a plethora of ways in which ML/NN models can be applied to physics-based systems, depending on the availability of observational data and the extent to which the underlying physics is understood. Based on these aspects, ML/NN-based methodologies can be broadly classified into forward (physics-driven), data-driven, and hybrid approaches that involve both physics and data assimilation.
NVIDIA Modulus aims to provide researchers and industry specialists with tools that accelerate the development of such models for their scientific discipline. Experienced users can start by exploring the Modulus APIs and building models, while beginners can use its User Guide as a portal to the possibilities of AI in scientific computation; it includes several examples to jumpstart the development of AI-driven models.
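To make the physics-driven/data-driven/hybrid distinction concrete, here is a small NumPy sketch (not Modulus itself) that fits a polynomial surrogate to the ODE u′ = −u with u(0) = 1, blending a physics residual at collocation points with a handful of noisy observations in a single least-squares system. The ODE, polynomial degree, and noise level are assumptions chosen for illustration.

```python
import numpy as np

# Hybrid physics + data fit for u'(x) = -u(x), u(0) = 1 on [0, 1].
# Exact solution: u(x) = exp(-x). We fit a degree-4 polynomial surrogate.
deg = 4
xc = np.linspace(0.0, 1.0, 20)            # collocation points (physics residual)
xd = np.array([0.2, 0.5, 0.8])            # observation points (data)
rng = np.random.default_rng(0)
yd = np.exp(-xd) + 0.01 * rng.standard_normal(xd.size)  # noisy "measurements"

def basis(x):
    """Vandermonde rows: u(x) = sum_k c_k x^k."""
    return np.vander(x, deg + 1, increasing=True)

def dbasis(x):
    """Rows for u'(x) = sum_k k c_k x^(k-1)."""
    cols = [k * x ** (k - 1) if k > 0 else np.zeros_like(x)
            for k in range(deg + 1)]
    return np.stack(cols, axis=1)

# Stack the physics residual, the data misfit, and the initial condition.
A = np.vstack([dbasis(xc) + basis(xc),     # physics: u'(xc) + u(xc) = 0
               basis(xd),                  # data:    u(xd) = yd
               basis(np.array([0.0]))])    # IC:      u(0) = 1
b = np.concatenate([np.zeros(xc.size), yd, [1.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u1 = (basis(np.array([1.0])) @ c)[0]
print(u1)  # close to exp(-1) ~ 0.3679
```

Dropping the data rows gives the purely physics-driven (forward) case; dropping the physics rows gives the purely data-driven case; stacking both, as above, is the hybrid assimilation the text describes.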
https://en.wikipedia.org/wiki/Theory_of_multiple_intelligences
According to Gardner’s theory of multiple intelligences, humans have several different ways of processing information, and these ways are relatively independent of one another. He has since identified and described eight different kinds of intelligence:
- Interpersonal intelligence
- Intrapersonal intelligence
- Kinesthetic intelligence
- Linguistic-verbal intelligence
- Mathematical intelligence
- Musical intelligence
- Naturalistic intelligence
- Visual-spatial intelligence
He has also proposed the possible addition of a ninth type, which he refers to as “existential intelligence”: the ability to delve into deeper questions about life and existence. People with this type of intelligence contemplate the “big” questions about topics such as the meaning of life and how actions can serve larger goals.
The theory of multiple intelligences proposes the differentiation of intelligence into specific modalities of intelligence, rather than defining intelligence as a single, general ability. According to the theory, an intelligence ‘modality’ must fulfill eight criteria:
- potential for brain isolation by brain damage
- place in evolutionary history
- presence of core operations
- susceptibility to encoding (symbolic expression)
- a distinct developmental progression
- the existence of savants, prodigies and other exceptional people
- support from experimental psychology
- support from psychometric findings
The TAU platform combines:
- natural language understanding,
- machine learning,
- explicit knowledge models,
- physmatics based engineering models, and
- automated reasoning
to empower a new class of AI applications capable of efficiently learning and delivering actionable intelligence.
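A hypothetical sketch of how these layers might compose into a pipeline. Everything here is an illustrative assumption, not the TAU API: keyword spotting stands in for natural language understanding, a dictionary stands in for the explicit knowledge model, and if/then rules stand in for the automated reasoner.

```python
# Toy pipeline: NLU -> knowledge model -> automated reasoning -> action.

KNOWLEDGE = {  # explicit knowledge model: asset -> operating limits
    "pump-7": {"max_temp_c": 80, "critical": True},
    "fan-2": {"max_temp_c": 95, "critical": False},
}

def understand(utterance: str) -> dict:
    """Minimal 'NLU': pull an asset id and a temperature out of free text."""
    tokens = utterance.lower().split()
    asset = next(t for t in tokens if t in KNOWLEDGE)
    temp = float(next(t for t in tokens if t.replace(".", "").isdigit()))
    return {"asset": asset, "temp_c": temp}

def reason(fact: dict) -> str:
    """Automated reasoning over facts + knowledge: choose an action."""
    limits = KNOWLEDGE[fact["asset"]]
    if fact["temp_c"] > limits["max_temp_c"]:
        return "shutdown" if limits["critical"] else "alert"
    return "log"

print(reason(understand("pump-7 reporting 86.5 degrees")))  # shutdown
```

The point of the layering is that the knowledge model can be updated (new assets, new limits) without retraining anything, while a learned component could replace `understand` without touching the reasoner.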
A high-level interaction architecture diagram of the TAU is illustrated below:
PREFACE
As Artificial Intelligence changes the way people live, a multimodal approach allows systems to see and perceive their surroundings. Billions of petabytes of data move through AI devices every day. Yet, at this moment, the vast majority of these AI devices work independently of one another.
However, as the volume of data moving through these devices increases in the coming years, technology organizations and implementers will need to find a way for all of them to learn, think, and work together to genuinely harness the potential that AI can deliver. An interesting report by ABI Research envisions that while the total installed base of AI devices will grow from 2.69 billion in 2019 to 4.47 billion in 2024, very few will be interoperable in the near term. Instead of consolidating the gigabytes to petabytes of data coursing through them into a single AI model or framework, they will work independently and heterogeneously to make sense of the data they are fed.
Derived from the Latin words ‘multus’, meaning many, and ‘modalis’, meaning mode, multimodality, with regard to perception, is the capacity to use multiple sensory modalities to encode and decode external surroundings. When combined, they create a unified, single view of the world. Multimodal perception transcends the world of technology. Applied to Artificial Intelligence specifically, combining different AI data sources into one model is known as Multimodal Learning.
In contrast to conventional unimodal learning frameworks, the modalities in a multimodal system can carry complementary information about one another, which only becomes evident when both are included in the learning process. Learning-based techniques that combine signals from multiple modalities can therefore produce more robust inferences, or even new insights, which would be impossible in a unimodal framework.
Multimodal learning presents two essential advantages:
- First, multiple sensors observing the same data can make more robust predictions, since detecting changes in it may only be possible when both modalities are present.
- Second, combining multiple sensors can capture complementary information or trends that individual modalities would miss.
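The first advantage can be seen in a small NumPy sketch: two noisy “sensors” observe the same quantity, and the fused estimate is more accurate than either modality alone. The sensors, noise levels, and inverse-variance fusion rule are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = np.sin(np.linspace(0, 2 * np.pi, 500))   # quantity both sensors observe

# Two modalities with independent noise (e.g., a camera and a radar proxy).
sensor_a = truth + 0.30 * rng.standard_normal(truth.size)
sensor_b = truth + 0.40 * rng.standard_normal(truth.size)

# Inverse-variance weighted fusion: weight each modality by its reliability.
wa, wb = 1 / 0.30**2, 1 / 0.40**2
fused = (wa * sensor_a + wb * sensor_b) / (wa + wb)

def mse(est):
    return float(np.mean((est - truth) ** 2))

print(mse(sensor_a), mse(sensor_b), mse(fused))
# The fused estimate has lower error than either sensor on its own.
```

Weighting by inverse variance is the standard way to combine independent noisy estimates; the fused variance, 1 / (1/0.09 + 1/0.16) ≈ 0.058, is below both individual noise variances.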
With its visual dialogue framework, Facebook appears to be pursuing a digital assistant that mimics human partners by responding to images, messages, and messages about images as naturally as a person might. For instance, given the prompt “I want to buy some chairs — show me brown ones and tell me about the materials,” the assistant might reply with a picture of brown chairs and the text “How do you like these? They have a solid brown color with foam padding.”
In the automotive space, multimodal learning appears in Advanced Driver Assistance Systems (ADAS), In-Vehicle Human Machine Interface (HMI) assistants, and Driver Monitoring Systems (DMS) for real-time inference and prediction. Robotics vendors are building multimodal systems into robotics HMIs and motion automation to widen customer appeal and enable greater collaboration between workers and robots in the industrial space. And just as we have established that human perception is subjective, the same can be said for machines.
In a time when AI is changing the way people live and work, the multimodal approach lets AI see and perceive external situations. At the same time, this methodology imitates the human approach to perception, including its imperfections. More specifically, the advantage is that machines can replicate this human way of perceiving external scenarios.
Moreover, certain AI technology can process data up to 150x faster than a human (working in concert with a human counterpart). With these developments, we are drawing nearer to mirroring human intelligence, and the possibilities are endless.
There is only one issue: multimodal systems pick up biases in datasets. The variety of questions and concepts involved in tasks like visual question answering (VQA), as well as the lack of high-quality data, often keep models from learning to “reason,” driving them to make educated guesses by relying on dataset statistics. The solution will probably involve bigger, more comprehensive training datasets. A paper published by engineers at École Normale Supérieure in Paris, Inria Paris, and the Czech Institute of Informatics, Robotics, and Cybernetics proposes a VQA dataset built from millions of narrated videos. Comprising automatically generated question–answer pairs from transcribed recordings, the dataset eliminates the need for manual annotation.
NITIN UCHIL Founder, CEO & Technical Evangelist
nitin.uchil@numorpho.com
