PROLOGUE
In this article, we step back to frame our intentions for actionable intelligence by reviewing the proceedings of the recently concluded Tesla AI Day 2022 and NVIDIA's GTC Fall 2022 conference. As we at Numorpho Cybernetic Systems (NUMO) progress with our own network to enable actionable intelligence for process automation, these lessons learned will be incorporated into our methodologies to provide the next generation of AI/ML constructs that are robust, contextual, and less brittle than current systems.
First of all, kudos to Elon Musk and the Tesla team for advancing our understanding of AI and data-driven autonomous driving, and for building mechanical joints modeled on biological functions. Their advances in chip technology, in refining the DOJO computer stack, in minimizing the processing time for simulating the complex interactions of autonomous driving, and in creating a basis for humanoid robotic functions are unparalleled. I like what both Tesla and NVIDIA are doing to advance our artificial intelligence quotient, a progression that far surpasses Moore's Law for hardware enhancements.
OVERVIEW
Putting Data to Work is great and much needed to develop a holistic view of processes so that actions can be computed and predicted with a degree of accuracy and reliability. This is key for engineering, where we look at the known knowns to iterate on exacting solutions to mechanical problems grounded in classical physics.
But data is not DNA. Armchair warriors may simulate many possible scenarios, optimize their results, and deploy them into solutions, but the data they use and generate is discrete (not analog), and there will always be gaps and edge cases where things go badly wrong. The premise of non-linear dynamics is that a butterfly flapping its wings in Brazil might affect the weather in Chicago; accounting for every single variation and nuance in data is impossible. This is why both NVIDIA and Tesla have started incorporating physics, in addition to trained and reinforced neural networks, to validate predicted results. But this still requires a lot of backend calculation in the mother ship (DOJO, in Tesla's case) to confirm the results and attune the processing of each individual outcome. I personally do not think every edge case can be simulated. Plus, why go through such complicated machinations for something as routine as driving, which every adult today has the capacity to do?
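As a rough illustration of this validation idea (our own sketch, not Tesla's or NVIDIA's actual pipeline; the predictor interface and the acceleration bound are assumptions), a simple kinematics check can gate a learned prediction and fall back to a conservative estimate when physics says the prediction is implausible:

```python
import numpy as np

# Illustrative sketch only: gate a learned velocity prediction with a
# simple kinematics sanity check (not an actual Tesla/NVIDIA method).

MAX_ACCEL = 8.0  # m/s^2, an assumed plausible bound for a road vehicle

def physically_plausible(prev_velocity, predicted_velocity, dt):
    """Reject predictions that imply an acceleration beyond the assumed bound."""
    accel = np.linalg.norm(np.asarray(predicted_velocity) - np.asarray(prev_velocity)) / dt
    return accel <= MAX_ACCEL

def validated_velocity(predict, state, dt=0.1):
    """Use the learned prediction only if it passes the physics check,
    otherwise fall back to constant-velocity extrapolation."""
    predicted = predict(state)          # hypothetical learned predictor
    if physically_plausible(state["velocity"], predicted, dt):
        return predicted
    return state["velocity"]            # conservative fallback
```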
Also, converting thought to action is a linked process, and appropriate mechanisms need to be devised to enable the movements of the fully defined machine, be it a car or a humanoid robot. This is where biology comes in: by understanding how our muscles and joints work together to create movement, we can devise engineering solutions that deliver similar or even better functionality. This is where I believe both Tesla and NVIDIA are lagging; they have not yet been able to fully work out how to put all the pieces together into a cohesive solution. In subsequent posts on Allometric Scaling and Biomimicry, we will explore this in more detail.
On a Personal Note
I just got the Full Self-Driving (FSD) Beta on my 2017 Tesla Model S and am training myself to "not panic" as she goes through her motions of negotiating the drive. Left turns in particular are more stressful, as you sit on the edge of your seat waiting for the procedure to complete. This is where the infamous yoke steering wheel might be more beneficial than a regular full-circle one.
Chicago has a lot of diagonal intersections, and I am not really convinced that the firmware is there yet to account for all the nuances. One example: the car came to a complete stop in the middle of a very wide intersection at Elston and Ashland because the light turned red. Rain Man, anyone?
Although Tesla has a large user base (to do the beta testing) and has been simulating and training its DOJO supercomputer, intelligence for even a seemingly trivial pursuit like driving needs to be correctly ascertained to achieve the right result. As Tesla discovered when it decided not to use radar as a sensory input for FSD, information speed is critical in making instantaneous decisions. I see the difference when I switch between the FSD and AP modes! One of the nice things about radar was driving in fog and rain, and the ability to anticipate sudden traffic stops when the car in front of you did not react to them.
What I would warn is: be careful, be very, very careful!
Another thing to consider is safety regulations and liability. These are key to the long-term viability of an AI-based solution that directly impacts lives. Such systems need to conform to existing rules and help define new rules when warranted.
TO SERVE MANKIND
At the AI Day event, Elon Musk made several references to the ethical behavior of robots: referring to Isaac Asimov's Laws of Robotics and even enhancing them, adhering to rules and regulations, and explaining why a company like Tesla needs to be accountable to its shareholders rather than follow Elon's direction alone. We have a social responsibility to make sure that the robots we build are put to good use and do no harm. Sometimes this means building robots specifically designed to help us with our ethical and moral decisions. But these are difficult parameters to code, since there is always debate about what is good and what is bad.
Plus, creating a humanoid robot has other issues, especially given our tremendous limitations as an organism. Perhaps it is a take on what is mentioned in the Bible, "God created man in his own image" (Genesis 1:26-27), but I think it is also because our industrial revolutions have mechanized and automated certain tasks, making it easier and routine for humans to interact with equipment. It recreates the story of the horseless carriage, when automobiles were invented and replaced horse-drawn carriages as the mode of transportation. The earliest cars had a placeholder for the horse whip because folks did not know any different.
In answering a question posed by the audience, "How would robots make life simpler?", Elon's answer was that possibly making Inspector Gadget real or creating an eight-armed robot (an octobot?) is definitely a better approach to the futuristic enablement of our man-machine interactions. Limiting ourselves to humanoid creations will only be temporary as the machines that make the machines evolve into more complicated constructions. This is already becoming evident in the arena of Additive Manufacturing, where the idea is to build products layer by layer. "Born not Built" is our philosophy for this new engineering, which we call Adaptive Engineering. In this new paradigm, production will not be restricted to linearized processes but will be multi-dimensional, with many simultaneous processes coordinating in the making.
AGI IS NOT NATURAL INTELLIGENCE
"Data usage does not equal data mindset. Putting data on the map is a big journey of understanding the ramifications and consequences of its veracity and use, especially in training AI networks," said Vanessa Eriksson, the Chief Data Officer at Zenseact. "Our AI Edge Labs is not just collecting responses but also looking at swarm learning and federated learning to understand how data can be correctly assessed for use."
Swarm and federated learning each have strengths and weaknesses. Both keep raw data local to the participants and share only model updates, which matters when data cannot be pooled. The key difference is coordination: federated learning relies on a central server that orchestrates the training rounds and aggregates the updates, giving a more controlled process, whereas swarm learning is decentralized, with each node responsible for its own learning and peers exchanging parameters directly, so there is no single point of control or failure. That decentralization also makes swarm learning easier to scale out across many nodes and large volumes of data; federated learning is more limited in this regard but simpler to govern.
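To make the coordination difference concrete, here is a minimal sketch (illustrative only; model "weights" are plain NumPy vectors and local training is abstracted away) of the aggregation step in each scheme: a central server averages client updates in federated learning, while in swarm learning each node averages directly with its neighbors.

```python
import numpy as np

# Illustrative aggregation step for federated vs. swarm learning.
# Local training is omitted; "weights" are just NumPy arrays.

def federated_aggregate(client_updates):
    """Federated learning: a central server averages the clients' updates."""
    return np.mean(client_updates, axis=0)

def swarm_aggregate(node_weights, neighbor_map):
    """Swarm learning: each node averages its weights with its neighbors',
    with no central server involved."""
    updated = {}
    for node, weights in node_weights.items():
        peers = [node_weights[p] for p in neighbor_map[node]]
        updated[node] = np.mean([weights] + peers, axis=0)
    return updated

# Example: three nodes in a ring, each exchanging only with its neighbors.
nodes = {"a": np.array([1.0]), "b": np.array([2.0]), "c": np.array([3.0])}
ring = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(federated_aggregate(list(nodes.values())))  # server-side average
print(swarm_aggregate(nodes, ring))               # peer-to-peer averages
```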
Both are useful for AGI (Artificial General Intelligence) development, but it is important to understand the implications and consequences of data veracity and use, especially in training AI networks. At Numorpho Cybernetic Systems (NUMO), we plan a graded ascension toward utilizing intelligence through appropriate data usage. We will use a multi-modal basis (drawing on multiple sources of perceptual information) to coordinate a holistic approach to actionable intelligence with our Tau Codex Orchestrator. This will define the convergence of engineering, data science, and tenets of biology to enable autonomous operations.
MULTIMODAL INTELLIGENCE
Multi-modal AI is a paradigm in which various data types such as image, text, speech, and numerical data are combined with multiple intelligence-processing algorithms to achieve higher performance. Because it engages a variety of data modalities, it leads to a better understanding and analysis of the information, and it often outperforms single-modal AI on real-world problems. Multi-modal frameworks provide sophisticated data-fusion algorithms and machine learning technologies.
Multi-modal systems, with access to both sensory and linguistic modes of intelligence, process information the way humans do. Traditionally, AI systems are unimodal: they are designed to perform a particular task, such as image processing or speech recognition, and are fed a single type of training data from which they identify corresponding images or words. The advancement of artificial intelligence relies on its ability to process multimodal signals simultaneously, just as humans do.
Multi-modal AI systems have multiple applications across industries, including aiding advanced robotic assistants, empowering advanced driver assistance and driver monitoring systems, and extracting business insights through context-driven data mining.
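As a rough sketch of one common approach, late fusion (our own illustration, not any vendor's architecture; the dimensions and class count are arbitrary), each modality is encoded separately and the embeddings are concatenated before a shared decision layer:

```python
import torch
import torch.nn as nn

# Minimal late-fusion sketch: separate encoders per modality, concatenated
# embeddings, and a shared classification head. Illustrative only.

class MultiModalFusion(nn.Module):
    def __init__(self, image_dim=512, text_dim=256, numeric_dim=16,
                 hidden=64, classes=10):
        super().__init__()
        self.image_enc = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.numeric_enc = nn.Sequential(nn.Linear(numeric_dim, hidden), nn.ReLU())
        self.head = nn.Linear(3 * hidden, classes)   # fused decision layer

    def forward(self, image_feat, text_feat, numeric_feat):
        fused = torch.cat([
            self.image_enc(image_feat),
            self.text_enc(text_feat),
            self.numeric_enc(numeric_feat),
        ], dim=-1)
        return self.head(fused)
```

Early fusion (combining raw features before encoding) and attention-based fusion are common alternatives; the right choice depends on how correlated the modalities are and how tolerant the system must be to a missing sensor.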
But multi-modal composition from different types of sensory inputs for real-time response is inherently difficult. Case in point: the Autopilot feature in Tesla cars.
Originally, the suite of Autopilot sensors, which Tesla claimed would eventually include everything needed to achieve full self-driving capability, consisted of eight cameras, a front-facing radar, and twelve ultrasonic sensors around the vehicle.
Here is a summary of the progression of Autopilot's sensing and compute capabilities:
- AP0 – No sensors
- AP1 – Autopilot driven by a single front-facing camera using Mobileye, with subsequent updates adding sensors and a front-facing radar.
- AP2 – Coordinating the input with Mobileye proved difficult, so Tesla opted for an NVIDIA GPU instead.
- AP3 – To increase processing speed, Tesla moved from NVIDIA to its own dual-chip computer to process all the sensor inputs and to provide redundancy.
- FSD1 – Hands-free driving on surface streets added.
- FSD2 – Removed radar functionality.
- FSD3 – Removed ultrasonic sensor capability.
OUR PERSPECTIVE
Most of our apprehension about AI is driven by ominous science-fiction movies. There is no reason why machines should have the same inhibitions as living beings. Plus, we are just scratching the surface of what intelligence really means.
It was conventionally deemed that only human beings and other advanced species possessed intelligence. However, the development of computers, robots, and cybernetic systems indicates that almost all aspects of intelligence may also be created, imparted to, or implemented by machines and man-made systems. The next generation of Intelligent Computing will extend AI into areas that correspond to human cognition, such as analytic or critical thinking, interpretation, and autonomous adaptation. This is crucial to overcoming the so-called "brittleness" of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding. Therefore, one of the key objectives in cybernetics is to seek a coherent theory explaining the mechanisms of both natural and machine (artificial) intelligence and their correspondence. Intelligent Computing must be able to address novel situations and use abstraction to automate ordinary human activities.
A system called Human Dynamics suggests that there are physical, emotional, and mental sorts of information (data about real things, feelings and ideas), and physical, emotional and mental ways of processing information (doing, feeling, and thinking). Each of us can work in all three modes, but we tend to specialize in one form of information and one form of processing. This produces nine modes of engagement with the world, nine human dynamics, nine styles of cognition and communication. The most widespread of these modes, called emotional-physical, is the habit of 60% of Western populations: they are centered on their feelings about physical things and conditions.
We posit that "Intelligence" operates like a trident: "think, discern, and do" (sketched as a simple loop after the list below).
- To think one must learn,
- To discern one must analyze, and
- To do one must be physically capable.
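As a loose, hypothetical sketch of this trident (the function and method names below are ours, not a published API), the three capacities can be separated explicitly in a sense-think-act style loop:

```python
# Hypothetical sketch of the "think, discern, do" trident as a control loop.
# The model, sensor, and actuator interfaces are placeholders, not a real API.

def think(observations, model):
    """Learn: update the internal model from what was observed."""
    model.update(observations)
    return model

def discern(model, goal):
    """Analyze: evaluate candidate actions against the goal and pick one."""
    candidates = model.propose_actions(goal)
    return max(candidates, key=lambda a: model.expected_value(a, goal))

def do(action, actuators):
    """Act: the chosen action matters only if the body can execute it."""
    return actuators.execute(action)

def intelligence_loop(sensors, actuators, model, goal):
    while True:
        observations = sensors.read()
        model = think(observations, model)
        action = discern(model, goal)
        do(action, actuators)
```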
Sentience is the "unitary" capacity to perceive, feel, experience subjectively, and respond appropriately, an ability that will be needed in the next-generation systems that contribute significantly to the progression of technology to the 3rd and 4th orders of Cybernetics and define an altogether new 5th order.
What if AI were to have another branch, what we have termed Existential Intelligence (EI), in which a combination of "created" neural networks and "created" genetic structures would enable sentient-ness to emerge here on earth and also outside its confines? Would such new "creatures" on this earth, and those launched into space, allow extraterrestrial exploration and research to take place without endangering humans? What if AI were coupled with a mechanistic ability that evolved as a sentient being? What if such an "artificially intelligent being" (AIB) learned to grow in an environment that is constantly changing? Elon talks about machines making machines. What if machines evolved and reproduced?
Existential Intelligence is our unique construct for future sentient beings (natural, artificial, or hybrid) that have the capability of evolving. Just as DNA enables biological forms to develop physical features ("artifacts" in IT terms), perceive, morph, learn from the environment, and change (evolve), these next-generation forms will have the ability to develop or evolve unique characteristics, to have intrinsic code, and the capacity to adapt. It is possible that our growing scientific understanding of epigenetic processes of adaptation and change could eventually be implemented in the functioning of Existentially Intelligent Beings (EIBs). To be clear, EIBs would not be mere mechanical robots remotely controlled and programmed by humans on earth; they would be self-learning, aware, non-living machines, created with a different paradigm in man-machine interactions. Various EI versions could be envisioned as:
- earthlings,
- spacelings and
- starlings,
according to the nature of their mission. The earthlings could be devised to explore inaccessible domains like deep mines and the oceans, as well as the earth and the moon; the spacelings could survey the solar system to its outer limits, including Neptune; and the starlings could initially explore the space beyond the solar system and then our own galaxy and beyond.
HORSE WITH BLINDERS ON
Rather than get caught up in managing a plethora of data, our goal is to select appropriate actions based on need. Our Mantra cybernetic framework, which accounts for a multi-modal schema representation of different types of data, will provide the basis for creating Question Focused Datasets (QFDs) that enable quick actionability, akin to how blinders kept carriage horses from being distracted by inconsequential information. It will also include genetic programming and other emergent technologies (like quantum computing) to rationalize behavior in non-traditional ways that make the response more natural rather than artificial.
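As a rough sketch (the names and schemas below are our own illustrations, not the actual Mantra or Tau Codex implementation), a Question Focused Dataset can be thought of as a question-driven filter over a multi-modal record stream that keeps only the fields relevant to the question at hand:

```python
# Illustrative sketch of a Question Focused Dataset (QFD): filter a
# multi-modal record stream down to only the fields a question needs.
# Names and schemas are hypothetical, not the actual Mantra framework.

QUESTION_SCHEMAS = {
    "is_intersection_clear": {"camera_front", "ultrasonic", "map_segment"},
    "estimate_time_to_stop": {"speed", "radar_front", "road_grade"},
}

def question_focused_dataset(records, question):
    """Yield records trimmed to the modalities relevant to the question,
    like blinders that hide everything inconsequential."""
    relevant = QUESTION_SCHEMAS[question]
    for record in records:
        yield {key: value for key, value in record.items() if key in relevant}

# Example: only the front-camera, ultrasonic, and map fields survive.
sample = [{"camera_front": "...", "radar_front": "...", "ultrasonic": "...",
           "map_segment": "...", "speed": 12.0}]
trimmed = list(question_focused_dataset(sample, "is_intersection_clear"))
```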
STAGED MATURITY – CRAWL, WALK, THEN RUN
In consulting, we used to have a moniker, "crawl, walk, run," to define the progression model for maturity. This is akin to Jean Piaget's four stages of cognitive development in humans, culminating in the capacity for abstract thinking:
- Sensorimotor stage: birth to 2 years, where sensory and ambulatory functions develop.
- Pre-operational stage: ages 2 to 7, where basic mental functions are developed.
- Concrete operational stage: ages 7 to 11, where logical thinking about concrete events is consolidated.
- Formal operational stage: ages 12 and up, where mature, abstract interactions with the environment are performed.
Piaget disagreed with the idea that intelligence was a fixed trait. He regarded cognitive development as a process that occurs through biological maturation and interaction with the environment. Future machines need to be designed with the capacity for abstract thinking.
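As a loose analogy (our own illustrative mapping, not an established standard), one could imagine gating a machine's permitted autonomy by its demonstrated maturity stage, echoing the crawl-walk-run progression:

```python
from enum import Enum

# Illustrative "crawl, walk, run" maturity gating for machine autonomy.
# The stages and capability sets are hypothetical, loosely echoing Piaget.

class Stage(Enum):
    SENSORIMOTOR = 1   # crawl: sense and move under close supervision
    OPERATIONAL = 2    # walk: routine tasks within known contexts
    FORMAL = 3         # run: abstract reasoning and novel situations

CAPABILITIES = {
    Stage.SENSORIMOTOR: {"perceive", "follow_waypoints"},
    Stage.OPERATIONAL: {"perceive", "follow_waypoints", "plan_route", "avoid_obstacles"},
    Stage.FORMAL: {"perceive", "follow_waypoints", "plan_route", "avoid_obstacles",
                   "handle_novel_situation"},
}

def is_permitted(stage, capability):
    """Allow a capability only once the machine has matured to a stage that includes it."""
    return capability in CAPABILITIES[stage]

# Example: a "walk"-stage machine may plan routes but not improvise in novel situations.
assert is_permitted(Stage.OPERATIONAL, "plan_route")
assert not is_permitted(Stage.OPERATIONAL, "handle_novel_situation")
```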
Biology is nonlinear, and our earlier reference to chaos theory underscores that responses cannot be fully pre-programmed. Machines need to evolve in stages to acquire the right behavior. In subsequent posts on Allometric Scaling and Biomimicry, we will explore this in more detail.
CONCLUSION
I believe that while engineering and technology are important and necessary, they are not enough on their own to create artificial intelligence that can match or exceed human intelligence. We also need to understand how biology and life work in order to create truly intelligent machines. Stay tuned for subsequent posts, where we will explore this in more detail, utilizing the Lacanian triad of the Real, the Symbolic, and the Imaginary to define the dimensions of psychical subjectivity.
AGI is not EI, and our goal is to change the paradigm for machine intelligence.
NI+IN UCHIL Founder, CEO & Technical Evangelist