Agentic AI

TABLE OF CONTENTS

  • LLMs and such
  • The Basis for LWMs
  • Agentic Future

LLMs AND SUCH

From a post by Jenny Nicholson on LinkedIn:

LLMs aren’t just text generators. They are world simulators. Just like us.

Here’s what I mean:

The past and future aren’t “real.” They exist inside our heads. We tell stories, make plans, dream up whole worlds in our heads. The past and future are simulations of what we think happened before and what we hope/fear will happen next. And those simulations are based on the stories our parents told us, the stories their parents told them, the soup of language we simmer in that we call “culture.” Which all makes us a lot more like LLMs than you’d think.

    1. Both humans and LLMs use language to invent realities beyond the immediate present.
    2. We both juggle complex webs of meaning (hello, semiotics!) to make sense of things.
    3. Our brains and LLMs both use intricate rules (aka syntax) to structure these language-based realities.
    4. We’re both constantly interpreting and categorizing the world (that’s what ontology is).

“But wait!” you say. “ChatGPT just generates text. That’s not REAL.”

Except: When you think back to a memory, aren’t you using language to travel back through time inside your head? When you plan your future, isn’t that a reality you’re inventing with language?

The main difference is that our language simulations come with fancy 3D graphics and multi-modal immersion (aka our senses), that we use to ground our stories.

Here’s the REAL mindf*ck: Once you realize language is a reality simulator, you can control it instead of being controlled by it. You can hack your own perceptions, try different ways of communicating, rewrite the stories you’ve been told, and change the way reality unfolds in your future.

Jenny Nicholson is spot on when she says that LLMs are not just text generators but a basis for simulating the world. She argues that the past and future are simulations constructed through language, similar to how LLMs generate text. Humans rely on stories, culture, and language to create mental worlds, just as LLMs process language to model realities. She emphasizes that by recognizing language as a simulator, people can reshape their perceptions and control their narratives. The article invites reflection on the power of language in shaping one’s reality.

Much like humans, LLMs generate meaning from complex linguistic webs, opening up new possibilities for interpreting, planning, and restructuring the world.

THE BASIS FOR LWMs

In a prior article, Jenny posits: LLMs are us. They are the collective knowledge of humanity and mirror the best of our intentions and the worst of our impulses.

In a series of exactly 42 articles titled “Making Sense out of Nonsense”, we at Numorpho Cybernetic Systems (NUMO) have questioned, conversed, reasoned with and challenged different LLMs to chart the course of our process engineering platform, the Mantra M5, which enables the five aspects of make, manage, move, market and maintain. Through this dialogue, we’ve explored the parallels between how humans and machines simulate realities using language.


Converting meaning to motion is our basis for enacting actionable intelligence. Here we plan to use Large World Models (#LWM), blueprinted with our DTWM digital twin reference architecture, to create digital spatial representations of physical scenarios: building complex architectural models, simulating real-world physics, and designing intricate products.

By incorporating LWMs together with engineering simulations and AR/VR, we will continue to reshape how our platform evolves, innovating through adaptive narratives, data-driven intelligence and contextual digital twins. These enable visual, virtual representations such as the Industrial Metaverse to facilitate smart manufacturing and other process-centric domains.

AGENTIC AI

Utilizing our Linked Solutioning partnership model, we intend to embed, integrate and amplify tools and systems from platform and service providers, creating a connected fabric for engineering data and information driven by agentic constructs: AI systems that operate autonomously while acting as agents within defined ecosystems.

AI agents are autonomous software entities designed to perform tasks, make decisions, and interact with their environment without constant human intervention. They typically consist of three key components:

1. Perception: Gathers information about the environment
2. Cognition: Processes information and makes decisions
3. Action: Executes tasks based on cognitive decisions
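The three components above can be sketched as a minimal perceive-decide-act loop. This is an illustrative sketch, not any particular framework's API; the class and method names (`SimpleAgent`, `step`, the rules table) are hypothetical.

```python
# Minimal sketch of the perception -> cognition -> action loop.
# All names here are illustrative, not from a specific agent library.

class SimpleAgent:
    def __init__(self, rules):
        self.rules = rules  # maps an observed condition to an action name

    def perceive(self, environment):
        # Perception: gather information about the environment
        return environment.get("condition")

    def decide(self, observation):
        # Cognition: choose an action based on the observation
        return self.rules.get(observation, "wait")

    def act(self, action):
        # Action: execute the task chosen by the cognition step
        return f"executing: {action}"

    def step(self, environment):
        return self.act(self.decide(self.perceive(environment)))

agent = SimpleAgent({"temperature_high": "open_valve"})
result = agent.step({"condition": "temperature_high"})
```

In a real deployment the rules table would be replaced by an LLM call or a planner, but the perceive-decide-act contract stays the same.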

AGENTIC ELEMENTS

We will use these components to create our composite agentic architecture, enabling Chain-of-Thought, Self-Reflection, and Tool-former capabilities that infuse actionability into our object interactions. Here are some definitions from a prior conversation:

  • Chain-of-Thought: Chain-of-Thought refers to a reasoning process where the AI system breaks down complex problems into a series of intermediate steps or thoughts. This approach helps the AI to tackle problems methodically, allowing it to solve complex tasks by considering each step in the reasoning process. It is particularly useful in scenarios where the AI needs to draw on multiple pieces of information or logic to arrive at a conclusion.
  • Self-Reflection: Self-Reflection in AI involves the system’s ability to evaluate its own performance and reasoning processes. This introspective capability allows the AI to identify errors or inefficiencies in its decision-making and to adjust its strategies accordingly. By reflecting on its actions and outcomes, the AI can improve over time, learning from past experiences to enhance future performance.
  • Tool-former: Tool-former is an approach where language models are trained to use external tools autonomously. This method allows AI systems to extend their capabilities beyond their initial programming by integrating with various tools and systems. Tool-former enables AI to perform tasks that require interaction with external resources, such as databases or APIs, thereby increasing its versatility and functionality.
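The three elements above can be illustrated together in a toy sketch: chain-of-thought as explicit intermediate steps, tool use as calls into a registry of external functions, and self-reflection as a check of the outcome against an expectation. The tool names and functions are hypothetical stand-ins for real APIs.

```python
# Toy illustration of chain-of-thought, tool use, and self-reflection.
# TOOLS stands in for external resources (databases, APIs, simulators).

TOOLS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def solve(expression):
    """Compute a*b + c via explicit intermediate steps (chain of thought)."""
    a, b, c = expression
    steps = []
    product = TOOLS["mul"](a, b)          # tool call for the first sub-step
    steps.append(f"mul({a},{b}) = {product}")
    total = TOOLS["add"](product, c)      # tool call for the second sub-step
    steps.append(f"add({product},{c}) = {total}")
    return total, steps

def reflect(result, expected):
    """Self-reflection: compare the outcome to an expectation."""
    return "ok" if result == expected else "revise strategy"

result, trace = solve((2, 3, 4))
verdict = reflect(result, 10)
```

The `trace` list is the agent's externalized reasoning; in practice the reflection step would feed a revised prompt or plan back into the cognition module rather than just returning a verdict.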

AGENTIC FRAMEWORK

To develop our agentic framework for Chain-of-Thought, Self-Reflection, and Tool-former functionality, we adopt a modular approach that aligns with the perception, cognition, and action components of AI agents.

1. Perception Module

  • Data Gathering Unit: Use sensors, APIs, and external tools to collect real-time data from the environment.
  • Contextual Understanding: Develop pre-processing units to interpret sensor data, contextualize object interactions, and enable situational awareness.
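A sketch of the perception module, under the assumption that sensors are represented as simple callables: a data-gathering unit polls each source, and a pre-processing step contextualizes the raw readings for situational awareness. Source and threshold names are hypothetical.

```python
# Perception module sketch: gather raw readings, then contextualize them.

def gather(sources):
    """Data gathering: collect raw readings from each source callable."""
    return {name: read() for name, read in sources.items()}

def contextualize(readings, thresholds):
    """Contextual understanding: flag readings that exceed a threshold."""
    return {
        name: {"value": value, "alert": value > thresholds.get(name, float("inf"))}
        for name, value in readings.items()
    }

sources = {"temperature": lambda: 78.0, "vibration": lambda: 0.2}
context = contextualize(gather(sources), {"temperature": 75.0})
```

The contextualized output, rather than the raw readings, is what the cognition module reasons over.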

2. Cognition Module

  • Chain-of-Thought Reasoning: Implement a step-by-step reasoning engine that breaks down problems, identifies dependencies, and constructs logical paths to solutions.
  • Self-Reflection Unit: Integrate a feedback loop for introspection, where the agent evaluates past actions and adjusts based on outcomes, enhancing decision-making for future tasks.
  • Memory and Knowledge Base: Create a repository for learned experiences, insights, and decision records, which can be referenced during future problem-solving activities.
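The memory and self-reflection units above can be combined in a small sketch: the agent records each action's outcome and biases future choices toward actions that previously succeeded. The class name and the neutral prior of 0.5 for unseen actions are illustrative assumptions.

```python
# Cognition-module memory sketch: record outcomes, prefer what worked.

from collections import defaultdict

class ExperienceMemory:
    def __init__(self):
        self.records = defaultdict(list)  # action -> list of success flags

    def record(self, action, success):
        # Store the outcome of a past decision for later reflection
        self.records[action].append(success)

    def success_rate(self, action):
        outcomes = self.records.get(action)
        if not outcomes:
            return 0.5  # unseen action: assume a neutral prior
        return sum(outcomes) / len(outcomes)

    def best_action(self, candidates):
        # Self-reflection feedback: choose the historically best candidate
        return max(candidates, key=self.success_rate)

memory = ExperienceMemory()
memory.record("reroute", True)
memory.record("reroute", True)
memory.record("retry", False)
```

A production knowledge base would persist richer decision records (context, rationale, outcome), but the feedback loop is the same: past outcomes shape future choices.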

3. Action Module

  • Tool-former Integration: Embed APIs and external tools to extend the agent’s ability to act. The agent should autonomously select tools (based on task needs) to execute actions, retrieve data, or run simulations.
  • Execution Layer: Design an execution engine that triggers the agent’s actions, whether that involves interacting with digital twins, manufacturing systems, or decision support tools.
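The action module can be sketched as a tool registry plus an execution layer: the agent selects a registered tool by the capability a task needs, and the execution layer invokes it. The capabilities shown (`simulate`, `query_twin`) are hypothetical placeholders for real APIs and digital-twin services.

```python
# Action module sketch: capability-keyed tool registry and execution layer.

class ToolRegistry:
    def __init__(self):
        self.tools = {}

    def register(self, capability, fn):
        # Embed an external tool under the capability it provides
        self.tools[capability] = fn

    def execute(self, capability, *args):
        # Execution layer: dispatch the task to the matching tool
        if capability not in self.tools:
            raise KeyError(f"no tool for capability: {capability}")
        return self.tools[capability](*args)

registry = ToolRegistry()
registry.register("simulate", lambda load: f"simulation run at load {load}")
registry.register("query_twin", lambda asset: {"asset": asset, "status": "ok"})

sim_result = registry.execute("simulate", 0.8)
```

In the composite system, the cognition module would pick the capability autonomously based on task needs; the registry keeps that choice decoupled from any one vendor's API.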

Agentic Composite System

These modules are designed to operate autonomously but are linked to ensure continuous learning and refinement. The agent dynamically switches between reasoning, self-reflection, and tool-former modes, adjusting its strategy based on its success and the environment. This framework can enable our process engineering platform to manage intelligent workflows, automate complex tasks, and evolve over time.

SUMMARY

This agentic approach enables systems to interpret, decide, and enact based on real-world data, simulations, and multi-modal inputs. Our agentic AI will leverage dynamic LWMs, Digital Twins, and immersive technologies to drive intelligent, adaptive solutions across engineering, manufacturing, and process-centric domains.

This paradigm will allow AI to not only respond but anticipate needs, facilitating continuous learning and feedback loops, ultimately enhancing the efficacy of the entire system. By infusing agentic intelligence into our Mantra M5 platform, we foresee a future where AI can independently manage processes, simulate future scenarios, and enable smarter decision-making to transform industries like manufacturing into fully connected, self-regulating ecosystems.

The future of AI is indeed agentic!

NITIN UCHIL Founder, CEO & Technical Evangelist
nitin.uchil@numorpho.com

