Engaging Disengagement

“I’m learning to fly, but I ain’t got wings. Coming down is the hardest thing” – Tom Petty


Disengagement occurs when an automated system stops working, either by intent or because of a malfunction (or a perceived malfunction). In the case of autonomously driven cars, it is the situation in which the driver feels it necessary to take control of the vehicle to correct a potentially unsafe action, or inaction, by the self-driving car.

In our prior articles on automation in manufacturing processes, we created a manager to coordinate processes across upstream, midstream, and downstream activities. Called the Digital TwineTM, it is an aggregation of Digital Threads and Digital Twins that integrates different streams of data and information to harmonize and optimize all enterprise operations.

Herewith, we will review what should happen in conjunction with automation within our development process, in production, and for our connected products, re-affirming our Outside-Inside theme. It is important to reiterate that automation should not be looked upon as a panacea for all our ills. It is a useful tool that, applied judiciously, will help optimize our processes and make them more efficient. Like everything else in life, it is not a perfect science, and there is always room for improvement. The trick is to find the right balance and not go overboard with its implementation.


Starting with a historical basis – the collapse of bridges due to harmonic vibration, the Challenger space shuttle disaster caused by brittle O-rings, the Y2K bug, the story of Tesla’s foray into autonomous driving, and others – we will set the stage for why there is a need to disengage: to step back, review, and reassess processes and make changes.

This article will focus on the modalities of disengagement – when we are not actively implementing – on how lessons learned, best practices, feedback loops, and rules and regulations need to be understood, adhered to, and applied; and on how we plan to take control of processes when intervention is needed so that things do not get out of hand. During disengagement, it is essential that we adhere to the mechanisms we have put into place; by doing so, we can take control of a situation before it spirals.

We will review blueprints for the product value chain, where design for manufacturability is key, along with considerations of security – both industrial and personal privacy – and of quality and preventative maintenance. All of these make a strong case for having an intelligent and articulated basis, and a coordinated back-office data management scheme, for process management.

This article also subscribes to our position of being proactive in action rather than reactive after the fact, trying to fix things up after disastrous consequences. As we include intelligence in our compositions, we will also revisit the modality of training such networks in a unique construct called the Dream State, wherein the neural network imagines scenarios to iterate on possibilities, thus reinforcing its behavioral habits. This will also be detailed in our multi-modal intelligence composer called the Tau Codex Orchestrator.


Structures like bridges and buildings have a natural frequency of vibration. A force applied to an object at the same frequency as its natural frequency amplifies the object’s vibration, an occurrence called mechanical resonance. If the mechanical resonance is strong enough, a bridge can vibrate until it collapses from the movement. This is why soldiers are asked to break step while marching across a bridge: when soldiers march in step, each exerts a periodic force in the same phase, so the bridge undergoes forced vibrations at the frequency of their steps. If the natural frequency of the bridge happens to equal the frequency of the steps, the bridge will vibrate with a large amplitude due to resonance and may collapse.
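The amplitude growth described above can be made concrete with the standard steady-state response of a damped, driven oscillator. The following is a minimal sketch; the damping ratio and frequency values are chosen arbitrarily for illustration:

```python
import math

def steady_state_amplitude(f_drive, f_nat, damping_ratio, forcing=1.0):
    """Steady-state amplitude of a damped, periodically driven oscillator.

    The amplitude grows sharply as the driving frequency approaches the
    natural frequency -- the mechanical resonance described above.
    """
    r = f_drive / f_nat  # ratio of driving to natural frequency
    return forcing / math.sqrt((1 - r**2) ** 2 + (2 * damping_ratio * r) ** 2)

# A lightly damped structure driven below, at, and above its natural frequency:
for ratio in (0.5, 1.0, 2.0):
    amp = steady_state_amplitude(f_drive=ratio, f_nat=1.0, damping_ratio=0.05)
    print(f"drive/natural = {ratio}: relative amplitude = {amp:.1f}")
```

At resonance (ratio 1.0) the relative amplitude is an order of magnitude larger than off-resonance, which is the marching-in-step hazard in miniature.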

  • The Broughton Suspension Bridge was an iron chain suspension bridge built in 1826 to span the River Irwell between Broughton and Pendleton, England. One of Europe’s first suspension bridges, it has been attributed to Samuel Brown, though some suggest it was built by Thomas Cheek Hewes, a Manchester millwright and textile machinery manufacturer. On 12 April 1831, the bridge collapsed, reportedly due to mechanical resonance induced by troops marching in step. As a result of the incident, the British Army issued an order that troops should “break step” when crossing a bridge.
  • The Angers Bridge, also called the Basse-Chaîne Bridge, was a suspension bridge over the Maine River in Angers, France. It was designed by Joseph Chaley and Bordillon and built between 1836 and 1839. The bridge collapsed on 16 April 1850, while a battalion of French soldiers was marching across it, killing over 200 of them.
  • In the case of the original Tacoma Narrows Bridge (the third-longest suspension bridge in the world at that time), a simple steady gust of wind above 30 mph sustained for a short time caused the bridge to oscillate and collapse due to aeroelastic flutter. The bridge received its nickname “Galloping Gertie” because of the vertical movement of the deck observed by construction workers during windy conditions. Engineers, including engineering professor F. B. Farquharson, were hired to seek ways to stop the odd movements, but months of experiments were unsuccessful. The bridge became known for its pitching deck and collapsed into Puget Sound on the morning of November 7, 1940, under high wind conditions. Engineering issues, as well as the United States’ involvement in World War II, postponed plans to replace the bridge for several years; the replacement bridge opened on October 14, 1950.

The O-ring issue refers to the Space Shuttle Challenger disaster of January 28, 1986, when the shuttle broke apart 73 seconds after launch, killing all seven crew members. The disaster was traced to a failure in one of the shuttle’s two Solid Rocket Boosters (SRBs): an O-ring on the right-hand SRB was not designed to withstand the cold temperatures present at launch, and it failed to seal properly, allowing hot gas to escape from the booster and ignite the main fuel tank. The issue was known before the launch, but it was decided to proceed anyway in order to keep to the Space Shuttle program’s schedule. That decision was widely criticized, and the disaster led to a number of changes in the way shuttle launches were conducted. Morton Thiokol, the company that built the SRBs, was penalized financially for its role in the disaster.

The Space Shuttle program also brings up a very interesting facet of progression in engineering and technology: adding safety measures and redundancies can actually slow down the rate of advancement. The Challenger disaster led to a redesign of the shuttle that incorporated redesigned SRB joints and seals, added crew pressure suits and an escape capability, and banned potentially unsafe practices and payloads from missions. The redesigned solid rocket boosters were not only safer; they also allowed the shuttle to be launched in colder weather, which had been a problem with the original design. Yet while the redesign may have made the Space Shuttle safer, it also slowed the rate of advancement in the program, because it was driven not by proactive testing and experimentation but by the aftermath of a disaster. This shows that, in order to make progress in engineering and technology, it is sometimes necessary to take risks, even if those risks may lead to disaster. Eventually, though, all the added safety measures so bloated the expense of the program that it had to be cancelled due to cost considerations.

The Millennium bug, or the Y2K scare, happened because computer systems encoded years using two digits. This represented the year 2000 as 00, with the potential to mess up interest calculations and a host of other date-related transactions. Although no major disasters occurred when the century changed, a great deal of effort and expense went into fixing the issues. One of the outcomes was the meteoric rise in Information Technology and data-related activities.
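The failure mode, and one common remediation, fit in a few lines. The sketch below uses "windowing", a widely used Y2K fix that maps two-digit years into a sliding century; the pivot value of 50 is an illustrative choice, not a standard:

```python
def expand_two_digit_year(yy, pivot=50):
    """Expand a two-digit year using the 'windowing' fix common in Y2K
    remediation: values below the pivot map to 20xx, the rest to 19xx.

    (The pivot of 50 is an illustrative choice.)
    """
    return 2000 + yy if yy < pivot else 1900 + yy

# Naive two-digit arithmetic breaks at the century boundary:
naive_elapsed = 0 - 99  # "00" minus "99" looks like -99 years
fixed_elapsed = expand_two_digit_year(0) - expand_two_digit_year(99)
print(naive_elapsed, fixed_elapsed)  # -99 years vs. the correct 1 year
```

An interest calculation spanning 1999 to 2000 gets a negative elapsed time under the naive scheme, which is exactly the class of error banks scrambled to fix.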

Fast forward to today and we find ourselves in the midst of another potential Y2K problem – the connected world. With the rollout of the Internet of Things (IoT), cars, appliances, and other devices are becoming increasingly connected, and as a result, more and more susceptible to failure and hacking.

A perfect example of this is Tesla’s autonomous driving capability, which comprises multiple sensory devices – cameras, radar, and ultrasonic sensors – that need to be coordinated by an effective GPU. Its hardware evolved through several generations:

    • AP0, which had no self-driving enabling devices;
    • AP1, which added a single camera-based MobilEye system to review the path ahead, along with radar;
    • AP2, which used a more robust NVIDIA GPU to coordinate multiple cameras (8), radar (1), and ultrasonic sensors (16); and
    • AP3, which uses Tesla’s own Full Self-Driving (FSD) chip for better and faster coordination of the sensory output.

What Tesla realized as it progressed with AP3 was that the radar stream was complicating the codebase and causing lag that hampered responsive auto-driving, so the radar device was removed from later vehicles. Although MobilEye was deemed inadequate for Tesla’s needs, the company was bought by Intel in 2017 for about $15 billion and became part of Intel’s autonomous driving unit, which is to be spun off in an IPO reportedly valued at as much as $50 billion.

A non-engineering but related issue also confronted Tesla’s co-founder Elon Musk when he decided to sell 10% of his stake in Tesla, leaving him with a reported tax bill of roughly $11 billion – the largest individual tax payment ever. What happened here was a failure in the accounting and tax software, which needed to be upgraded to account for more zeros.


Manufacturers embracing Industry 4.0 fall into four categories: standard, automated, intelligent, and resilient. On average, it takes 12-18 months to transform a standard factory into a resilient one. Today’s standard factories are lean, and the first step these companies take on their Industry 4.0 journey is implementing a Manufacturing Execution System (MES) or focusing on value-stream collaboration. These manufacturers are beginning to use alerts, create reports, and understand exactly what problem a machine has and what actions are needed.

The automated factory is already using robotics and robotic processes; it is more sophisticated, leveraging elements of machine learning and understanding predictive maintenance. Here companies can answer the why: Why does my machine go down at a specific time every day? Or, if I make this change, what happens next? The ability to ascertain when something needs to be ordered, or when something will fail, creates new subscription-based business models – a value-add that lets OEMs build intelligence around their solutions and help customers maximize equipment use and uptime.


Maintenance is an integral part of keeping a plant online. Over time it has migrated from reactive to preventative. The next stage is predictive, where real-time condition monitoring makes it possible to optimize repair schedules and reduce unplanned downtime.

Just like peeling an onion, engaging with disengagement deals with maintenance that is either reactive or proactive, with the proactive kind divided into preventative, predictive, and prescriptive types. Reactive fixing happens after a failure has occurred, while proactive maintenance involves constant monitoring to ascertain the health of the equipment:

  • Preventative maintenance is a type of maintenance that is used to prevent equipment failures or malfunctions. This type of maintenance typically includes inspecting, cleaning, and lubricating equipment.
  • Predictive maintenance is a type of maintenance that uses data analytics to predict when equipment will fail or malfunction. This type of maintenance typically includes replacing or repairing equipment before it fails.
  • Prescriptive maintenance is a type of maintenance that proactively monitors equipment and takes corrective action well before failure, recommending the specific steps to take.
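The three proactive strategies can be sketched as trigger rules. The toy illustration below assumes a simple z-score condition signal; the service interval, threshold, and fault-code playbook are all assumptions for illustration, not a real CMMS API:

```python
import statistics

def preventative_due(hours_since_service, service_interval=500):
    """Preventative: service on a fixed schedule, regardless of condition."""
    return hours_since_service >= service_interval

def predictive_due(vibration_history, threshold=1.5):
    """Predictive: flag when the condition signal jumps above its baseline.

    Uses a simple z-score of the latest reading against the earlier ones;
    a real system would use a trained model or vendor analytics.
    """
    mean = statistics.mean(vibration_history[:-1])
    stdev = statistics.stdev(vibration_history[:-1])
    z = (vibration_history[-1] - mean) / stdev if stdev else 0.0
    return z > threshold

def prescriptive_action(fault_code):
    """Prescriptive: map a predicted fault to a recommended action.

    The fault codes and actions here are illustrative placeholders.
    """
    playbook = {
        "bearing_wear": "reduce spindle speed 20% and schedule replacement",
        "overheat": "increase coolant flow and inspect heat exchanger",
    }
    return playbook.get(fault_code, "dispatch technician for inspection")

print(preventative_due(520))                      # fixed schedule exceeded
print(predictive_due([1.0, 1.1, 0.9, 1.0, 2.4]))  # latest reading is an outlier
print(prescriptive_action("bearing_wear"))
```

The progression is visible in the signatures: preventative needs only a clock, predictive needs condition data, and prescriptive adds a recommended response.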

Modern maintenance, reliability, and operations teams need technology that works the way they work – in the field. However, most software systems are still desktop-based, even though their end users never sit in front of a desk. This makes it difficult for teams to perform their best work because they’re tied to software that’s disjointed, outdated, and hard to use. Asset Operations is purpose-built to bring together maintenance, operations, and reliability data to make important business decisions, with full visibility across the entire life cycle of maintenance, asset management, and operations. Asset Operations Management threads together an organization’s technician services, passive and active data, and unique operational blueprint to make it easier and faster for every employee to get what they need to do their jobs successfully.


Intelligent factories are experimenting with design optimization and have moved from predictive to prescriptive insights and actions. This includes real-time adaptive analytics, self-learning and correcting machines, and data-driven machines and processes. The resilient factories are autonomous in nature and use technology to take an average worker and turn them into an advanced worker.


In our last article, we talked about the need for Automation and how it should be looked upon as a tool to help us optimize our processes. We also discussed the need for proper planning and execution in order for it to be effective. In this article, we will discuss what should happen in conjunction with Automation within our development process, production and for our connected products.

An enterprise is an aggregation of people, processes, and platforms that come together to provide the basis for the company. There are many reasons to automate and many areas throughout a business in which to automate. While production lines naturally lend themselves to automation, ERPs and other software tools add tremendous value by removing time-consuming and low-value tasks such as data collection and report generation. To get the best return, we look for areas that remove effort, take out risk, improve machine in-cycle time, and/or reduce variances.

The following are key items that we need to account for in the various stages of solutioning and productization:

  • In the development process, it is important that the various stages of design, prototyping and testing are done in a coordinated and synchronized manner. Automation can be used to help with this by ensuring that the various steps are done in a consistent and repeatable manner.
  • In production, the use of Automation can help to speed up the manufacturing process and improve the quality of the products. It can also help to identify and correct any problems that may occur during the production process.
  • For our connected products, Automation can be used to help manage and monitor the products and the various systems they are connected to, and to collect and analyze data from them. This data can then be used to improve the performance of both the products and those systems.

Here are the high-level attributes within each of the streams:

Upstream Development Process
  1. Planning – Planning can be coordinated correctly by automating the generation of test data and test cases.
  2. Design – Automation can help the design process by generating code skeletons and testing scripts.
  3. Implementation – Automation can help the implementation process by generating source code and test data.
  4. Verification – Automation can help the verification process by generating test reports.

Midstream Production Processes
  1. Procurement – Material and machine procurement, and the creation of the right workforce.
  2. Assembly – Automation can help the assembly process through robotics and repeatable work instructions.
  3. Testing – Automation can help the testing process by executing product tests consistently and at scale.
  4. Packaging – Automation can help the packaging process by standardizing and speeding up packaging operations.
  5. Shipping – Automation can help the shipping process through label generation, tracking, and logistics coordination.

Downstream Production Processes
  1. Configuration – Automation can help by applying product configurations consistently.
  2. Activation – Automation can help by streamlining product activation and provisioning.
  3. Monitoring – Automation can help by continuously monitoring deployed products.
  4. Maintenance – Automation can help by scheduling and performing maintenance on deployed products.

For coordinated functioning between the three streams, we have created a reference model that is a composition of a design philosophy for innovation (MANTHANTM), a process manager for IoT and Industry and Services 5.0 (Digital TwineTM), and a Systems Interaction Protocol (Tendril ConnectorTM), as shown below:

Encompassing the three tenets will be our fourth – the TAU Codex TransformerTM – which will coordinate intelligence using a multi-modal basis for composing behavior based on different perceptions.

The following sub-sections relate to blueprints for implementations we are developing to create robust processes that account for understandings reached during disengagements.

The Premise: A Product Value Chain encompasses all aspects of a product from cradle to grave. It goes before and beyond PLM activities by including innovation and ideation upstream, and warranty and maintainability downstream, including sustainability and recycling considerations. PLM and blockchain combined are synergistic for product innovation. With PLM and blockchain, it becomes possible to trace a product at every stage of its lifecycle with full transparency, providing valuable information such as the origin of raw materials, its processing, shipment details, tracking log, and usability. Where PLM can gather data on raw materials and suppliers, screen regulations, and in the end help with the overall approval of the product, blockchain allows each step of production to be safely mapped and saved.

In Use Case 10, we establish the basis by creating a blockchain-based distributed ledger that manages master data and all transactions so that they are non-duplicatable and immutable. It will cover genesis, master data, BOM, metadata, regulatory compliance, and warranty and maintainability, all maintained in a distributed ledger. As other enterprise streams come into play, the basis formed by this stream will enable a concerted development and implementation of all aspects of the product and solution, as shown in the dynamic diagram below. This gives us accountability at every step of the process.
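The immutability property rests on hash chaining. As a minimal sketch of the general idea (not the Use Case 10 implementation; the record fields `part`, `bom_rev`, and `cert` are illustrative), each block commits to the hash of the previous block, so tampering with any earlier entry breaks every later link:

```python
import hashlib
import json

def _hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Append a record, linking it to the previous block's hash so any
    later tampering with earlier entries breaks the chain."""
    block = {
        "index": len(chain),
        "prev_hash": _hash(chain[-1]) if chain else "genesis",
        "record": record,
    }
    chain.append(block)
    return block

def verify(chain):
    """Re-walk the chain and confirm every link still matches."""
    return all(
        chain[i]["prev_hash"] == _hash(chain[i - 1]) for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, {"type": "master_data", "part": "PN-1001", "bom_rev": "A"})
append_block(ledger, {"type": "compliance", "part": "PN-1001", "cert": "RoHS"})
print(verify(ledger))                  # True: chain is intact
ledger[0]["record"]["bom_rev"] = "B"   # tamper with an earlier entry
print(verify(ledger))                  # False: the tampering is detected
```

A real distributed ledger adds consensus and replication across parties; the hash chain above is only the tamper-evidence layer.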

The entities in the distributed ledger will form the basis of the disengaged system that remains joined at the hip with the active processes, validating, assuring, and enabling them with minimal friction. Although the current emphasis is on cryptocurrency – Bitcoin, NFTs, and such – we believe the true value of blockchain will be asserted in managing product information, smart contracts between parties, and a single record of truth. We intend to be the forerunner in developing and utilizing this capability in building products and services of the future that are smart, connected, and sustainable.

The Premise: Security is critical to the success of Industry 4.0 adoption. The resilience of production processes strongly depends on the manufacturing companies’ awareness of the current threat landscape and the employed security framework for protecting against attacks. However, now that information technology (IT), operational technology (OT), and intellectual property (IP) assets are being integrated, a whole new range of security issues also arises. One thing that makes cybersecurity so challenging for manufacturers is that companies are only as strong as the weakest link in their supply chains. As organizations digitize and become more interconnected, they must be constantly vigilant with not only their own security practices but also with those of their suppliers. 

The IoT security problem can be summed up in one word – fragmentation. Unlike the early days of the internet, when all devices were connected to a single network, the IoT is made up of many different networks, each with its own standards, protocols, and devices. This fragmentation makes it difficult for companies to create a single, unified security strategy. To address the IoT security problem, companies must take a holistic approach to security that encompasses all devices and networks. This includes developing a security strategy that accounts for the various fragmentation issues, as well as the increasing number of devices and networks.

In this Use Case (#12), as depicted above, we blueprint the different needs of providing robust, fail-safe mechanisms that not only manage privacy and access but also secure automated devices and account for rules and regulations, so that products and services are not adversely impacted by unwelcome intrusions. Starting with profiles – identity and access management and all that it entails – through IP protection and the managing and securing of operations, to building products and services that conform to regulations: this is key to managing the businesses of today and tomorrow. With breaches, hacking, and ransomware on the rise, it is imperative that a holistic approach to managing security, governing operations, and accounting for regulations be in place and part of the ecosystem.

The Premise: A core value of Industry 4.0 is that factory equipment will become smarter in how it is monitored and maintained using IIoT (Industrial Internet of Things) and intelligent sensors, raising the bar on asset performance requirements. Today, most preventative maintenance is performed using a schedule-based maintenance system which is based on statistical analysis of historical data such as MTBF (Mean Time Between Failure) to determine when a piece of equipment should be serviced. However, with the advances in sensor technologies and predictive software solutions that are currently in the market there is now a demand in the manufacturing industry for upgrading legacy preventative maintenance systems with a more proactive approach using either Condition Based Maintenance or Artificial Intelligence (AI) Based Maintenance.

These new methodologies can help provide foresight to operations leaders on when equipment will need to be serviced which avoids unnecessary repairs, costly shutdowns from failures, and degradation of operating efficiency. In addition, other key indicators such as output, or machine performance can be benchmarked to create thresholds for prompting maintenance activity or investigation. While these new methodologies can provide numerous benefits, many manufacturers lack the resources to effectively select, implement, and operate these new systems.

Reactive -> Preventative -> Predictive -> Prescriptive
In Use Case 15, we create a basis for predictive maintenance by first monitoring the equipment, either by retrofitting it with sensors (brown-field) or by utilizing edge services on new equipment (green-field and blue-sky), to manage and store pertinent transient information that is then analyzed to provide conditional rule-based or neural-net-trained behavior that detects faults before they happen.
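The rule-based half of this blueprint can be sketched as an edge-side monitor, assuming a retrofit sensor feeding a small rolling buffer: an alert fires when the recent average drifts past a configured limit. The window size and temperature threshold below are illustrative, not values from the use case:

```python
from collections import deque

class ConditionMonitor:
    """Minimal sketch of edge-side condition monitoring: keep a short
    rolling buffer of readings and raise a maintenance event when the
    recent average drifts past a configured limit."""

    def __init__(self, window=5, limit=80.0):
        # Transient data only, as an edge device would hold it.
        self.buffer = deque(maxlen=window)
        self.limit = limit

    def ingest(self, reading):
        """Add one reading; return True if a maintenance event should fire."""
        self.buffer.append(reading)
        avg = sum(self.buffer) / len(self.buffer)
        return avg > self.limit

monitor = ConditionMonitor(window=3, limit=75.0)
temps = [70, 72, 71, 78, 82, 85]  # simulated bearing temperatures
alerts = [monitor.ingest(t) for t in temps]
print(alerts)  # the alert fires once the rolling average crosses the limit
```

Averaging over a window rather than alerting on single readings is what makes this "condition-based": a one-sample spike is ignored, a sustained drift is not. A neural-net-trained detector would replace the threshold rule with a learned model over the same buffered stream.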

We will coordinate with our Engineering Systems partner, Hexagon Manufacturing Intelligence, to utilize their set of tools for monitoring, simulating, and intelligently suggesting maintenance schedules so that downtime can be minimized. All of the data provisioning will happen in Microsoft’s Cloud for Manufacturing ecosystem so that data management, security, and identity-based access are seamlessly governed.

One important aspect of disengagement is training our neural networks in a unique construct called the Dream State. In the Dream State, the neural network will imagine scenarios to iterate on possibilities, thus reinforcing its behavioral habits. This will help to ensure that the neural network behaves in a desired manner during re-engagement.

Google’s image recognition software, which can detect, analyze, and even auto-caption images, uses artificial neural networks to simulate the human brain. In a process they call “inceptionism”, Google engineers set out to see what these artificial networks “dream” of – what, if anything, do they see in a nondescript image of clouds, for instance? What does a fake brain that’s trained to detect images of dogs, for example, see when it’s shown a picture of a knight?
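The Dream State is this article's own construct, but the core idea of imagining scenarios to rehearse behavior resembles model-based reinforcement learning, where an agent rolls out a learned world model instead of acting in the real environment. The sketch below is a toy under that assumption; the names (`dream_rollouts`, `toy_model`) and the one-dimensional world are hypothetical, not the Tau Codex implementation:

```python
import random

def dream_rollouts(world_model, policy, start_state, n_dreams=100, horizon=5):
    """Score a policy purely on imagined trajectories: the agent replays
    its learned world model rather than touching the real environment,
    and the average imagined return reinforces or discourages habits."""
    total = 0.0
    for _ in range(n_dreams):
        state = start_state
        for _ in range(horizon):
            action = policy(state)
            state, reward = world_model(state, action)  # imagined step
            total += reward
    return total / n_dreams

# Toy world: the state is a number the agent wants to drive toward zero.
def toy_model(state, action):
    nxt = state + action + random.uniform(-0.1, 0.1)  # slightly noisy dynamics
    return nxt, -abs(nxt)  # reward is closeness to zero

cautious = lambda s: -0.1 if s > 0 else 0.1  # small corrective steps
eager = lambda s: -1.0 if s > 0 else 1.0     # large corrective steps

random.seed(0)
print(dream_rollouts(toy_model, cautious, start_state=2.0))
print(dream_rollouts(toy_model, eager, start_state=2.0))
```

Starting far from the goal, the eager policy scores a higher imagined return, so a dream-state training loop would reinforce it – all without a single real-world trial, which is the point of rehearsing during disengagement.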

In our progression of defining intelligent machines, we posit that for true understanding and behavior (sentience), we need to move beyond the current definition of Artificial Intelligence (AI) – Supervised, Unsupervised and Reinforced Learning – to a construct that evolves both physically and mentally using current and newly defined cybernetic mechanisms as the basis.

We will detail this further as we explore the Lacanian philosophy of the Imaginary, the Symbolic and the Real realms. Existential Intelligence will be that basis for our pivot to the fifth Industrial Revolution (Industry and Services 5.0) and beyond. Utilizing our construct called the 5th order of Cybernetics, we intend to build products and solutions that are actionable, pragmatic, contextual, and relational, reducing brittleness and imparting sentient qualities in them.

Return on Investment KPIs
The key, as with all Industry 4.0 technologies, is to make investments with an eye on the bigger picture. To get the most out of a predictive-analytics package, one must first pinpoint a problem to solve and have a robust IT backbone and automation capabilities to back it up. Maintenance has key performance indicators (KPIs) of overall equipment effectiveness and downtime percentage. Each should not necessarily be considered by itself but as part of a holistic review. It is easier to monitor and improve these key plant operation areas with a mobile solution that provides real-time data and connections to backend systems.

The following are key performance indicators (KPIs) and metrics that we should be aware of:

  • Budgeted maintenance cost vs. actual maintenance cost – This financial KPI is important to plant managers and top floor management. If the actual maintenance cost is higher than the forecast, it is time to determine the possible issues through analysis. Are emergency work orders happening more than scheduled ones? Do low inventory levels mean a lot of rushed replacement shipping? Is the staffing level and skill set the right mix or is there a reliance on outside vendors? Answers to these and other questions will guide managers to the necessary resolution.
  • Maintenance backlog – A list of incomplete work orders past their due date. A small percentage is typically not a concern and is considered normal. However, if the number grows over time, it becomes an issue and can lead to deferred maintenance that impacts operations. It is preferable to keep the backlog under three work-weeks per technician.
  • Schedule compliance – This performance metric measures the effectiveness of scheduled maintenance. The percentage shows the number of work orders completed by their due date, typically over a set time period such as a month or quarter. If schedule compliance is below the optimum of 90%, analyze the work-order process, skills gaps, and available support and tools.
  • Service vs. failure repairs – An efficiency KPI that shows how many work orders were scheduled compared to emergency or breakdown reasons. If this ratio is low, then you’re spending more money and time on fighting fires rather than managing operations.
  • Work order completion – It measures a team’s productivity and is generally shown as a percentage. Operations strive for a high number, say 90%. However, this is not a standalone efficiency KPI because rushed, completed jobs can result in near-future failures.
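The KPIs above reduce to simple ratios. A sketch of how three of them might be computed; the 40-hour work-week assumption and the sample figures are illustrative:

```python
def schedule_compliance(completed_on_time, total_scheduled):
    """Percentage of scheduled work orders finished by their due date."""
    return 100.0 * completed_on_time / total_scheduled

def backlog_weeks_per_tech(backlog_hours, technicians, hours_per_week=40):
    """Overdue work expressed in work-weeks per technician; the guidance
    above suggests keeping this under about three."""
    return backlog_hours / (technicians * hours_per_week)

def service_vs_failure_ratio(scheduled_orders, emergency_orders):
    """Scheduled vs. emergency work orders; a low value means the team is
    firefighting rather than managing operations."""
    return scheduled_orders / emergency_orders

print(f"{schedule_compliance(81, 90):.0f}%")  # right at the 90% optimum
print(backlog_weeks_per_tech(480, 4))         # 3.0 work-weeks per technician
print(service_vs_failure_ratio(120, 30))      # 4 scheduled jobs per emergency
```

As the text notes, no single ratio stands alone: a 90% completion rate earned by rushing jobs will show up later as a worsening service-vs-failure ratio.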

Predictive analytics and machine learning enable manufacturers to identify patterns and reveal insights about all stakeholders, products and production that would otherwise remain hidden. By keying in on what sensor values equate to better production output, manufacturers reduce variance and increase productivity. When they are applied as part of a larger strategic plan, intelligence packages pay for themselves in months, not years, by revealing opportunities to run shops smarter, faster, and better.



Forecasting performance in our manufacturing facilities continues to improve, with sharper predictive-maintenance approaches enabling decision-makers to accurately see what’s about to happen and adjust strategies to capitalize on these insights.

Companies must also make sure that their products are secure from the outset, rather than waiting until after the product has been released to the market – tying into our theme of disengagement. They must also be prepared to respond to security breaches and attacks, which includes developing a plan for responding to attacks and repairing any damage caused. And they should be able to predict maintenance needs to prevent failures and downtime.

The Defense Industrial Base (DIB) is the target of increasingly frequent and complex cyberattacks. To protect American ingenuity and safeguard sensitive national security information, the Department of Defense (DoD) launched CMMC 2.0, a comprehensive framework to protect the DIB from these attacks.

MxD, the Design for Manufacturing organization that we are members of, is part of the National Center for Cybersecurity in Manufacturing. It uses its factory floor as a demonstration area for existing cybersecurity technology and works to develop new tools that address very specific pain points for manufacturers. It is working with industry and government to figure out how to get these tools to small and medium-sized manufacturers, most of which do not even have in-house tech support, let alone cybersecurity expertise.

The Y2K problem was resolved by upgrading computer systems to handle four-digit years; the IoT security problem will be resolved by upgrading computer systems to handle billions of devices. The IoT is still in its early stages, and there are many unknowns about its future. One thing is certain, however: the IoT will continue to evolve and grow. It is a disruptive technology that is already changing the way we live and work, and as more and more devices become connected it will grow even more pervasive. To take advantage of its full potential, companies must anticipate and embrace the changes and evolve with the technology.

No matter which of the stages – standard, automated, intelligent, or resilient – a manufacturer falls under, it all begins with transforming data into intelligent actions. You can make a factory smart with connected assets, predictive suggestions, strategic robots, and self-optimized actions, but the next iteration moving manufacturing forward is creating a smarter supply chain that provides visibility, alerts, and predictive outcomes. When this is accomplished, companies increase productivity, create additional capacity, minimize unplanned downtime, and embody a mindset of making every minute count.

Disengagement for us means being proactive in pre-defining constructs prior to development and implementation. We need to think about our architectures and designs before coding even begins by creating models and prototypes. Disengagement is also about being able to quickly adapt to changes in the marketplace and our stakeholders’ needs. We need to be able to pivot when necessary and be ready to change our plans and strategies on the fly. This requires a high degree of flexibility and responsiveness, which we can achieve through good communication, collaboration, and coordination.

Our philosophy for design and innovation, and our reference architecture blueprints for process management are good examples of how we engage during disengagement. We also need to be very mindful of quality and testing throughout the disengagement process. We need to ensure that our code is written in a modular and cohesive manner, and that our tests are exhaustive. Finally, we need to be very careful about how we communicate our disengagement plans and progress to our stakeholders. We want to be sure that they understand our goals, and that we are taking the necessary steps to achieve them.

NI+IN UCHIL Founder, CEO & Technical Evangelist
