DADA X is our core event-driven Causal AI technology. Embedding it into a chip will allow autonomous management of security, operations, diagnostics, and self-recovery/healing at run time.
Our chipset will include four chips, integrated to control complex distributed AI systems (sketched below):
DADA X (Agent) Chip
AI Discovery Chip
Run-time NLA / Dynamic Speech Recognition Chip
Run-time Neuromorphic Vision Chip
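The sketch below gives a rough, firmware-level view of how the four chips could be wired together as cooperating event-driven components. The interface and class names (Chip, Chipset, dispatch) are illustrative assumptions, not the actual chip API.

```java
// Illustrative sketch only: a hypothetical firmware-level view of the four
// cooperating chips as components that each react to run-time events.
import java.util.List;

interface Chip {
    void onEvent(String event); // each chip reacts to run-time events in its own role
}

public class Chipset {
    private final List<Chip> chips;

    public Chipset(List<Chip> chips) {
        this.chips = chips;
    }

    // Fan each incoming event out to every chip in the set.
    public void dispatch(String event) {
        chips.forEach(chip -> chip.onEvent(event));
    }

    public static void main(String[] args) {
        Chipset chipset = new Chipset(List.of(
            chip("DADA X Agent"),
            chip("AI Discovery"),
            chip("Run-time NLA / Speech Recognition"),
            chip("Neuromorphic Vision")));
        chipset.dispatch("sensor.frame.received");
    }

    // Helper that builds a named placeholder chip which just logs the event.
    private static Chip chip(String name) {
        return event -> System.out.println(name + " chip handling: " + event);
    }
}
```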
Decision-Zone’s platform builds natural language and vision applications based on causal understanding. We have already developed a learning algorithm that can generate UML automatically. For even greater capability, the most significant use case is the production and use of DADA X in a chip, because its auditing capability is key to achieving AGI.
The Shift from Processing to Acquisition in AGI Development
Focus on the learning process for language and visual pattern recognition
Traditional view: Perception and language as data processing problems
Our approach: Grounding via information acquisition
Language and vision acquisition are the learning process
The AGI system understands and generates language, and recognizes visual patterns
Key Points:
Aim for an AGI system that’s continuously learning, innovating, and creating
Importance of first thought actualization through acquisition
Engage with the event stream and prioritize acquisition over passive processing for deeper understanding
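To make the acquisition-over-processing distinction concrete, the hypothetical sketch below updates a tiny internal model incrementally from each event as it arrives, rather than batch-processing a stored dataset. Class and method names (AcquisitionLoop, acquire) are assumptions for illustration only.

```java
// Illustrative sketch: incremental acquisition from a live event stream
// instead of passive batch processing of stored data.
import java.util.HashMap;
import java.util.Map;

public class AcquisitionLoop {
    // Running counts of observed event-to-event transitions (a tiny "model").
    private final Map<String, Integer> transitionCounts = new HashMap<>();
    private String previousEvent = null;

    // Called once per event, as it happens: the model grows with experience.
    public void acquire(String event) {
        if (previousEvent != null) {
            String transition = previousEvent + " -> " + event;
            transitionCounts.merge(transition, 1, Integer::sum);
        }
        previousEvent = event;
    }

    public static void main(String[] args) {
        AcquisitionLoop loop = new AcquisitionLoop();
        for (String event : new String[]{"door.open", "light.on", "door.close"}) {
            loop.acquire(event);
        }
        System.out.println(loop.transitionCounts);
    }
}
```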
The Top-Level Agent Chip
DADA X technology is the key to building cognitive and intelligent systems using discovered or predefined behavior models.
It is the foundation of event-based AGI.
DADA X is fully built, patented, and ready to be implemented / embedded into a chip.
Works in real time by monitoring events on the input side.
No limit on how many event sources can be used.
Ensures correct operation and secures the system.
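A minimal sketch of the monitoring idea, assuming the agent checks each incoming event against an allowed-transition behavior model and flags deviations at run time. The model format and names (RuntimeMonitor, observe) are illustrative, not the DADA X implementation.

```java
// Illustrative sketch: a run-time monitor that checks each observed event
// against an allowed-transition model and flags deviations.
import java.util.Map;
import java.util.Set;

public class RuntimeMonitor {
    // Allowed next events for each state (a stand-in for a learned behavior model).
    private final Map<String, Set<String>> allowed;
    private String state;

    public RuntimeMonitor(Map<String, Set<String>> allowed, String initialState) {
        this.allowed = allowed;
        this.state = initialState;
    }

    // Returns true if the event conforms to the model; otherwise reports a violation.
    public boolean observe(String event) {
        Set<String> next = allowed.getOrDefault(state, Set.of());
        if (!next.contains(event)) {
            System.out.println("VIOLATION: '" + event + "' not allowed after '" + state + "'");
            return false;
        }
        state = event;
        return true;
    }

    public static void main(String[] args) {
        RuntimeMonitor monitor = new RuntimeMonitor(
            Map.of("idle", Set.of("authenticate"),
                   "authenticate", Set.of("transfer", "idle")),
            "idle");
        monitor.observe("authenticate"); // conforms to the model
        monitor.observe("shutdown");     // flagged as a deviation
    }
}
```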
Baseline Model Learning
Our proprietary algorithm combines causal and standard AI techniques to dynamically extract baseline behavior process models that capture the logic of a system and build state machines (see the sketch below).
Works in real time by listening to events during system execution.
Training occurs on events, not log data.
Creates a UML process graph.
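As a hedged illustration of the general idea, the sketch below accumulates observed event transitions during execution and emits them as a simple PlantUML state diagram standing in for the UML process graph. The learning step shown here is a naive transition recorder, not the proprietary causality algorithm.

```java
// Illustrative sketch: record event transitions during execution and
// emit them as a simple PlantUML state diagram (a UML process graph).
import java.util.LinkedHashSet;
import java.util.Set;

public class BaselineModelLearner {
    private final Set<String> transitions = new LinkedHashSet<>();
    private String previousEvent = null;

    // Listen to each event as the system executes; record the transition.
    public void onEvent(String event) {
        if (previousEvent != null) {
            transitions.add(previousEvent + " --> " + event);
        }
        previousEvent = event;
    }

    // Render the learned transitions as PlantUML text.
    public String toPlantUml() {
        StringBuilder uml = new StringBuilder("@startuml\n");
        for (String t : transitions) {
            uml.append(t).append("\n");
        }
        return uml.append("@enduml\n").toString();
    }

    public static void main(String[] args) {
        BaselineModelLearner learner = new BaselineModelLearner();
        for (String e : new String[]{"login", "query", "logout", "login", "query"}) {
            learner.onEvent(e);
        }
        System.out.println(learner.toPlantUml());
    }
}
```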
Run-time NLA / Dynamic Speech Recognition
Our methodology for Natural Language Acquisition utilizes Grammar Causal Event patterns.
Natural Language Adapter: transforms unstructured data into structured data by capturing the causal relationships between words in phrases, sentences, and documents (sketched below).
Each language requires event pattern libraries to be created.
Policies can be enforced on current LLMs, aligning their outputs; a ChatGPT adapter / Java connector can be used.
Benefits: a more privacy-friendly and secure alternative to traditional data-driven LLMs.
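The sketch below illustrates the adapter concept under strong simplifying assumptions: it matches a few hard-coded causal connectives to turn free text into structured (cause, effect) records, standing in for a language-specific event pattern library. The class, method, and pattern names are hypothetical.

```java
// Illustrative sketch: turn unstructured sentences into structured
// (cause, effect) records by matching simple causal connectives.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NaturalLanguageAdapter {
    // Minimal stand-in for a language-specific event pattern library.
    private static final Pattern CAUSAL =
        Pattern.compile("(.+?)\\s+(?:because|causes|leads to)\\s+(.+)", Pattern.CASE_INSENSITIVE);

    public record CausalRelation(String cause, String effect) {}

    // Extract structured causal relations from free text, one sentence at a time.
    public List<CausalRelation> adapt(String text) {
        List<CausalRelation> relations = new ArrayList<>();
        for (String sentence : text.split("\\.")) {
            Matcher m = CAUSAL.matcher(sentence.trim());
            if (m.matches()) {
                // "because" reverses the order: the effect comes first in the sentence.
                if (sentence.toLowerCase().contains("because")) {
                    relations.add(new CausalRelation(m.group(2).trim(), m.group(1).trim()));
                } else {
                    relations.add(new CausalRelation(m.group(1).trim(), m.group(2).trim()));
                }
            }
        }
        return relations;
    }

    public static void main(String[] args) {
        var adapter = new NaturalLanguageAdapter();
        System.out.println(adapter.adapt(
            "The pump stopped because the valve failed. High load leads to latency."));
    }
}
```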
Preserving Context
Our methodology for developing vision systems is event-based. Image instruction sets based on patterns must be generated by observing the activities that connect states together. It works in real time by observing complex event patterns (see the sketch below).
The first objective in development is to create the required libraries.
Benefits: improved object recognition, adaptability, robustness to noise and occlusions, enhanced situational awareness, predictive capabilities, and effective multi-object tracking.
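A minimal sketch of event-based pattern matching over a stream of vision events, assuming upstream detection already emits events such as "detected" and "crossed_line". The event types and class names are illustrative assumptions rather than the actual instruction sets or libraries.

```java
// Illustrative sketch: detect a complex event pattern over a stream of
// vision events (object appears, then crosses a boundary) in real time.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class VisionEventPattern {
    public record VisionEvent(String objectId, String type) {} // e.g. "detected", "crossed_line"

    private final Deque<VisionEvent> recent = new ArrayDeque<>();

    // Observe one event at a time; report when the complex pattern completes.
    public void observe(VisionEvent event) {
        recent.addLast(event);
        boolean detectedBefore = recent.stream()
            .anyMatch(e -> e.objectId().equals(event.objectId()) && e.type().equals("detected"));
        if (event.type().equals("crossed_line") && detectedBefore) {
            System.out.println("Pattern matched: " + event.objectId() + " was detected and crossed the line");
        }
        if (recent.size() > 100) {
            recent.removeFirst(); // bounded memory for real-time operation
        }
    }

    public static void main(String[] args) {
        VisionEventPattern detector = new VisionEventPattern();
        List.of(new VisionEvent("car-7", "detected"),
                new VisionEvent("car-7", "moved"),
                new VisionEvent("car-7", "crossed_line"))
            .forEach(detector::observe);
    }
}
```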