Recursive Self-Aware Predictive Agent

This project is an implementation of the "Recursive Self-Aware Predictive Agent," an autonomous agent founded on the core principles of recursive self-awareness and prediction error minimization. The system aims to achieve advanced cognitive capabilities—including introspection, empathy, and ethically-constrained behavior—by treating the self, others, and the environment within a single, unified generative model.
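As a conceptual illustration of the prediction-error-minimization principle, the minimal sketch below shows an agent that predicts its next observation from a latent state and nudges that state to reduce the error. The PredictiveModel class and its methods are hypothetical names chosen for this example; they do not correspond to the project's modules.

    # Minimal sketch of prediction error minimization (illustrative only;
    # the PredictiveModel class below is hypothetical, not a project module).
    import numpy as np

    class PredictiveModel:
        """Toy linear generative model: predicts the next observation from a latent state."""

        def __init__(self, state_dim: int, obs_dim: int, learning_rate: float = 0.05):
            self.state = np.zeros(state_dim)                    # latent self/world state
            self.W = np.random.randn(obs_dim, state_dim) * 0.1  # generative mapping
            self.lr = learning_rate

        def predict(self) -> np.ndarray:
            return self.W @ self.state

        def update(self, observation: np.ndarray) -> float:
            """Nudge the latent state to reduce prediction error; return the error norm."""
            error = observation - self.predict()
            self.state += self.lr * self.W.T @ error            # gradient step on squared error
            return float(np.linalg.norm(error))

    model = PredictiveModel(state_dim=4, obs_dim=3)
    for step in range(10):
        err = model.update(np.array([1.0, 0.5, -0.2]))          # a fixed toy observation
        print(f"step {step}: prediction error = {err:.3f}")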

Project Status

This project has completed the implementation of its core modules across all four phases of the development roadmap, including a conceptual demonstration of autonomous operation. The integrated system demonstrates prediction, introspection, and ethically-constrained action selection.

Getting Started

  1. Clone the repository:

    git clone https://github.com/kadubon/ASI.git
    cd ASI
  2. Create and activate the virtual environment:

    # Install uv if you haven't already
    pip install uv
    
    # Create the virtual environment
    uv venv
    
    # Activate the environment
    # On Windows
    .\.venv\Scripts\activate
    # On macOS/Linux
    source .venv/bin/activate
  3. Install dependencies:

    uv pip install -r requirements.txt

Development Roadmap

All four main phases of development have been completed:

  1. Phase 1: The Predictive Brain (Core Engine) - Implemented modules for prediction, world modeling, and memory.
  2. Phase 2: The Introspective Brain (Self-Awareness) - Integrated self-modeling and upgraded core modules to account for self-state.
  3. Phase 3: The Empathetic Brain (Social & Ethical Cognition) - Implemented empathic alignment and moral constraint mechanisms.
  4. Phase 4: Full System Integration & Scaling - Integrated all modules into an AgentOrchestrator for a unified cognitive cycle (sketched conceptually below) and implemented action generation.

For a detailed breakdown of tasks, please see ToDo.md.
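To make the unified cognitive cycle mentioned in Phase 4 more concrete, the toy sketch below walks through one observe -> predict -> introspect -> constrain -> act pass. The class and method names are illustrative assumptions only; they are not the project's actual AgentOrchestrator interface.

    # Conceptual sketch of a single cognitive cycle: observe -> predict -> introspect
    # -> check moral constraints -> act. Class and method names are illustrative
    # assumptions and do not reflect the project's actual AgentOrchestrator interface.
    from dataclasses import dataclass, field

    @dataclass
    class ToyOrchestrator:
        memory: list = field(default_factory=list)

        def predict(self, observation: str) -> str:
            # Placeholder world-model prediction for the latest observation.
            return f"expected follow-up to '{observation}'"

        def introspect(self, prediction: str) -> dict:
            # Placeholder self-model: record the agent's confidence in its own prediction.
            return {"prediction": prediction, "confidence": 0.5}

        def is_permitted(self, action: str) -> bool:
            # Placeholder moral constraint: veto actions flagged as harmful.
            return "harm" not in action

        def cycle(self, observation: str) -> str:
            prediction = self.predict(observation)
            self_state = self.introspect(prediction)
            action = f"respond (confidence {self_state['confidence']})"
            if not self.is_permitted(action):
                action = "abstain"
            self.memory.append((observation, prediction, action))
            return action

    agent = ToyOrchestrator()
    print(agent.cycle("another agent greets you"))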

Interactive CLI

An interactive command-line interface (cli_app.py) is available for simulating observations and actions and for querying the agent's internal state. The CLI integrates a real text embedding model, so other agents' observations can be supplied as free text.

To run the CLI application (with the virtual environment activated as described in Getting Started):

python cli_app.py
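
For readers unfamiliar with text embeddings, the snippet below shows one common way to turn a textual observation into a vector. The sentence-transformers library and the all-MiniLM-L6-v2 model are assumptions made for this illustration; the embedding backend actually used by cli_app.py may differ.

    # Illustrative only: one common way to embed a free-text observation into a vector.
    # The sentence-transformers library and model name are assumptions made for this
    # example; the embedding backend actually used by cli_app.py may differ.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    observation_text = "The other agent moved toward the exit."
    embedding = model.encode(observation_text)  # fixed-length float vector
    print(embedding.shape)                      # (384,) for this particular model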

Autonomous Operation

A conceptual autonomous operation simulation (autonomous_agent_sim.py) demonstrates the agent's ability to continuously observe, make decisions, and act within a simplified environment without direct human intervention.

To run the autonomous simulation (with the virtual environment activated):

python autonomous_agent_sim.py
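
Conceptually, the simulation follows an observe -> decide -> act loop. The toy loop below illustrates that pattern only; the environment and policy are stand-ins, not the logic implemented in autonomous_agent_sim.py.

    # Toy observe -> decide -> act loop, illustrating the pattern of autonomous
    # operation. The environment and policy are stand-ins, not the logic
    # implemented in autonomous_agent_sim.py.
    import random

    def observe(env_state: float) -> float:
        return env_state + random.gauss(0.0, 0.1)   # noisy reading of the environment

    def decide(observation: float) -> str:
        return "increase" if observation < 0.0 else "decrease"

    def act(env_state: float, action: str) -> float:
        return env_state + (0.1 if action == "increase" else -0.1)

    env_state = 1.0
    for step in range(5):
        obs = observe(env_state)
        action = decide(obs)
        env_state = act(env_state, action)
        print(f"step {step}: obs={obs:+.2f}, action={action}, state={env_state:+.2f}")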

Foundational Axioms

The system's architecture is derived from a set of foundational axioms that describe a reality model where the universe is a self-organizing, self-observing computational loop. For more details, refer to requirements_definition.md.

Licence

This project is released under the MIT Licence (https://mit-license.org/).
