LoomOS Platform Architecture & Operations Guide

LoomOS is a comprehensive distributed AI runtime and orchestration platform designed for enterprise-scale machine learning workloads. It provides unified infrastructure for training, verifying, and deploying AI models with built-in safety, auditability, and horizontal scalability.

Executive Summary

LoomOS addresses critical challenges in production AI systems:
  • Scale: Distribute training across hundreds of nodes with automatic resource management
  • Safety: Built-in verification (Prism) for model safety, factuality, and compliance
  • Auditability: Complete event sourcing and lineage tracking via LoomDB
  • Flexibility: Modular architecture supporting multiple ML frameworks and cloud providers
  • Reliability: Fault-tolerant design with automatic failover and recovery

Core Architecture

LoomOS follows a distributed, microservices-based architecture with five primary subsystems that work together to provide a complete AI operations platform.

1. Nexus - Distributed Coordination Layer

The Nexus system provides cluster-wide coordination and resource management.

Master Nodes handle:
  • Job scheduling and resource allocation
  • Cluster state management and coordination
  • API endpoints for client interactions
  • Health monitoring and automatic failover
Worker Nodes execute:
  • Training and inference workloads
  • Model deployment and serving
  • Data preprocessing and validation
  • Distributed computation tasks
Coordination Protocol:
# Nexus Master Configuration
from core.nexus import MasterCoordinator, ClusterConfig, ConsensusConfig

cluster_config = ClusterConfig(
    cluster_id="production-cluster",
    region="us-west-2",
    availability_zones=["us-west-2a", "us-west-2b", "us-west-2c"]
)

consensus_config = ConsensusConfig(
    algorithm="raft",
    election_timeout_ms=5000,
    heartbeat_interval_ms=1000,
    log_replication_batch_size=100
)

master = MasterCoordinator(
    config=cluster_config,
    consensus=consensus_config,
    bind_address="0.0.0.0:8080"
)

# Start master with high availability
await master.start(enable_ha=True)
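
Worker nodes connect to the master over the same coordination protocol. A minimal sketch of the worker side, assuming a WorkerNode class in core.nexus with roughly this interface:

# Nexus Worker Configuration (sketch -- WorkerNode/WorkerConfig are assumed names)
from core.nexus import WorkerNode, WorkerConfig

worker_config = WorkerConfig(
    cluster_id="production-cluster",
    master_address="nexus-master.internal:8080",
    capabilities={"gpu_count": 4, "gpu_type": "A100"},
    heartbeat_interval_ms=1000
)

worker = WorkerNode(config=worker_config)

# Register with the master and begin accepting workloads
await worker.start()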

2. LoomDB - Event Sourcing & Audit System

LoomDB provides comprehensive event sourcing and audit capabilities for complete system observability.

Event Store Features:
  • Immutable append-only event log
  • Horizontal partitioning for scale
  • Real-time event streaming
  • Complex event processing (CEP)
Audit & Compliance:
  • Complete model lineage tracking
  • User action auditing
  • Data access logging
  • Compliance reporting (SOC2, HIPAA, GDPR)
# Comprehensive Event Logging Example
from core.loomdb import LoomDB, EventType, AuditContext, ComplianceLevel

db = LoomDB(
    connection_string="postgresql://user:pass@localhost/loomdb",
    event_store_config={
        "partition_strategy": "time_based",
        "retention_days": 2555,  # 7 years for compliance
        "compression": "lz4",
        "replication_factor": 3
    }
)

# Create audit context for traceability
context = AuditContext(
    user_id="data_scientist_123",
    session_id="sess_456",
    request_id="req_789",
    compliance_level=ComplianceLevel.SOC2_TYPE2
)

# Log training lifecycle events
await db.log_event(
    event_type=EventType.MODEL_TRAINING_START,
    data={
        "model_name": "gpt-custom-v2",
        "dataset_id": "healthcare_data_2024",
        "dataset_size_gb": 150.5,
        "hyperparameters": {
            "learning_rate": 0.0001,
            "batch_size": 64,
            "num_epochs": 10,
            "optimizer": "adamw",
            "warmup_steps": 1000
        },
        "hardware_config": {
            "node_count": 8,
            "gpu_per_node": 4,
            "gpu_type": "A100",
            "total_gpu_memory_gb": 320
        }
    },
    context=context,
    tags=["healthcare", "phi_data", "production"]
)

# Query events for compliance audits
compliance_events = await db.query_events(
    event_types=[
        EventType.DATA_ACCESS,
        EventType.MODEL_TRAINING_START,
        EventType.MODEL_DEPLOYMENT
    ],
    time_range=(
        datetime(2024, 1, 1),
        datetime(2024, 12, 31)
    ),
    filters={
        "tags": ["phi_data"],
        "compliance_level": ComplianceLevel.SOC2_TYPE2
    },
    include_metadata=True
)

# Generate compliance report
report = await db.generate_compliance_report(
    report_type="SOC2_TYPE2",
    time_period="2024-Q1",
    include_user_actions=True,
    include_data_lineage=True
)

3. Distributed Scheduler - Resource & Job Management

The Distributed Scheduler provides advanced job orchestration with intelligent resource allocation.

Scheduling Features:
  • Multi-tenant resource isolation
  • Gang scheduling for distributed jobs
  • Preemption and priority handling
  • Auto-scaling based on queue depth
Resource Management:
  • GPU topology awareness
  • Memory-optimized placement
  • Network bandwidth allocation
  • Storage I/O optimization
# Advanced Scheduling Configuration
from core.scheduler import (
    Scheduler, JobSpec, ResourceRequirements,
    SchedulingPolicy, AutoScalingConfig, QoSLevel,
    JobDependency, DependencyType, CheckpointConfig,
    MonitoringConfig, JobPhase
)

# Configure advanced scheduler
scheduler = Scheduler(
    scheduling_policy=SchedulingPolicy.FAIR_SHARE,
    preemption_enabled=True,
    gang_scheduling=True,
    resource_overcommit_ratio=1.2
)

# Auto-scaling configuration
autoscaling = AutoScalingConfig(
    min_nodes=5,
    max_nodes=100,
    scale_up_threshold=0.8,   # CPU/GPU utilization
    scale_down_threshold=0.3,
    scale_up_delay_seconds=300,
    scale_down_delay_seconds=600
)

await scheduler.configure_autoscaling(autoscaling)

# Define complex resource requirements
resources = ResourceRequirements(
    # Compute resources
    cpu_cores=32,
    memory_gb=256,
    gpu_count=8,
    gpu_type="A100",
    gpu_memory_gb=80,
    
    # Storage requirements
    local_ssd_gb=1000,
    shared_storage_gb=5000,
    io_operations_per_sec=10000,
    
    # Network requirements
    network_bandwidth_gbps=25,
    infiniband_required=True,
    
    # Placement constraints
    node_affinity={"datacenter": "us-west"},
    node_anti_affinity={"maintenance": "scheduled"},
    
    # QoS and priority
    qos_level=QoSLevel.GUARANTEED,
    priority=85,
    preemptible=False
)

# Create sophisticated job specification
job_spec = JobSpec(
    name="large_language_model_training",
    algorithm="weave_distributed",
    
    # Container configuration
    container_image="loomos/training:v1.2.3",
    command=["python", "train_llm.py"],
    working_directory="/workspace",
    
    # Resource allocation
    resources=resources,
    
    # Job dependencies and workflow
    dependencies=[
        JobDependency(
            job_name="data_preprocessing",
            type=DependencyType.COMPLETION,
            output_artifacts=["preprocessed_data"]
        ),
        JobDependency(
            job_name="checkpoint_validation",
            type=DependencyType.SUCCESS,
            timeout_seconds=1800
        )
    ],
    
    # Environment and configuration
    environment_variables={
        "CUDA_VISIBLE_DEVICES": "0,1,2,3,4,5,6,7",
        "NCCL_DEBUG": "INFO",
        "PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:512"
    },
    
    # Checkpoint and recovery
    checkpoint_config=CheckpointConfig(
        enabled=True,
        interval_minutes=30,
        max_checkpoints=10,
        storage_backend="s3://checkpoints/llm-training/"
    ),
    
    # Monitoring and notifications
    monitoring_config=MonitoringConfig(
        metrics_interval_seconds=60,
        log_level="INFO",
        alert_on_failure=True,
        notification_channels=["slack://ai-team", "email://[email protected]"]
    ),
    
    # Execution constraints
    max_runtime_hours=72,
    retry_limit=3,
    failure_tolerance=0.1  # Allow 10% task failures
)

# Submit job with advanced options
job_handle = await scheduler.submit_job(
    job_spec=job_spec,
    dry_run=False,
    wait_for_resources=True,
    priority_boost=True
)

# Monitor job execution with detailed metrics
async for update in scheduler.stream_job_updates(job_handle.job_id):
    print(f"Job {update.phase}: {update.progress}% complete")
    print(f"Resource utilization: CPU {update.cpu_usage}%, GPU {update.gpu_usage}%")
    print(f"Estimated completion: {update.eta}")
    
    # Handle job state transitions
    if update.phase == JobPhase.FAILED:
        # Analyze failure and potentially retry
        failure_analysis = await scheduler.analyze_job_failure(job_handle.job_id)
        if failure_analysis.retryable:
            await scheduler.retry_job(job_handle.job_id)

4. Reinforcement Learning Infrastructure

LoomOS includes a comprehensive RL training system with advanced algorithms; a hedged submission sketch follows the feature lists below.

Supported Algorithms:
  • Proximal Policy Optimization (PPO)
  • Deep Q-Networks (DQN) and variants
  • WEAVE (proprietary multi-agent algorithm)
  • Custom algorithm integration
Training Features:
  • Distributed experience collection
  • Asynchronous policy updates
  • Multi-environment training
  • Hierarchical reinforcement learning
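
As an illustration, a distributed PPO run can be expressed as a job and submitted through the scheduler shown in the previous section. This is a sketch under assumed names (core.rl, RLTrainingSpec, PPOConfig); see the RL System guide for the actual API:

# Hedged sketch: distributed PPO training submission.
# RLTrainingSpec, PPOConfig, and their fields are assumed names, not confirmed API.
from core.rl import RLTrainingSpec, PPOConfig

rl_spec = RLTrainingSpec(
    name="ppo_robotics_policy",
    algorithm="ppo",
    algorithm_config=PPOConfig(
        clip_ratio=0.2,          # standard PPO clipping coefficient
        gamma=0.99,              # discount factor
        gae_lambda=0.95,         # generalized advantage estimation
        rollout_length=2048
    ),
    num_environments=64,         # distributed experience collection
    num_policy_workers=4         # asynchronous policy updates
)

job_handle = await scheduler.submit_job(rl_spec)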

5. Blocks & Adapters - Integration Ecosystem

Blocks & Adapters form a modular integration system supporting diverse ML ecosystems; a hedged registration sketch follows the lists below.

Model Adapters:
  • OpenAI API integration
  • Hugging Face model hub
  • Custom model backends
  • Multi-modal model support
Infrastructure Adapters:
  • Kubernetes orchestration
  • Cloud provider integration (AWS, GCP, Azure)
  • On-premises deployment
  • Hybrid cloud configurations
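
As an illustration, wiring a Hugging Face backend through the adapter layer might look like the following sketch; AdapterRegistry and HuggingFaceAdapter are assumed names, not confirmed API:

# Hedged sketch: registering a Hugging Face model backend.
from core.adapters import AdapterRegistry, HuggingFaceAdapter

registry = AdapterRegistry()
registry.register(
    "hf-llama",
    HuggingFaceAdapter(
        model_id="meta-llama/Llama-2-7b-hf",
        device_map="auto",       # spread layers across available GPUs
        revision="main"
    )
)

adapter = registry.get("hf-llama")
response = await adapter.generate("Summarize the audit log schema.")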

System Requirements & Sizing

Development Environment

Minimum Requirements:
  • CPU: 4 cores (Intel/AMD x86_64)
  • Memory: 8GB RAM
  • Storage: 50GB SSD
  • Network: 100Mbps connection
  • OS: Linux (Ubuntu 20.04+), macOS 11+, Windows 10 with WSL2
Recommended Development Setup:
  • CPU: 8-16 cores
  • Memory: 32GB RAM
  • Storage: 500GB NVMe SSD
  • GPU: NVIDIA RTX 3080 or better (for local training)
  • Network: 1Gbps connection

Production Environment

Small Production Cluster (10-50 nodes):
  • CPU: 32+ cores per node (Intel Xeon or AMD EPYC)
  • Memory: 256GB+ RAM per node
  • Storage: 2TB+ NVMe SSD per node
  • GPU: 4-8x NVIDIA A100 or H100 per training node
  • Network: 25Gbps with RDMA support
  • Redundancy: 3x master nodes, N+2 worker redundancy
Large Production Cluster (100+ nodes):
  • CPU: 64+ cores per node
  • Memory: 512GB+ RAM per node
  • Storage: 10TB+ NVMe SSD with 100K+ IOPS
  • GPU: 8x NVIDIA H100 per training node
  • Network: 100Gbps InfiniBand fabric
  • Redundancy: 5x master nodes across availability zones

Installation Guide

Quick Start (Development)

For evaluation and development purposes:
# 1. Clone repository and setup environment
git clone https://github.com/loomos/loomos.git
cd loomos

# Create and activate virtual environment
python3.11 -m venv venv
source venv/bin/activate  # Linux/macOS
# OR: venv\Scripts\activate  # Windows

# 2. Install dependencies
pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-dev.txt
pip install -e .

# 3. Start local development cluster
docker-compose -f docker-compose.dev.yml up -d

# 4. Initialize database and run migrations
python scripts/init_database.py
python scripts/run_migrations.py

# 5. Run comprehensive demo
python examples/quick_demo.py

# Expected output:
# [INFO] LoomOS cluster started successfully
# [INFO] Submitting test job...
# [INFO] Job 'test_training' accepted with ID: job_12345
# [INFO] Training progress: 100% complete
# [INFO] Model artifacts saved to: ./artifacts/model_v1.pt
# [INFO] Demo completed successfully

Verification Steps

# Check service health
curl http://localhost:8080/health
# Response: {"status": "healthy", "version": "1.2.3", "uptime": "5m32s"}

# Test job submission
curl -X POST http://localhost:8080/api/v1/jobs \
  -H "Content-Type: application/json" \
  -d '{"name": "test_job", "algorithm": "ppo", "resources": {"gpu_count": 1}}'

# Monitor cluster status
python scripts/cluster_status.py

Production Installation

Infrastructure Prerequisites

# 1. Install Docker and container runtime
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER

# 2. Install NVIDIA Container Toolkit (for GPU nodes)
# (the legacy nvidia-docker2 repository is deprecated; use the current toolkit repo)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# 3. Verify GPU access
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi

Database Setup (Production)

# PostgreSQL for event store and metadata
sudo apt-get install postgresql-14 postgresql-contrib
sudo -u postgres psql -c "CREATE DATABASE loomdb;"
sudo -u postgres psql -c "CREATE USER loomuser WITH ENCRYPTED PASSWORD 'secure_random_password';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE loomdb TO loomuser;"

# Redis for caching and job queues
sudo apt-get install redis-server
sudo systemctl enable redis-server
sudo systemctl start redis-server

# Configure Redis for production
sudo sed -i 's/# maxmemory <bytes>/maxmemory 4gb/' /etc/redis/redis.conf
sudo sed -i 's/# maxmemory-policy noeviction/maxmemory-policy allkeys-lru/' /etc/redis/redis.conf
sudo systemctl restart redis-server

LoomOS Configuration

# config/production.yml
cluster:
  name: "production-cluster"
  region: "us-west-2"
  environment: "production"
  
database:
  host: "loomdb.internal"
  port: 5432
  name: "loomdb"
  user: "loomuser"
  password: "${LOOM_DB_PASSWORD}"  # From environment or secrets manager
  ssl_mode: "require"
  connection_pool_size: 50
  max_overflow: 100

redis:
  host: "redis.internal"
  port: 6379
  db: 0
  password: "${REDIS_PASSWORD}"
  ssl: true
  connection_pool_size: 20

security:
  tls:
    enabled: true
    cert_file: "/etc/ssl/certs/loomos.pem"
    key_file: "/etc/ssl/private/loomos.key"
    ca_file: "/etc/ssl/certs/ca.pem"
  
  authentication:
    method: "oauth2"
    provider: "keycloak"
    issuer_url: "https://auth.company.com/realms/loomos"
  
  authorization:
    rbac_enabled: true
    default_role: "user"
    admin_roles: ["admin", "platform-engineer"]

monitoring:
  prometheus:
    enabled: true
    port: 9090
    path: "/metrics"
  
  jaeger:
    enabled: true
    agent_host: "jaeger-agent"
    agent_port: 6831
  
  logging:
    level: "INFO"
    format: "json"
    output: ["stdout", "file"]
    file_path: "/var/log/loomos/loomos.log"
    max_file_size_mb: 100
    max_backup_count: 10

scheduler:
  max_concurrent_jobs: 500
  job_timeout_hours: 168  # 7 days
  resource_overcommit_ratio: 1.1
  
autoscaling:
  enabled: true
  min_workers: 10
  max_workers: 1000
  scale_up_threshold: 0.8
  scale_down_threshold: 0.2
  scale_up_delay_minutes: 5
  scale_down_delay_minutes: 15
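
Placeholders such as ${LOOM_DB_PASSWORD} are resolved at load time from the environment or a secrets manager. A minimal sketch of that expansion, assuming plain PyYAML rather than LoomOS's actual loader:

# Hedged sketch: load production.yml and expand ${VAR} placeholders from the
# environment. Assumes PyYAML; LoomOS's real config loader may differ.
import os
import re
import yaml

_VAR = re.compile(r"\$\{([^}]+)\}")

def load_config(path: str) -> dict:
    with open(path) as f:
        raw = f.read()
    # Replace every ${NAME} with the matching environment variable
    # (raises KeyError if a referenced variable is unset -- fail fast)
    expanded = _VAR.sub(lambda m: os.environ[m.group(1)], raw)
    return yaml.safe_load(expanded)

config = load_config("config/production.yml")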

Security & Compliance

Transport Layer Security

# Generate TLS certificates for production
openssl genrsa -out loomos.key 4096
openssl req -new -key loomos.key -out loomos.csr \
  -subj "/C=US/ST=CA/L=San Francisco/O=Company/CN=loomos.company.com"
openssl x509 -req -in loomos.csr -signkey loomos.key -out loomos.crt -days 365
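
The certificate above is self-signed and appropriate for testing; production clusters should use certificates issued by a trusted CA. In either case the result can be inspected before installation:

# Inspect the generated certificate (subject, validity, key size)
openssl x509 -in loomos.crt -noout -text

# Confirm the key and certificate match (the two digests must be equal)
openssl rsa -in loomos.key -noout -modulus | openssl md5
openssl x509 -in loomos.crt -noout -modulus | openssl md5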

Role-Based Access Control (RBAC)

# Define custom roles and permissions
from core.security import RBACManager, Role, Permission

rbac = RBACManager()

# Define permissions
permissions = [
    Permission("jobs:create", "Create new training jobs"),
    Permission("jobs:read", "View job status and logs"),
    Permission("jobs:cancel", "Cancel running jobs"),
    Permission("models:deploy", "Deploy models to production"),
    Permission("cluster:admin", "Administer cluster resources")
]

# Define roles
data_scientist = Role(
    name="data_scientist",
    permissions=["jobs:create", "jobs:read", "jobs:cancel"]
)

ml_engineer = Role(
    name="ml_engineer", 
    permissions=["jobs:create", "jobs:read", "models:deploy"]
)

platform_admin = Role(
    name="platform_admin",
    permissions=["*"]  # All permissions
)

# Assign roles to users
await rbac.assign_role("user123", "data_scientist")
await rbac.assign_role("user456", "ml_engineer")

Audit & Compliance

# Generate compliance reports
from core.compliance import ComplianceManager, ReportType

compliance = ComplianceManager()

# SOC 2 Type II compliance report
soc2_report = await compliance.generate_report(
    report_type=ReportType.SOC2_TYPE2,
    time_period="2024-Q1",
    include_controls=[
        "CC6.1",  # Logical access controls
        "CC6.2",  # Authentication and authorization
        "CC7.1",  # System monitoring
    ]
)

# GDPR data processing report
gdpr_report = await compliance.generate_report(
    report_type=ReportType.GDPR,
    time_period="2024-Q1",
    include_data_subjects=True,
    include_processing_activities=True
)

Monitoring & Observability

Metrics Collection

LoomOS exposes comprehensive metrics via Prometheus:
# Custom application metrics
from core.monitoring import MetricsCollector, MetricType

metrics = MetricsCollector()

# Counter metrics
metrics.increment(
    name="jobs_submitted_total",
    tags={"algorithm": "weave", "priority": "high"},
    value=1
)

# Histogram metrics for latency
metrics.record_histogram(
    name="job_execution_duration_seconds",
    value=3600.5,  # 1 hour execution
    tags={"status": "completed", "node_type": "gpu"}
)

# Gauge metrics for current state
metrics.set_gauge(
    name="active_workers_count",
    value=42,
    tags={"datacenter": "us-west-2"}
)

Health Checks & Alerts

# Comprehensive health check endpoints
curl http://localhost:8080/health/detailed

# Response includes:
# {
#   "status": "healthy",
#   "timestamp": "2024-01-15T10:30:00Z",
#   "version": "1.2.3",
#   "uptime_seconds": 86400,
#   "components": {
#     "database": {"status": "healthy", "latency_ms": 5.2},
#     "redis": {"status": "healthy", "memory_usage": "45%"},
#     "scheduler": {"status": "healthy", "active_jobs": 23},
#     "workers": {"status": "healthy", "available": 45, "total": 50}
#   },
#   "resource_usage": {
#     "cpu_percent": 35.2,
#     "memory_percent": 67.8,
#     "disk_percent": 45.1
#   }
# }
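
Because metrics are exported in Prometheus format, alerting can be handled with standard Prometheus rules. A sketch using the active_workers_count gauge from the metrics example above (the metric name is assumed to be exported as-is):

# prometheus/alerts.yml -- sketch; assumes LoomOS exports active_workers_count
groups:
  - name: loomos
    rules:
      - alert: LoomOSWorkerShortage
        expr: active_workers_count{datacenter="us-west-2"} < 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Fewer than 10 active LoomOS workers in us-west-2 for 5 minutes"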

Performance Optimization

Database Tuning

-- PostgreSQL optimization for high-throughput workloads
ALTER SYSTEM SET shared_buffers = '8GB';
ALTER SYSTEM SET effective_cache_size = '24GB';
ALTER SYSTEM SET work_mem = '256MB';
ALTER SYSTEM SET maintenance_work_mem = '2GB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET wal_buffers = '64MB';
ALTER SYSTEM SET default_statistics_target = 500;

-- Reload configuration (note: shared_buffers and wal_buffers only take
-- effect after a full server restart, not a reload)
SELECT pg_reload_conf();

-- Create optimized indexes for LoomDB
CREATE INDEX CONCURRENTLY idx_events_timestamp_type 
ON events (timestamp DESC, event_type) 
INCLUDE (data);

CREATE INDEX CONCURRENTLY idx_events_user_session 
ON events (user_id, session_id, timestamp DESC);

GPU Memory Optimization

# Advanced GPU memory management
from core.gpu import GPUManager, MemoryPoolConfig

gpu_manager = GPUManager()

# Configure memory pools for efficient allocation
memory_config = MemoryPoolConfig(
    initial_pool_size_gb=16,
    max_pool_size_gb=64,
    growth_factor=1.5,
    enable_unified_memory=True,
    fragmentation_threshold=0.3
)

await gpu_manager.configure_memory_pools(memory_config)

# Enable memory optimization features
gpu_manager.set_memory_growth(enabled=True)
gpu_manager.set_memory_limit_per_process(limit_gb=32)
gpu_manager.enable_memory_defragmentation(interval_minutes=30)

# Monitor GPU utilization
gpu_stats = await gpu_manager.get_utilization_stats()
print(f"GPU Memory Usage: {gpu_stats.memory_used_gb:.1f}GB / {gpu_stats.memory_total_gb:.1f}GB")
print(f"GPU Utilization: {gpu_stats.gpu_utilization_percent:.1f}%")

Troubleshooting Guide

Common Issues & Solutions

1. Job Scheduling Failures

Symptoms:
  • Jobs stuck in "pending" state
  • Resource allocation errors
  • Scheduling timeout errors
Diagnosis:
# Check scheduler logs
docker logs loomos-scheduler

# Check resource availability
curl http://localhost:8080/api/v1/cluster/resources

# Examine job requirements vs available resources
loomos jobs describe job_12345 --verbose
Solutions:
# Adjust resource requirements
from core.scheduler import ResourceRequirements, SchedulerConfig

resources = ResourceRequirements(
    cpu_cores=16,        # Reduced from 32
    memory_gb=64,        # Reduced from 128
    gpu_count=2,         # Reduced from 4
    priority=50          # Lower priority
)

# Or enable resource overcommit
scheduler_config = SchedulerConfig(
    resource_overcommit_ratio=1.3,  # Allow 30% overcommit
    preemption_enabled=True
)

2. GPU Memory Exhaustion

Symptoms:
  • CUDA out of memory errors
  • Training job failures
  • GPU utilization drops to zero
Diagnosis:
# Monitor GPU memory usage
nvidia-smi -l 1

# Check LoomOS GPU metrics
curl http://localhost:8080/metrics | grep gpu_memory
Solutions:
# Implement gradient accumulation
from core.training import TrainingConfig  # assumed module path

training_config = TrainingConfig(
    batch_size=32,           # Reduced batch size
    gradient_accumulation_steps=4,  # Accumulate gradients
    mixed_precision=True,    # Use FP16
    memory_efficient_attention=True
)

# Enable memory optimization
from core.optimization import MemoryOptimizer

optimizer = MemoryOptimizer()
optimizer.enable_gradient_checkpointing()
optimizer.enable_cpu_offloading()
optimizer.set_memory_fraction(0.8)  # Use 80% of GPU memory

3. Network Connectivity Issues

Symptoms:
  • Worker nodes disconnecting
  • Slow data transfer between nodes
  • Training synchronization failures
Diagnosis:
# Test network connectivity between nodes
loomos cluster test-connectivity

# Check bandwidth and latency
iperf3 -s  # On one node
iperf3 -c <server_ip> -t 30  # On another node

# Monitor network metrics
curl http://localhost:8080/metrics | grep network
Solutions:
# Optimize network configuration
network:
  backend: "nccl"
  interface: "ib0"  # Use InfiniBand if available
  compression: true
  async_error_handling: true
  
  # Tune NCCL parameters
  nccl_settings:
    NCCL_IB_DISABLE: "0"
    NCCL_IB_HCA: "mlx5_0,mlx5_1"
    NCCL_SOCKET_IFNAME: "ib0"
    NCCL_DEBUG: "INFO"

Disaster Recovery & Backup

Backup Strategy

#!/bin/bash
# Automated backup script for LoomOS

# Backup LoomDB (PostgreSQL)
pg_dump -h localhost -U loomuser -d loomdb | \
  gzip > "loomdb_backup_$(date +%Y%m%d_%H%M%S).sql.gz"

# Backup Redis state
redis-cli --rdb redis_backup_$(date +%Y%m%d_%H%M%S).rdb

# Backup model artifacts and checkpoints
# (--delete mirrors S3 deletions into the local backup copy)
aws s3 sync s3://loomos-artifacts/ ./artifacts_backup/ --delete

# Backup configuration files
tar -czf config_backup_$(date +%Y%m%d_%H%M%S).tar.gz \
  /etc/loomos/ ~/.loomos/ ./config/

Recovery Procedures

# Restore LoomDB from the gzipped dump (psql cannot read gzip directly)
gunzip -c loomdb_backup_20240115_120000.sql.gz | psql -h localhost -U loomuser -d loomdb
# Restore Redis: replace the dump file and restart (redis-cli --rdb only exports)
sudo systemctl stop redis-server
sudo cp redis_backup_20240115_120000.rdb /var/lib/redis/dump.rdb
sudo systemctl start redis-server
# Restore model artifacts
aws s3 sync ./artifacts_backup/ s3://loomos-artifacts/

Next Steps & Advanced Topics

After completing the platform setup, explore these advanced topics:
  1. Core Modules: Deep dive into LoomDB, Scheduler, and Security
  2. RL System: Advanced reinforcement learning capabilities
  3. Nexus System: Distributed coordination and cluster management
  4. SDK & CLI: Programmatic access and automation
  5. Deployment Guide: Production deployment patterns
This platform overview provides comprehensive guidance for getting started with LoomOS. For production deployments, ensure you implement proper security measures, monitoring, and backup procedures as outlined in the respective sections.