Security: AUTOGIO/NeuralEngineOptimizer

docs/security.md

Security Features

NeuralEngineOptimizer includes comprehensive security features to protect your system and ensure safe operation. This document outlines those features and the recommended best practices.

Input Validation

All input to the Neural Engine is validated to prevent security issues:

Prompt Validation

The validate_prompt method in the NeuralEngineSecurity class performs the following checks:

  1. Type Checking: Ensures the prompt is a string.
  2. Length Validation: Checks that the prompt doesn't exceed the maximum length.
  3. Empty Check: Verifies the prompt isn't empty.
  4. Harmful Pattern Detection: Scans for potentially harmful patterns.
```python
from src.security import get_security_manager

security = get_security_manager()
is_valid, message, details = security.validate_prompt(prompt, user_id)

if is_valid:
    sanitized_prompt = details["sanitized_prompt"]
    # Process the sanitized prompt
else:
    # Handle validation failure
    print(f"Validation failed: {message}")
```

Parameter Validation

Generation parameters are also validated:

```python
params_valid, param_message = security.validate_generation_params(max_tokens, temperature)

if not params_valid:
    # Handle parameter validation failure
    print(f"Parameter validation failed: {param_message}")
```
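A minimal sketch of such parameter checks might look like the following; the bounds shown are illustrative assumptions, not NeuralEngineOptimizer's actual limits:

```python
def validate_generation_params(max_tokens, temperature,
                               token_limit=4096, temp_range=(0.0, 2.0)):
    # Validate generation parameters; the default bounds are illustrative
    if not isinstance(max_tokens, int) or max_tokens <= 0:
        return False, "max_tokens must be a positive integer"
    if max_tokens > token_limit:
        return False, f"max_tokens exceeds limit of {token_limit}"
    if not isinstance(temperature, (int, float)):
        return False, "temperature must be a number"
    if not temp_range[0] <= temperature <= temp_range[1]:
        return False, f"temperature must be between {temp_range[0]} and {temp_range[1]}"
    return True, "OK"
```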

Input Sanitization

The security module automatically sanitizes input to remove potentially harmful content:

  1. Null Byte Removal: Removes null bytes from the input.
  2. Whitespace Normalization: Normalizes whitespace characters.
  3. Control Character Removal: Removes control characters.
  4. Special Character Limiting: Limits consecutive special characters.
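The four steps above can be sketched with regular expressions as follows. This is an illustrative approximation of the technique, not the module's exact implementation:

```python
import re

def sanitize_input(text, max_special_run=3):
    # 1. Remove null bytes
    text = text.replace("\x00", "")
    # 2. Normalize whitespace to single spaces
    text = re.sub(r"\s+", " ", text).strip()
    # 3. Remove remaining control characters
    text = re.sub(r"[\x01-\x08\x0b\x0c\x0e-\x1f\x7f]", "", text)
    # 4. Collapse runs of a special character longer than max_special_run
    text = re.sub(r"([^\w\s])\1{%d,}" % max_special_run,
                  lambda m: m.group(1) * max_special_run, text)
    return text
```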

Sanitization is enabled by default and can be configured in the config.yaml file:

```yaml
security:
  input_sanitization: true
```

Harmful Pattern Detection

The security module checks for potentially harmful patterns in the input:

  1. System Commands: Detects system command patterns.
  2. Command Separators: Detects command separators like ;, &, |, and `.
  3. Variable Substitution: Detects variable substitution patterns.
  4. Script Tags: Detects HTML script tags.
  5. JavaScript Protocol: Detects JavaScript protocol in URLs.
  6. Excessive Special Characters: Detects a high ratio of special characters.
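A minimal sketch of such pattern scanning is shown below; the patterns and the special-character threshold are illustrative assumptions, not the module's actual rules:

```python
import re

# Illustrative patterns; the actual list in the security module may differ
HARMFUL_PATTERNS = [
    (r"(?:^|\s)(?:rm|sudo|chmod|curl|wget)\s", "system command"),
    (r"[;&|`]", "command separator"),
    (r"\$\{?\w+\}?", "variable substitution"),
    (r"<script\b", "script tag"),
    (r"javascript:", "JavaScript protocol"),
]

def detect_harmful_patterns(text, special_ratio_limit=0.5):
    warnings = []
    for pattern, label in HARMFUL_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            warnings.append(f"Detected {label}")
    # Flag a high ratio of special (non-alphanumeric, non-space) characters
    if text:
        special = sum(1 for c in text if not c.isalnum() and not c.isspace())
        if special / len(text) > special_ratio_limit:
            warnings.append("Excessive special characters")
    return warnings
```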

When a harmful pattern is detected, a warning is logged, but the request is not automatically blocked. You can implement custom handling based on the warnings:

```python
is_valid, message, details = security.validate_prompt(prompt, user_id)

if is_valid and details["warnings"]:
    # Handle warnings
    for warning in details["warnings"]:
        print(f"Warning: {warning}")
```

Rate Limiting

The security module includes rate limiting to prevent abuse:

  1. Per-User Tracking: Tracks requests per user.
  2. Configurable Limits: Allows setting the maximum requests per minute.
  3. Automatic Cleanup: Removes old rate limit data.

Rate limiting is enabled by default and can be configured in the config.yaml file:

```yaml
security:
  rate_limiting:
    enabled: true
    max_requests_per_minute: 60
```
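A per-user sliding-window limiter along these lines can be sketched as follows; this illustrates the technique, not the module's internal code:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Per-user sliding-window rate limiter with automatic cleanup."""

    def __init__(self, max_requests_per_minute=60):
        self.limit = max_requests_per_minute
        self.requests = defaultdict(deque)  # user_id -> request timestamps

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        window = self.requests[user_id]
        # Drop timestamps older than 60 seconds (automatic cleanup)
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= self.limit:
            return False
        window.append(now)
        return True
```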

Security Logging

All security events are logged to provide an audit trail:

```python
security.log_security_event("custom_event", {
    "user_id": user_id,
    "details": "Custom security event"
})
```

The security module automatically logs the following events:

  1. Config Load Errors: Errors loading the security configuration.
  2. Harmful Pattern Detection: Detection of harmful patterns in input.
  3. Rate Limit Exceeded: When a user exceeds the rate limit.

Security Statistics

You can get security statistics to monitor the system:

```python
stats = security.get_security_stats()

print(f"Active users: {stats['active_users']}")
print(f"Blocked IPs: {stats['blocked_ips']}")
print(f"Rate limiting enabled: {stats['rate_limit_enabled']}")
print(f"Max requests per minute: {stats['max_requests_per_minute']}")
print(f"Input sanitization enabled: {stats['input_sanitization_enabled']}")
print(f"Max prompt length: {stats['max_prompt_length']}")
```

Request Tracking

Each request is assigned a unique hash for tracking:

```python
request_hash = security.generate_request_hash(prompt, user_id, timestamp)
```

This hash can be used to correlate requests across logs and track specific requests for troubleshooting.
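One common way to implement such a hash is SHA-256 over the request fields; this sketch is an assumption about the scheme, not necessarily the module's exact implementation:

```python
import hashlib
import time

def generate_request_hash(prompt, user_id, timestamp=None):
    # Deterministic hash over the request fields for log correlation
    timestamp = time.time() if timestamp is None else timestamp
    payload = f"{user_id}:{timestamp}:{prompt}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()
```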

Security Best Practices

1. Keep the Model Local

One of the key security benefits of NeuralEngineOptimizer is that it runs locally on your machine. This ensures that your data never leaves your system.

2. Use User IDs

Always provide a user ID when making requests to enable proper rate limiting and tracking:

```python
result = neural_engine.neural_engine_generate(
    prompt="Your prompt",
    user_id="unique_user_id"
)
```

3. Implement Additional Authentication

For production use, implement additional authentication layers:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/ai/generate', methods=['POST'])
def generate_text():
    # Check authentication (authenticate_request is your own helper)
    if not authenticate_request(request):
        return jsonify({"error": "Unauthorized", "status": "error"}), 401

    # Process request
    # ...
```

4. Regular Security Audits

Regularly review the security logs to identify potential issues:

```shell
# View security events
grep "security_" logs/neural_engine.log
```

5. Keep Dependencies Updated

Regularly update dependencies to ensure security patches are applied:

```shell
pip install --upgrade -r requirements.txt
```

6. Secure API Endpoints

When exposing the API, use HTTPS and proper authentication:

```python
from flask import Flask
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
auth = HTTPBasicAuth()

@auth.verify_password
def verify_password(username, password):
    # Verify username and password
    # ...

@app.route('/ai/generate', methods=['POST'])
@auth.login_required
def generate_text():
    # Process request
    # ...
```

7. Monitor System Resources

Monitor system resources to prevent denial-of-service attacks, for example with the third-party psutil library:

```python
import psutil

def check_system_resources(cpu_limit=90.0, mem_limit=90.0, disk_limit=95.0):
    # Return True when CPU, memory, and disk usage are all within limits
    if psutil.cpu_percent(interval=0.1) >= cpu_limit:
        return False
    if psutil.virtual_memory().percent >= mem_limit:
        return False
    if psutil.disk_usage("/").percent >= disk_limit:
        return False
    return True
```

8. Implement Request Timeouts

Implement timeouts for requests to prevent long-running operations (note that signal.SIGALRM works only on Unix, and only in the main thread):

```python
import signal

def timeout_handler(signum, frame):
    raise TimeoutError("Request timed out")  # built-in TimeoutError

def generate_with_timeout(prompt, timeout=5):
    # Deliver SIGALRM after `timeout` seconds
    signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(timeout)

    try:
        return neural_engine.neural_engine_generate(prompt)
    except TimeoutError:
        return {"error": "Request timed out", "status": "error"}
    finally:
        signal.alarm(0)  # always cancel the pending alarm
```
