NeuralEngineOptimizer includes comprehensive security features to protect your system and ensure safe operation. This document outlines the security features and best practices.
All input to the Neural Engine is validated to prevent security issues:
The validate_prompt method in the NeuralEngineSecurity class performs the following checks:
- Type Checking: Ensures the prompt is a string.
- Length Validation: Checks that the prompt doesn't exceed the maximum length.
- Empty Check: Verifies the prompt isn't empty.
- Harmful Pattern Detection: Scans for potentially harmful patterns.
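The checks above could be sketched as follows. This is a simplified, hypothetical illustration, not the library's actual implementation; `MAX_PROMPT_LENGTH` and the pattern list are illustrative assumptions (the real values come from the security configuration):

```python
import re

MAX_PROMPT_LENGTH = 4096  # assumed default; the real value comes from config.yaml
HARMFUL_PATTERNS = [r"<script", r"javascript:", r"\$\{"]  # illustrative subset

def validate_prompt(prompt, user_id):
    """Simplified sketch of the validation steps described above."""
    # Type check: the prompt must be a string
    if not isinstance(prompt, str):
        return False, "Prompt must be a string", {}
    # Empty check
    if not prompt.strip():
        return False, "Prompt must not be empty", {}
    # Length check
    if len(prompt) > MAX_PROMPT_LENGTH:
        return False, "Prompt exceeds maximum length", {}
    # Harmful-pattern scan: collect warnings rather than blocking outright
    warnings = [p for p in HARMFUL_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
    return True, "OK", {"sanitized_prompt": prompt.strip(), "warnings": warnings}
```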
```python
from src.security import get_security_manager

security = get_security_manager()
is_valid, message, details = security.validate_prompt(prompt, user_id)
if is_valid:
    sanitized_prompt = details["sanitized_prompt"]
    # Process the sanitized prompt
else:
    # Handle validation failure
    print(f"Validation failed: {message}")
```

Generation parameters are also validated:
```python
params_valid, param_message = security.validate_generation_params(max_tokens, temperature)
if not params_valid:
    # Handle parameter validation failure
    print(f"Parameter validation failed: {param_message}")
```

The security module automatically sanitizes input to remove potentially harmful content:
- Null Byte Removal: Removes null bytes from the input.
- Whitespace Normalization: Normalizes whitespace characters.
- Control Character Removal: Removes control characters.
- Special Character Limiting: Limits consecutive special characters.
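The steps above could be sketched like this. It is a hypothetical, simplified version, not the module's actual code; the exact character classes and run-length cap are illustrative assumptions:

```python
import re

def sanitize_input(text):
    """Hypothetical sketch of the sanitization steps described above."""
    text = text.replace("\x00", "")                               # null byte removal
    text = re.sub(r"[\x01-\x08\x0b\x0c\x0e-\x1f\x7f]", "", text)  # control chars (keeps \n, \t)
    text = re.sub(r"[ \t]+", " ", text)                           # whitespace normalization
    text = re.sub(r"([^\w\s])\1{3,}", r"\1\1\1", text)            # cap runs of a special char at 3
    return text.strip()
```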
Sanitization is enabled by default and can be configured in the config.yaml file:
```yaml
security:
  input_sanitization: true
```

The security module checks for potentially harmful patterns in the input:
- System Commands: Detects system command patterns.
- Command Separators: Detects command separators such as `;`, `&`, `|`, and backticks.
- Variable Substitution: Detects variable substitution patterns.
- Script Tags: Detects HTML script tags.
- JavaScript Protocol: Detects JavaScript protocol in URLs.
- Excessive Special Characters: Detects a high ratio of special characters.
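A detector for these categories might look like the sketch below. The regular expressions, pattern names, and the ratio threshold are illustrative assumptions; the module's real pattern list may differ:

```python
import re

# Illustrative patterns only; the real module's list may differ
HARMFUL_PATTERNS = {
    "system_command": re.compile(r"\b(rm\s+-rf|sudo|chmod|curl|wget)\b", re.IGNORECASE),
    "command_separator": re.compile(r"[;&|`]"),
    "variable_substitution": re.compile(r"\$\{[^}]*\}"),
    "script_tag": re.compile(r"<\s*script\b", re.IGNORECASE),
    "javascript_protocol": re.compile(r"javascript\s*:", re.IGNORECASE),
}

def detect_harmful_patterns(text, special_ratio_threshold=0.5):
    warnings = [name for name, pat in HARMFUL_PATTERNS.items() if pat.search(text)]
    # Excessive special characters: flag a high ratio of non-alphanumeric characters
    specials = sum(1 for c in text if not c.isalnum() and not c.isspace())
    if text and specials / len(text) > special_ratio_threshold:
        warnings.append("excessive_special_characters")
    return warnings
```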
When a harmful pattern is detected, a warning is logged, but the request is not automatically blocked. You can implement custom handling based on the warnings:
```python
is_valid, message, details = security.validate_prompt(prompt, user_id)
if is_valid and details["warnings"]:
    # Handle warnings
    for warning in details["warnings"]:
        print(f"Warning: {warning}")
```

The security module includes rate limiting to prevent abuse:
- Per-User Tracking: Tracks requests per user.
- Configurable Limits: Allows setting the maximum requests per minute.
- Automatic Cleanup: Removes old rate limit data.
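The behavior above can be sketched as a per-user sliding window. This is a hypothetical illustration of the technique, not the module's actual implementation:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Hypothetical sliding-window limiter matching the behavior described above."""

    def __init__(self, max_requests_per_minute=60):
        self.max_requests = max_requests_per_minute
        self.requests = defaultdict(deque)  # user_id -> request timestamps

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        window = self.requests[user_id]
        # Automatic cleanup: drop entries older than 60 seconds
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= self.max_requests:
            return False
        window.append(now)
        return True
```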
Rate limiting is enabled by default and can be configured in the config.yaml file:
```yaml
security:
  rate_limiting:
    enabled: true
    max_requests_per_minute: 60
```

All security events are logged to provide an audit trail:
```python
security.log_security_event("custom_event", {
    "user_id": user_id,
    "details": "Custom security event"
})
```

The security module automatically logs the following events:
- Config Load Errors: Errors loading the security configuration.
- Harmful Pattern Detection: Detection of harmful patterns in input.
- Rate Limit Exceeded: When a user exceeds the rate limit.
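Internally, event logging might resemble the sketch below. The logger name, the JSON-lines format, and the `security_` prefix are assumptions for illustration, not the module's confirmed behavior:

```python
import json
import logging
import time

logger = logging.getLogger("neural_engine.security")

def log_security_event(event_type, details):
    # Hypothetical sketch: emit one JSON line per event, prefixed "security_"
    record = {"event": f"security_{event_type}", "timestamp": time.time(), **details}
    logger.warning(json.dumps(record))
```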
You can get security statistics to monitor the system:
```python
stats = security.get_security_stats()
print(f"Active users: {stats['active_users']}")
print(f"Blocked IPs: {stats['blocked_ips']}")
print(f"Rate limiting enabled: {stats['rate_limit_enabled']}")
print(f"Max requests per minute: {stats['max_requests_per_minute']}")
print(f"Input sanitization enabled: {stats['input_sanitization_enabled']}")
print(f"Max prompt length: {stats['max_prompt_length']}")
```

Each request is assigned a unique hash for tracking:

```python
request_hash = security.generate_request_hash(prompt, user_id, timestamp)
```

This hash can be used to correlate requests across logs and to track specific requests for troubleshooting.
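One plausible way to implement such a hash is a deterministic digest over the request fields. This is a hypothetical sketch; the actual hash construction and truncation length are assumptions:

```python
import hashlib

def generate_request_hash(prompt, user_id, timestamp):
    # Hypothetical sketch: deterministic SHA-256 digest over the request fields
    payload = f"{user_id}:{timestamp}:{prompt}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]  # short, log-friendly identifier
```

Because the digest is deterministic, the same request fields always yield the same hash, which is what makes cross-log correlation possible.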
One of the key security benefits of NeuralEngineOptimizer is that it runs locally on your machine. This ensures that your data never leaves your system.
Always provide a user ID when making requests to enable proper rate limiting and tracking:
```python
result = neural_engine.neural_engine_generate(
    prompt="Your prompt",
    user_id="unique_user_id"
)
```

For production use, implement additional authentication layers:
```python
@app.route('/ai/generate', methods=['POST'])
def generate_text():
    # Check authentication
    if not authenticate_request(request):
        return jsonify({"error": "Unauthorized", "status": "error"}), 401
    # Process request
    # ...
```

Regularly review the security logs to identify potential issues:
```bash
# View security events
grep "security_" logs/neural_engine.log
```

Regularly update dependencies to ensure security patches are applied:

```bash
pip install --upgrade -r requirements.txt
```

When exposing the API, use HTTPS and proper authentication:
```python
from flask import Flask
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
auth = HTTPBasicAuth()

@auth.verify_password
def verify_password(username, password):
    # Verify username and password
    ...

@app.route('/ai/generate', methods=['POST'])
@auth.login_required
def generate_text():
    # Process request
    ...
```

Monitor system resources to prevent denial-of-service attacks:
```python
import psutil  # third-party: pip install psutil

def check_system_resources(max_cpu=90.0, max_mem=90.0, max_disk=95.0):
    # Check CPU, memory, and disk usage against percentage thresholds
    return (psutil.cpu_percent(interval=0.1) <= max_cpu
            and psutil.virtual_memory().percent <= max_mem
            and psutil.disk_usage("/").percent <= max_disk)
```

Implement timeouts for requests to prevent long-running operations:
```python
import signal

class TimeoutError(Exception):
    pass

def timeout_handler(signum, frame):
    raise TimeoutError("Request timed out")

def generate_with_timeout(prompt, timeout=5):
    # Set timeout (SIGALRM is POSIX-only and must be used from the main thread)
    signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(timeout)
    try:
        result = neural_engine.neural_engine_generate(prompt)
        signal.alarm(0)  # Reset timeout
        return result
    except TimeoutError:
        return {"error": "Request timed out", "status": "error"}
```