SemanticAPITestAutomation is an innovative, multi-purpose .NET test automation project that demonstrates traditional API testing, mock-based API unit testing, and AI-powered API test automation built on Microsoft Semantic Kernel with OpenAI integration.
- Traditional API Testing: Complete CRUD operations testing with comprehensive validation
- API Mocking Unit Tests: Isolated unit testing using Moq for dependency control
- AI-Powered Test Automation: Intelligent test strategy determination and execution planning
- Repository Pattern: Clean separation of API concerns through `IFakeStoreApiRepository`
- Dependency Injection: Comprehensive DI container setup with `ServiceCollectionExtensions` (see the sketch after this list)
- Test Base Pattern: Centralized test infrastructure through the `TestBase` abstract class
- Service Layer: AI-powered API test automation encapsulated in a dedicated service layer
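To make the pattern concrete, a hedged sketch of the repository abstraction and its DI registration might look like this (member names, the `AddApiTestServices` helper, and the concrete class names are illustrative assumptions, not the project's exact code):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// Illustrative members only; the project's exact signatures may differ.
public interface IFakeStoreApiRepository
{
    Task<IReadOnlyList<Product>> GetAllProductsAsync(CancellationToken cancellationToken = default);
    Task<Product?> GetProductByIdAsync(int id, CancellationToken cancellationToken = default);
}

public static class ServiceCollectionExtensions
{
    // Hypothetical helper: registers the RestSharp-backed repository and the AI service
    // so test fixtures can resolve them through the DI container built by TestBase.
    public static IServiceCollection AddApiTestServices(this IServiceCollection services) =>
        services
            .AddScoped<IFakeStoreApiRepository, FakeStoreApiRepository>()
            .AddScoped<IAIApiTestAutomationService, AIApiTestAutomationService>();
}
```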
- Single Responsibility: Each class has a focused purpose (Repository for API calls, Service for AI analysis)
- Open/Closed: Extensible through interfaces and dependency injection
- Liskov Substitution: Interface-based design allows seamless implementation swapping
- Interface Segregation: Focused interfaces such as `IAIApiTestAutomationService` (sketched below)
- Dependency Inversion: High-level modules depend on abstractions, not concretions
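Interface segregation and dependency inversion show up in the AI service contract; a minimal sketch, assuming the `DetermineTestExecutionStrategyAsync` signature used later in this document:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Focused, single-purpose contract: callers depend on this abstraction rather than a
// concrete Semantic Kernel implementation. The cancellation parameter is an assumption.
public interface IAIApiTestAutomationService
{
    Task<string> DetermineTestExecutionStrategyAsync(
        string naturalLanguageInstruction,
        CancellationToken cancellationToken = default);
}
```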
- .NET 9.0 with latest C# language features
- NUnit 4.2.2 for test framework
- Microsoft.Extensions.* for DI, Logging, and Hosting
- RestSharp 112.1.0 for HTTP client operations
- Fake Store API as the target API for testing
- Moq 4.20.72 for unit test mocking and dependency isolation
- Microsoft Semantic Kernel 1.59.0 for AI orchestration
- OpenAI GPT-4 integration for intelligent test planning
- KernelFunction attributes for AI-discoverable test methods
Purpose: Comprehensive traditional API testing demonstrating standard test automation practices
Key Features:
- Full CRUD operation coverage (Create, Read, Update, Delete)
- Positive and negative test scenarios
- Comprehensive validation and assertion patterns
- Structured logging with visual test execution tracking
- Error handling and edge case coverage
API Coverage:
- Products: Complete product lifecycle testing
- Carts: Shopping cart functionality validation
- Users: User management operations testing
- Authentication: JWT token-based login verification
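As a hedged illustration of the style described above, a positive-path product test might look like this (it assumes the `TestBase` helpers and repository members sketched elsewhere in this document; names are representative, not the project's exact code):

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using NUnit.Framework;

[TestFixture]
public class ProductApiTests : TestBase
{
    // Hypothetical registration hook inherited from TestBase (see the DI sketch earlier).
    protected override void ConfigureServices(IServiceCollection services) =>
        services.AddApiTestServices();

    [Test]
    public async Task GetProductById_WithValidId_ShouldReturnProduct()
    {
        // Arrange: resolve the repository from the test's DI scope.
        var repository = GetService<IFakeStoreApiRepository>();

        // Act: call the Fake Store API through the repository abstraction.
        Product? product = await repository.GetProductByIdAsync(1);

        // Assert: positive-path validation with NUnit constraint assertions.
        Assert.That(product, Is.Not.Null);
        Assert.That(product!.Id, Is.EqualTo(1));
        Assert.That(product.Title, Is.Not.Empty);
    }
}
```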
Purpose: Isolated unit testing using mocking frameworks to test business logic without external dependencies
Key Features:
- Dependency Isolation: Complete separation from external API dependencies
- Moq Framework Integration: Professional-grade mocking using industry-standard patterns
- Comprehensive Mock Scenarios: Positive paths, error conditions, and complex interaction sequences
- Verification Testing: Ensures methods are called with correct parameters and frequencies
- Exception Testing: Validates proper error handling when dependencies fail
Mock Testing Coverage:
- Product Operations: GetAllProducts, GetProductById (valid/invalid), CreateProduct
- User Management: GetAllUsers with mocked user collections
- Authentication: Login scenarios with both success and failure paths
- Complex Scenarios: Multi-step workflows combining authentication and product operations
- Error Handling: Repository failure scenarios and exception propagation
Mocking Patterns:
- Setup/Verify Pattern: Configure mock behavior and verify interaction
- Parameter Matching: Flexible parameter validation using `It.Is<T>()` and `It.IsAny<T>()`
- Exception Simulation: Testing error paths by configuring mocks to throw exceptions (illustrated in the sketch after this list)
- Sequence Testing: Validating correct order of operations in complex workflows
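The exception-simulation and verification patterns above might look like this with Moq (a hedged sketch assuming the repository signature from the earlier sketch):

```csharp
using System.Net.Http;
using System.Threading;
using Moq;
using NUnit.Framework;

[TestFixture]
public class FakeStoreRepositoryMockTests
{
    [Test]
    public void GetProductById_WhenRepositoryFails_ShouldPropagateException()
    {
        // Setup: configure the mock to simulate a repository failure for any id.
        var repositoryMock = new Mock<IFakeStoreApiRepository>();
        repositoryMock
            .Setup(r => r.GetProductByIdAsync(It.IsAny<int>(), It.IsAny<CancellationToken>()))
            .ThrowsAsync(new HttpRequestException("Fake Store API unavailable"));

        // Act + Assert: the simulated failure surfaces to the caller (exception testing).
        Assert.ThrowsAsync<HttpRequestException>(
            () => repositoryMock.Object.GetProductByIdAsync(42));

        // Verify: the method was called exactly once, with the expected id (parameter matching).
        repositoryMock.Verify(
            r => r.GetProductByIdAsync(It.Is<int>(id => id == 42), It.IsAny<CancellationToken>()),
            Times.Once());
    }
}
```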
Purpose: Intelligent test strategy determination using natural language instructions for API test automation
Key Features:
- AI-powered analysis of user requirements
- Dynamic test execution strategy generation
- Structured prompt engineering with comprehensive test function mapping
- Robust error handling and validation
- Extensive logging with visual separators
AI-Powered Capabilities:
- Analyzes natural language instructions for API testing needs
- Maps requirements to specific API test functions
- Provides execution sequencing recommendations
- Includes risk assessment and mitigation strategies
API Coverage (based on Fake Store API):
- Products: CRUD operations (Create, Read, Update, Delete)
- Carts: Shopping cart management
- Users: User management operations
- Authentication: JWT token-based login
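Under the hood, the Semantic Kernel wiring for such a service might look roughly like this (a hedged sketch assuming the standard `Microsoft.SemanticKernel` builder API; the plugin name, API key source, and fixture instance are illustrative):

```csharp
using System;
using Microsoft.SemanticKernel;

// Hedged sketch: build a kernel with OpenAI chat completion and expose the
// [KernelFunction]-decorated test methods as a plugin the AI can discover.
string openAiApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    ?? throw new InvalidOperationException("OPENAI_API_KEY is not set.");

IKernelBuilder builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(modelId: "gpt-4", apiKey: openAiApiKey);
Kernel kernel = builder.Build();

// "ApiTests" is an illustrative plugin name; ProductApiTests is the hypothetical fixture
// sketched earlier whose methods carry [KernelFunction] attributes.
kernel.ImportPluginFromObject(new ProductApiTests(), "ApiTests");
```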
Design Strengths:
- Consistent error handling patterns
- Async/await throughout
- Immutable return types (`IReadOnlyList<T>`)
- Null-safe operations for optional returns
Modern C# Features:
- Record types for immutable data structures
- Primary constructors for concise syntax
- Readonly collections for thread safety
```csharp
public record Product(int Id, string Title, decimal Price, string Description, string Category, string Image);
```
Dependency Injection Integration:
- Service provider lifecycle management
- Scoped service resolution per test
- Centralized logging configuration
- Extensible service configuration through abstract methods
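A hedged sketch of how `TestBase` might implement this lifecycle (the `GetService` and `ConfigureServices` members are assumptions reused by the other sketches in this document):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using NUnit.Framework;

public abstract class TestBase
{
    private ServiceProvider _serviceProvider = null!;
    private IServiceScope _scope = null!;

    [OneTimeSetUp]
    public void BuildServiceProvider()
    {
        var services = new ServiceCollection();
        services.AddLogging(logging => logging.AddConsole());   // centralized logging configuration
        ConfigureServices(services);                            // extension point for derived fixtures
        _serviceProvider = services.BuildServiceProvider();
    }

    [SetUp]
    public void CreateScope() => _scope = _serviceProvider.CreateScope();   // scoped resolution per test

    [TearDown]
    public void DisposeScope() => _scope.Dispose();

    [OneTimeTearDown]
    public void DisposeProvider() => _serviceProvider.Dispose();

    protected T GetService<T>() where T : notnull => _scope.ServiceProvider.GetRequiredService<T>();

    // Derived test fixtures register the services they need (e.g. AddApiTestServices()).
    protected abstract void ConfigureServices(IServiceCollection services);
}
```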
Tests are decorated with [KernelFunction] attributes, making them discoverable by the AI for automated test planning:
```csharp
[Test, KernelFunction, Description("Retrieve a list of all available products")]
public async Task GetAllProducts_ShouldReturnProducts()
```
The AI-powered test automation service can:
- Parse natural language requirements for API testing
- Map to specific API test functions
- Determine optimal execution sequences
- Provide risk assessments for test scenarios
- Generate comprehensive API test strategies
Sophisticated prompt structure includes:
- Available test function catalog
- Analysis framework (Requirements → Selection → Sequence)
- Structured response format
- Risk assessment guidelines
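To make the structure above concrete, the prompt might be assembled along these lines (a hedged, illustrative skeleton; the project's real template, wording, and variables will differ):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative prompt skeleton only; the project's actual template will differ.
public static class StrategyPromptBuilder
{
    public static string Build(string userInstruction, IEnumerable<string> availableTestFunctions) => $"""
        You are an API test planning assistant for the Fake Store API.

        Available test functions:
        {string.Join(Environment.NewLine, availableTestFunctions.Select(f => $"- {f}"))}

        Analysis framework: Requirements -> Selection -> Sequence.

        User instruction:
        {userInstruction}

        Respond with:
        1. Selected test functions (by exact name)
        2. Recommended execution order
        3. Risk assessment and mitigation notes
        """;
}
```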
The framework implements a sophisticated kernel function invocation tracking service to provide concrete evidence that AI reasoning translates to actual test execution. This goes beyond text-based strategy validation to verify that the AI actually calls the correct test functions based on natural language instructions.
```csharp
public interface IKernelFunctionTrackingService
{
    void RecordInvocation(string functionName, Dictionary<string, object>? parameters = null);
    IReadOnlyList<FunctionInvocation> GetInvocations();
    bool WasInvoked(string functionName);
    InvocationValidationResult ValidateExpectedInvocations(IEnumerable<string> expectedFunctions);
}
```
- AI Strategy Generation - AI analyzes natural language instructions
- Function Auto-Invocation - AI automatically calls relevant `[KernelFunction]` methods (see the sketch after this list)
- Tracking Recording - Each function call is captured with timestamp and metadata
- Expected vs Actual Comparison - Validates planned strategy matches executed functions
- Concrete Validation Results - Provides detailed analysis of AI decision accuracy
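As a hedged sketch, the auto-invocation step typically relies on the OpenAI connector's automatic tool calling (assuming a kernel wired up as in the earlier Semantic Kernel sketch; the helper name here is illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

public static class AutoInvocationExample
{
    // Hedged sketch: automatic tool calling lets the model invoke [KernelFunction]-decorated
    // test methods while it reasons about the instruction; the tracking service can record
    // each call as it happens.
    public static async Task<string?> RunAsync(Kernel kernel, string instruction)
    {
        var settings = new OpenAIPromptExecutionSettings
        {
            ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
        };

        FunctionResult result = await kernel.InvokePromptAsync(instruction, new KernelArguments(settings));
        return result.GetValue<string>();   // the AI's textual strategy/summary
    }
}
```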
```csharp
// AI determines test strategy from natural language
string strategy = await aiService.DetermineTestExecutionStrategyAsync(
    "Test all product operations including error scenarios");

// Validate that AI actually invoked expected functions
var expectedFunctions = new[] {
    "GetAllProducts_ShouldReturnProducts",
    "GetProductById_WithValidId_ShouldReturnProduct",
    "GetProductById_WithInvalidId_ShouldReturnNull",
    "CreateProduct_ShouldReturnCreatedProduct"
};
InvocationValidationResult result = trackingService.ValidateExpectedInvocations(expectedFunctions);

// Concrete validation of AI behavior
Assert.That(result.IsValid, Is.True);
Assert.That(result.ExpectedButNotInvoked.Count, Is.EqualTo(0));
Assert.That(result.SuccessfullyInvoked.Count, Is.EqualTo(4));
```
- Expected vs Actual Validation - Confirms AI calls all required functions
- Gap Detection - Identifies missing or unexpected function invocations
- Invocation Analytics - Detailed metrics on AI decision patterns
- Temporal Analysis - Execution timing and sequence validation
- Thread-Safe Recording - Concurrent tracking with `ConcurrentBag<T>`
- Comprehensive Reporting - Detailed validation results with actionable insights (a hedged implementation sketch follows this list)
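A hedged sketch of how the tracking service could back the interface, the assertions, and the features above, assuming `FunctionInvocation` and `InvocationValidationResult` are simple records exposing only the members referenced in this document (the project's actual shapes may differ):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Assumed shapes: only the members referenced in this document are modeled here.
public sealed record FunctionInvocation(
    string FunctionName,
    DateTimeOffset Timestamp,
    IReadOnlyDictionary<string, object>? Parameters);

public sealed record InvocationValidationResult(
    bool IsValid,
    IReadOnlyList<string> SuccessfullyInvoked,
    IReadOnlyList<string> ExpectedButNotInvoked);

public sealed class KernelFunctionTrackingService : IKernelFunctionTrackingService
{
    // Thread-safe recording: [KernelFunction] methods may be invoked concurrently.
    private readonly ConcurrentBag<FunctionInvocation> _invocations = new();

    public void RecordInvocation(string functionName, Dictionary<string, object>? parameters = null) =>
        _invocations.Add(new FunctionInvocation(functionName, DateTimeOffset.UtcNow, parameters));

    public IReadOnlyList<FunctionInvocation> GetInvocations() =>
        _invocations.OrderBy(i => i.Timestamp).ToList();   // temporal ordering for sequence analysis

    public bool WasInvoked(string functionName) =>
        _invocations.Any(i => i.FunctionName == functionName);

    public InvocationValidationResult ValidateExpectedInvocations(IEnumerable<string> expectedFunctions)
    {
        var expected = expectedFunctions.Distinct().ToList();
        var invoked = _invocations.Select(i => i.FunctionName).Distinct().ToList();

        var successfullyInvoked = expected.Where(invoked.Contains).ToList();
        var expectedButNotInvoked = expected.Except(invoked).ToList();

        // Valid only when every expected function was actually called by the AI.
        return new InvocationValidationResult(
            expectedButNotInvoked.Count == 0,
            successfullyInvoked,
            expectedButNotInvoked);
    }
}
```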
- Comprehensive logging with structured, visual output
- Async/await patterns throughout
- Modern C# features (records, nullable reference types, latest language version)
- Robust error handling with detailed exception information
- Clean separation of concerns
- Extensive test coverage including edge cases
- Positive path testing: Valid scenarios with expected outcomes
- Negative path testing: Invalid inputs and error conditions
- Edge case coverage: Boundary conditions and error handling
- CRUD operation validation: Complete lifecycle testing
- Dependency Isolation: Testing business logic without external API dependencies
- Mock Verification: Ensuring correct method calls and parameter validation
- Error Simulation: Testing error handling paths through controlled mock failures
- Complex Workflow Testing: Multi-step scenarios with orchestrated mock interactions
- Performance Isolation: Fast, reliable tests independent of network conditions
- Natural language processing: Testing AI's ability to understand requirements
- Test strategy generation: Validating AI-generated test plans
- Function mapping: Ensuring AI correctly maps requirements to test functions
- Error handling: AI service resilience testing (empty inputs, cancellation)
- Function invocation tracking: Concrete validation that AI calls expected test functions
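For illustration, the resilience cases in the last two items might be exercised along these lines (a hedged sketch; whether empty input is rejected with an `ArgumentException` or handled differently depends on the service's validation logic):

```csharp
using System;
using System.Threading;
using Microsoft.Extensions.DependencyInjection;
using NUnit.Framework;

[TestFixture]
public class AiServiceResilienceTests : TestBase
{
    protected override void ConfigureServices(IServiceCollection services) =>
        services.AddApiTestServices();   // hypothetical registration helper from the DI sketch

    [Test]
    public void DetermineStrategy_WithEmptyInstruction_ShouldBeRejected()
    {
        var aiService = GetService<IAIApiTestAutomationService>();

        // Assumption: input validation rejects empty instructions with an ArgumentException.
        Assert.ThrowsAsync<ArgumentException>(
            () => aiService.DetermineTestExecutionStrategyAsync(string.Empty));
    }

    [Test]
    public void DetermineStrategy_WhenCancelled_ShouldSurfaceCancellation()
    {
        var aiService = GetService<IAIApiTestAutomationService>();
        using var cts = new CancellationTokenSource();
        cts.Cancel();   // cancel up front to exercise the cancellation path

        // CatchAsync accepts derived types such as TaskCanceledException.
        Assert.CatchAsync<OperationCanceledException>(
            () => aiService.DetermineTestExecutionStrategyAsync("Test all product operations", cts.Token));
    }
}
```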
- Visual separators with emoji indicators for test phases
- Phase-based logging (Arrange → Act → Assert)
- Performance metrics (response times, data sizes)
- Structured error reporting with stack traces
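A hedged illustration of this logging style (the helper names and separator format are assumptions, not the project's actual output):

```csharp
using Microsoft.Extensions.Logging;

// Hypothetical helpers illustrating phase-based, visually separated structured logging.
public static class TestLogging
{
    public static void LogPhase(this ILogger logger, string phase) =>
        logger.LogInformation("🔹======== {Phase} ========🔹", phase);

    public static void LogResponseMetrics(this ILogger logger, long elapsedMs, int payloadBytes) =>
        logger.LogInformation("⏱ Response time: {ElapsedMs} ms | 📦 Payload: {PayloadBytes} bytes",
            elapsedMs, payloadBytes);
}
```

A test would then call `logger.LogPhase("Arrange")`, `logger.LogPhase("Act")`, and `logger.LogPhase("Assert")` at each phase boundary.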
- Mock setup and configuration tracking
- Verification result logging
- Exception simulation and validation
- Complex scenario step-by-step execution tracking
- Input validation for test requirements
- Prompt construction for AI-driven test planning
- AI analysis execution tracking for test automation
- Response validation and test strategy metrics
- Function invocation tracking with timestamps and validation results
- Comprehensive API coverage across all major endpoints
- Reliable test automation with robust error handling
- Maintainable test architecture through clean design patterns
- Production-ready testing with enterprise-grade practices
- Fast execution: Tests run independently of external services
- Reliable isolation: Consistent results regardless of external API state
- Comprehensive error testing: Controlled simulation of failure scenarios
- Cost-effective: No external API calls required during development/CI
- Natural language test planning: Non-technical stakeholders can describe API testing needs
- Intelligent API test selection: AI determines optimal test coverage based on requirements
- Dynamic strategy adaptation: AI can adjust API testing approach based on changing requirements
- Knowledge transfer: AI captures and codifies API testing expertise
- Concrete AI validation: Function tracking ensures AI reasoning leads to actual test execution
This project is a sophisticated, multi-purpose demonstration of traditional API testing methodologies, professional mock-based unit testing, and cutting-edge AI-powered test automation. The architecture reflects sound software engineering principles while providing both a foundation for reliable API testing and a showcase of intelligent, adaptive test automation that can understand and respond to natural language requirements.
The combination of robust traditional testing, comprehensive mock-based unit tests, innovative AI-powered automation, structured logging, modern C# features, clean architectural separation, and concrete AI validation through function invocation tracking makes this a standout example of a next-generation, multi-purpose test automation framework.