feat: enhance documentation and add project description

Praveen Thirumurugan
2025-10-18 01:25:39 +05:30
parent 6839ae6f6a
commit 55e834161a
2 changed files with 104 additions and 13 deletions

README.md

@@ -1,8 +1,8 @@
# LLM Proxy Server
-A simple OpenAI-compatible proxy server that forwards requests to any LLM endpoint with logging and cost tracking. The logs are stored in a PostgreSQL database. It's not possible to ignore the logging at the moment.
+A lightweight, production-ready OpenAI-compatible proxy server that seamlessly forwards LLM API requests to any endpoint with comprehensive logging, cost tracking, and PostgreSQL integration. Perfect for monitoring API usage, calculating costs, and maintaining audit trails for your AI applications.
-## Configuration
+## ⚙️ Configuration
| Environment Variable | Description | Default Value |
|----------------------|-------------|-----------------|
@@ -13,7 +13,7 @@ A simple OpenAI-compatible proxy server that forwards requests to any LLM endpoi
### Cost Calculation
-The cost is calculated based on the model and the usage.
+The cost is calculated based on the model and token usage with configurable pricing per model.
You'll need to add the cost configuration for your models in the `cost.ts` file. The default cost configuration in the project (with sample values from `z.ai` models) is:
@@ -25,9 +25,9 @@ export const MODEL_COSTS: Record<string, CostConfig> = {
};
```
-You can add more models to the `MODEL_COSTS` object.
+You can add more models to the `MODEL_COSTS` object to support your specific LLM providers.
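A minimal sketch of how this lookup might be used to price a request, assuming prices in `MODEL_COSTS` are per 1K tokens and cached prompt tokens are billed at the `cached` rate. The `computeCost` helper and the sample numbers are illustrative, not taken from the project:

```typescript
// Illustrative cost helper; CostConfig mirrors the shape described above,
// but the sample prices and the computeCost helper itself are assumptions.
interface CostConfig {
  input: number;  // price per 1K non-cached prompt tokens
  cached: number; // price per 1K cached prompt tokens
  output: number; // price per 1K completion tokens
}

const MODEL_COSTS: Record<string, CostConfig> = {
  "glm-4.5-air": { input: 0.2, cached: 0.03, output: 1.1 }, // placeholder values
};

interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
}

// Returns the request cost, or null when the model has no configured pricing.
function computeCost(model: string, usage: Usage): number | null {
  const cfg = MODEL_COSTS[model];
  if (!cfg) return null;
  const cached = usage.prompt_tokens_details?.cached_tokens ?? 0;
  const fresh = usage.prompt_tokens - cached;
  return (
    (fresh / 1000) * cfg.input +
    (cached / 1000) * cfg.cached +
    (usage.completion_tokens / 1000) * cfg.output
  );
}
```

Returning `null` for unknown models makes missing pricing visible in the logs instead of silently recording a zero cost.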
-## PostgreSQL Table Schema
+## 📊 PostgreSQL Table Schema
```sql
CREATE TABLE IF NOT EXISTS <DATABASE_TABLE> (
@@ -50,7 +50,37 @@ CREATE TABLE IF NOT EXISTS <DATABASE_TABLE> (
CREATE INDEX IF NOT EXISTS idx_<DATABASE_TABLE>_timestamp ON <DATABASE_TABLE> (timestamp);
```
-## Usage
+## 🚀 Quick Start
### Installation
```bash
npm install
```
### Configuration
Set your environment variables:
```bash
export PORT=3007
export UPSTREAM_URL="https://api.example.com/v1"
export DATABASE_URL="postgresql://user:password@localhost:5432/llm_logs"
export DATABASE_TABLE="llm_proxy"
```
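In code, the same settings could be read along these lines (a sketch only; the fallback values are assumptions taken from the examples above):

```typescript
// Illustrative configuration loader; the variable names match the
// environment variables above, and the defaults are assumptions.
const config = {
  port: Number(process.env.PORT ?? 3007),
  upstreamUrl: process.env.UPSTREAM_URL ?? "https://api.example.com/v1",
  databaseUrl: process.env.DATABASE_URL ?? "",
  databaseTable: process.env.DATABASE_TABLE ?? "llm_proxy",
};
```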
### Running
```bash
# Development mode with auto-reload
npm run dev
# Production build
npm run build
npm start
```
## 💻 Usage
The proxy works with any OpenAI-compatible endpoint. Just point your client to the proxy:
@@ -64,11 +94,71 @@ curl -X POST http://localhost:3007/chat/completions \
}'
```
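For clients that are not curl, the same call can be assembled programmatically. This sketch assumes a hypothetical `buildChatRequest` helper; it is not part of the project:

```typescript
// Hypothetical helper mirroring the curl example above: it targets the
// proxy's /chat/completions path and attaches a bearer token.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  proxyBase: string,
  apiKey: string,
  model: string,
  messages: ChatMessage[]
) {
  return {
    url: `${proxyBase}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}
```

Usage: build the request once, then send it with `fetch(req.url, req.init)`.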
-## Features
-- ✅ OpenAI-compatible API
-- ✅ Streaming support
-- ✅ PostgreSQL logging
-- ✅ Cost tracking
-- ✅ CORS enabled
-- ✅ Simple configuration - just set your endpoint URL
+### Example Response with Cost Tracking
+All responses are logged to PostgreSQL with detailed usage and cost information:
```json
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "glm-4.5-air",
"usage": {
"prompt_tokens": 20,
"completion_tokens": 30,
"total_tokens": 50,
"prompt_tokens_details": {
"cached_tokens": 5
}
},
"choices": [...]
}
```
The corresponding database entry will include:
- Token usage breakdown
- Calculated cost based on your model pricing
- Response time metrics
- Complete request/response bodies for audit purposes
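The token breakdown could be pulled out of a response body with a small helper like this (a sketch; `extractUsage` is not part of the project, and the field names simply follow the JSON example above):

```typescript
// Illustrative extraction of the logged usage fields from a completion
// response body; field names follow the OpenAI-style usage object above.
interface UsageBreakdown {
  promptTokens: number;
  cachedTokens: number;
  completionTokens: number;
  totalTokens: number;
}

function extractUsage(response: {
  usage?: {
    prompt_tokens?: number;
    completion_tokens?: number;
    total_tokens?: number;
    prompt_tokens_details?: { cached_tokens?: number };
  };
}): UsageBreakdown {
  const u = response.usage ?? {};
  return {
    promptTokens: u.prompt_tokens ?? 0,
    cachedTokens: u.prompt_tokens_details?.cached_tokens ?? 0,
    completionTokens: u.completion_tokens ?? 0,
    totalTokens: u.total_tokens ?? 0,
  };
}
```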
## 🔧 Advanced Features
### Custom Cost Models
Extend the `cost.ts` file to support your specific pricing models:
```typescript
export const MODEL_COSTS: Record<string, CostConfig> = {
"gpt-4": { input: 0.03, cached: 0.015, output: 0.06 },
"claude-3": { input: 0.025, cached: 0.0125, output: 0.125 },
"custom-model": { input: 0.01, cached: 0.005, output: 0.02 },
};
```
### Database Integration
The proxy automatically logs all requests to your PostgreSQL database with comprehensive metadata for analysis and reporting.
## 🛡️ Security
- Bearer token authentication required
- CORS headers configured for cross-origin requests
- No sensitive data stored in logs (authentication headers are not logged)
- Input validation and error handling
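A minimal sketch of the bearer-token check described above (illustrative only; the actual proxy may validate tokens differently):

```typescript
// Illustrative check for a "Bearer <token>" Authorization header;
// this sketch only validates the shape, not the token itself.
function isAuthorized(authHeader: string | undefined): boolean {
  if (!authHeader) return false;
  const [scheme, token] = authHeader.split(" ");
  return scheme === "Bearer" && Boolean(token);
}
```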
## 📈 Monitoring
Monitor your API usage through the PostgreSQL logs:
- Track costs across different models
- Analyze response times and performance
- Identify usage patterns and optimize costs
- Maintain compliance with audit requirements
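As one example of such analysis, per-model cost totals can be aggregated from fetched log rows (a sketch; the `model` and `cost` field names are assumptions based on the logging described above):

```typescript
// Illustrative aggregation over rows fetched from the log table;
// the row shape is an assumption, not the project's actual schema.
interface LogRow {
  model: string;
  cost: number;
}

function costByModel(rows: LogRow[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const row of rows) {
    totals[row.model] = (totals[row.model] ?? 0) + row.cost;
  }
  return totals;
}
```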
## 🤝 Contributing
Feel free to submit issues and enhancement requests!
## 📄 License
This project is open source and available under the MIT License.

package.json

@@ -1,6 +1,7 @@
{
"name": "llm-proxy-server",
"version": "1.0.0",
"description": "A lightweight, production-ready OpenAI-compatible proxy server that seamlessly forwards LLM API requests to any endpoint with comprehensive logging, cost tracking, and PostgreSQL integration. Perfect for monitoring API usage, calculating costs, and maintaining audit trails for your AI applications.",
"main": "proxy.ts",
"scripts": {
"dev": "ts-node-dev --respawn --transpile-only proxy.ts",