chore: initial snapshot for gitea/github upload

This commit is contained in:
Your Name
2026-03-26 16:04:46 +08:00
commit a699a1ac98
3497 changed files with 1586237 additions and 0 deletions

@@ -0,0 +1,317 @@
# LiteLLM BitBucket Prompt Management
A powerful prompt management system for LiteLLM that fetches `.prompt` files from BitBucket repositories. This enables team-based prompt management with BitBucket's built-in access control and version control capabilities.
## Features
- **🏢 Team-based access control**: Leverage BitBucket's workspace and repository permissions
- **📁 Repository-based prompt storage**: Store prompts in BitBucket repositories
- **🔐 Multiple authentication methods**: Support for access tokens and basic auth
- **🎯 YAML frontmatter**: Define model, parameters, and schemas in file headers
- **🔧 Handlebars templating**: Use `{{variable}}` syntax with Jinja2 backend
- **✅ Input validation**: Automatic validation against defined schemas
- **🔗 LiteLLM integration**: Works seamlessly with `litellm.completion()`
- **💬 Smart message parsing**: Converts prompts to proper chat messages
- **⚙️ Parameter extraction**: Automatically applies model settings from prompts
## Quick Start
### 1. Set up BitBucket Repository
Create a repository in your BitBucket workspace and add `.prompt` files:
```
your-repo/
└── prompts/
    ├── chat_assistant.prompt
    ├── code_reviewer.prompt
    └── data_analyst.prompt
```
### 2. Create a `.prompt` file
Create a file called `prompts/chat_assistant.prompt`:
```yaml
---
model: gpt-4
temperature: 0.7
max_tokens: 150
input:
  schema:
    user_message: string
    system_context?: string
---
{% if system_context %}System: {{system_context}}
{% endif %}User: {{user_message}}
```
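Before configuring access, it helps to see how a `.prompt` file is split: everything between the `---` markers is YAML frontmatter, and the rest is the template body. The sketch below mirrors that split in plain Python; it is an illustration, not LiteLLM's actual parser.

```python
PROMPT_FILE = """---
model: gpt-4
temperature: 0.7
---
User: {{user_message}}"""


def split_prompt_file(content: str):
    """Return (frontmatter, body) for a .prompt file with optional '---' frontmatter."""
    if content.startswith("---"):
        parts = content.split("---", 2)
        if len(parts) >= 3:
            return parts[1].strip(), parts[2].strip()
    return "", content


frontmatter, body = split_prompt_file(PROMPT_FILE)
print(frontmatter)  # model: gpt-4\ntemperature: 0.7
print(body)         # User: {{user_message}}
```

The frontmatter is then parsed as YAML to obtain the model and parameters, while the body is rendered with the `{{variable}}` values you pass in.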
### 3. Configure BitBucket Access
#### Option A: Access Token (Recommended)
```python
import litellm
# Configure BitBucket access
bitbucket_config = {
    "workspace": "your-workspace",
    "repository": "your-repo",
    "access_token": "your-access-token",
    "branch": "main",  # optional, defaults to "main"
}
# Set global BitBucket configuration
litellm.set_global_bitbucket_config(bitbucket_config)
```
#### Option B: Basic Authentication
```python
import litellm
# Configure BitBucket access with basic auth
bitbucket_config = {
    "workspace": "your-workspace",
    "repository": "your-repo",
    "username": "your-username",
    "access_token": "your-app-password",  # use an app password for basic auth
    "auth_method": "basic",
    "branch": "main",
}
litellm.set_global_bitbucket_config(bitbucket_config)
```
### 4. Use with LiteLLM
```python
# The 'bitbucket/' model prefix tells LiteLLM to use BitBucket prompt management
response = litellm.completion(
    model="bitbucket/gpt-4",  # the actual model comes from the .prompt file
    prompt_id="prompts/chat_assistant",  # path to the prompt file (without extension)
    prompt_variables={
        "user_message": "What is machine learning?",
        "system_context": "You are a helpful AI tutor.",
    },
    # Any additional messages are appended after the prompt
    messages=[{"role": "user", "content": "Please explain it simply."}],
)

print(response.choices[0].message.content)
```
## Proxy Server Configuration
### 1. Create a `.prompt` file
Create `prompts/hello.prompt`:
```yaml
---
model: gpt-4
temperature: 0.7
---
System: You are a helpful assistant.
User: {{user_message}}
```
### 2. Setup config.yaml
```yaml
model_list:
  - model_name: my-bitbucket-model
    litellm_params:
      model: bitbucket/gpt-4
      prompt_id: "prompts/hello"
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  global_bitbucket_config:
    workspace: "your-workspace"
    repository: "your-repo"
    access_token: "your-access-token"
    branch: "main"
```
### 3. Start the proxy
```bash
litellm --config config.yaml --detailed_debug
```
### 4. Test it!
```bash
curl -L -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "my-bitbucket-model",
    "messages": [{"role": "user", "content": "IGNORED"}],
    "prompt_variables": {
        "user_message": "What is the capital of France?"
    }
}'
```
## Prompt File Format
### Basic Structure
```yaml
---
# Model configuration
model: gpt-4
temperature: 0.7
max_tokens: 500

# Input schema (optional)
input:
  schema:
    role: string
    user_message: string
    system_context?: string
---
System: You are a helpful {{role}} assistant.
User: {{user_message}}
```
### Advanced Features
**Multi-role conversations:**
```yaml
---
model: gpt-4
temperature: 0.3
---
System: You are a helpful coding assistant.
User: Show me how to reverse a list in Python.
Assistant: You can use `my_list[::-1]` or `reversed(my_list)`.
User: {{user_question}}
```
**Dynamic model selection:**
```yaml
---
model: "{{preferred_model}}" # Model can be a variable
temperature: 0.7
---
System: You are a helpful assistant specialized in {{domain}}.
User: {{user_message}}
```
## Team-Based Access Control
BitBucket's built-in permission system provides team-based access control:
1. **Workspace-level permissions**: Control access to entire workspaces
2. **Repository-level permissions**: Control access to specific repositories
3. **Branch-level permissions**: Control access to specific branches
4. **User and group management**: Manage team members and their access levels
### Setting up Team Access
1. **Create workspaces for each team**:
```
team-a-prompts/
team-b-prompts/
team-c-prompts/
```
2. **Configure repository permissions**:
- Grant read access to team members
- Grant write access to prompt maintainers
- Use branch protection rules for production prompts
3. **Use different access tokens**:
- Each team can have their own access token
- Tokens can be scoped to specific repositories
- Use app passwords for additional security
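Putting those steps together, a per-team setup can be expressed as a small lookup from team name to BitBucket configuration. The sketch below is illustrative: the workspace names, environment variable names, and the `config_for_team` helper are placeholders, not part of the integration itself.

```python
import os

# Hypothetical per-team configurations: each team gets its own workspace,
# repository, and a token scoped to that repository.
TEAM_CONFIGS = {
    "team-a": {
        "workspace": "team-a-prompts",
        "repository": "prompts",
        "access_token": os.environ.get("TEAM_A_BITBUCKET_TOKEN", ""),
        "branch": "main",
    },
    "team-b": {
        "workspace": "team-b-prompts",
        "repository": "prompts",
        "access_token": os.environ.get("TEAM_B_BITBUCKET_TOKEN", ""),
        "branch": "production",  # teams can pin a protected branch
    },
}


def config_for_team(team: str) -> dict:
    """Look up the BitBucket config for a team, failing loudly if unknown."""
    try:
        return TEAM_CONFIGS[team]
    except KeyError:
        raise ValueError(f"No BitBucket prompt config for team '{team}'")


print(config_for_team("team-b")["branch"])  # production
```

Each config can then be passed to `litellm.set_global_bitbucket_config()` (or per-call via `bitbucket_config`) so a team only ever reads prompts it has repository access to.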
## API Reference
### BitBucket Configuration
```python
bitbucket_config = {
    "workspace": str,     # Required: BitBucket workspace name
    "repository": str,    # Required: Repository name
    "access_token": str,  # Required: BitBucket access token or app password
    "branch": str,        # Optional: Branch to fetch from (default: "main")
    "base_url": str,      # Optional: Custom BitBucket API URL, for cases where it is not https://api.bitbucket.org/2.0
    "auth_method": str,   # Optional: "token" or "basic" (default: "token")
    "username": str,      # Optional: Username for basic auth (required when auth_method="basic")
}
```
### LiteLLM Integration
```python
response = litellm.completion(
    model="bitbucket/<base_model>",  # required (e.g., bitbucket/gpt-4)
    prompt_id=str,                   # required - path to the .prompt file, without the extension
    prompt_variables=dict,           # optional - variables for template rendering
    bitbucket_config=dict,           # optional - BitBucket configuration (if not set globally)
    messages=list,                   # optional - additional messages
)
```
## Error Handling
The BitBucket integration provides detailed error messages for common issues:
- **Authentication errors**: Invalid access tokens or credentials
- **Permission errors**: Insufficient access to workspace/repository
- **File not found**: Missing .prompt files
- **Network errors**: Connection issues with BitBucket API
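The mapping from HTTP status code to these error categories can be sketched as a small helper. This is a simplified stand-in for the client's actual handling, and the function name is illustrative:

```python
def describe_bitbucket_error(status_code: int, file_path: str) -> str:
    """Map a BitBucket API status code to the error categories listed above."""
    if status_code == 401:
        return "Authentication failed: invalid access token or credentials"
    if status_code == 403:
        return f"Permission error: insufficient access to fetch '{file_path}'"
    if status_code == 404:
        return f"File not found: '{file_path}' is missing from the repository"
    return f"Network/API error ({status_code}) while fetching '{file_path}'"


print(describe_bitbucket_error(404, "prompts/missing.prompt"))
```

In the real client, a 404 on a file fetch is treated as "not found" (returning `None`) rather than raised, while 401/403 raise with messages pointing at credentials and permissions.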
## Security Considerations
1. **Access Token Security**: Store access tokens securely using environment variables or secret management systems
2. **Repository Permissions**: Use BitBucket's permission system to control access
3. **Branch Protection**: Protect main branches from unauthorized changes
4. **Audit Logging**: BitBucket provides audit logs for all repository access
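For point 1, a common pattern is to build the configuration entirely from environment variables so the token never appears in source control. The variable names below are hypothetical; use whatever fits your deployment:

```python
import os

# Hypothetical environment variable names for this sketch.
os.environ.setdefault("BITBUCKET_WORKSPACE", "your-workspace")
os.environ.setdefault("BITBUCKET_REPOSITORY", "your-repo")
os.environ.setdefault("BITBUCKET_ACCESS_TOKEN", "secret-token")

bitbucket_config = {
    "workspace": os.environ["BITBUCKET_WORKSPACE"],
    "repository": os.environ["BITBUCKET_REPOSITORY"],
    "access_token": os.environ["BITBUCKET_ACCESS_TOKEN"],
    "branch": os.environ.get("BITBUCKET_BRANCH", "main"),
}

# The token lives only in the environment (or a secret manager), never in config files.
assert bitbucket_config["access_token"] == os.environ["BITBUCKET_ACCESS_TOKEN"]
```

The same dictionary can then be passed to `litellm.set_global_bitbucket_config()` or placed under `litellm_settings` in the proxy config.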
## Troubleshooting
### Common Issues
1. **"Access denied" errors**: Check your BitBucket permissions for the workspace and repository
2. **"Authentication failed" errors**: Verify your access token or credentials
3. **"File not found" errors**: Ensure the .prompt file exists in the specified branch
4. **Template rendering errors**: Check your Handlebars syntax in the .prompt file
### Debug Mode
Enable debug logging to troubleshoot issues:
```python
import litellm

litellm.set_verbose = True

# Your BitBucket prompt calls will now show detailed logs
response = litellm.completion(
    model="bitbucket/gpt-4",
    prompt_id="your_prompt",
    prompt_variables={"key": "value"},
)
```
## Migration from File-Based Prompts
If you're currently using file-based prompts with the dotprompt integration, you can easily migrate to BitBucket:
1. **Upload your .prompt files** to a BitBucket repository
2. **Update your configuration** to use BitBucket instead of local files
3. **Set up team access** using BitBucket's permission system
4. **Update your code** to use `bitbucket/` model prefix instead of `dotprompt/`
This provides better collaboration, version control, and team-based access control for your prompts.
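Step 4 is mostly a string change on the model name. A hypothetical migration helper (the name and the `before`/`after` dictionaries are illustrative, not part of LiteLLM) might look like:

```python
# Hypothetical before/after of a migration from dotprompt to bitbucket.
before = {
    "model": "dotprompt/gpt-4",
    "prompt_id": "chat_assistant",
}

after = {
    "model": "bitbucket/gpt-4",             # prefix switches the integration
    "prompt_id": "prompts/chat_assistant",  # now a path inside the repository
}


def migrate_model_name(model: str) -> str:
    """Rewrite a dotprompt model prefix to the bitbucket prefix."""
    prefix, _, base = model.partition("/")
    if prefix == "dotprompt":
        return f"bitbucket/{base}"
    return model


print(migrate_model_name(before["model"]))  # bitbucket/gpt-4
```

Remember to also update `prompt_id` values if your files move into a subdirectory of the repository, as in the `after` example.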

@@ -0,0 +1,66 @@
from typing import TYPE_CHECKING, Optional

if TYPE_CHECKING:
    from litellm.types.prompts.init_prompts import PromptLiteLLMParams, PromptSpec

from litellm.integrations.custom_prompt_management import CustomPromptManagement
from litellm.types.prompts.init_prompts import SupportedPromptIntegrations

from .bitbucket_prompt_manager import BitBucketPromptManager

# Global instances
global_bitbucket_config: Optional[dict] = None


def set_global_bitbucket_config(config: dict) -> None:
    """
    Set the global BitBucket configuration for prompt management.

    Args:
        config: Dictionary containing BitBucket configuration
            - workspace: BitBucket workspace name
            - repository: Repository name
            - access_token: BitBucket access token
            - branch: Branch to fetch prompts from (default: main)
    """
    import litellm

    litellm.global_bitbucket_config = config  # type: ignore


def prompt_initializer(
    litellm_params: "PromptLiteLLMParams", prompt_spec: "PromptSpec"
) -> "CustomPromptManagement":
    """
    Initialize a prompt from a BitBucket repository.
    """
    bitbucket_config = getattr(litellm_params, "bitbucket_config", None)
    prompt_id = getattr(litellm_params, "prompt_id", None)

    if not bitbucket_config:
        raise ValueError(
            "bitbucket_config is required for BitBucket prompt integration"
        )

    return BitBucketPromptManager(
        bitbucket_config=bitbucket_config,
        prompt_id=prompt_id,
    )


prompt_initializer_registry = {
    SupportedPromptIntegrations.BITBUCKET.value: prompt_initializer,
}

# Export public API
__all__ = [
    "BitBucketPromptManager",
    "set_global_bitbucket_config",
    "global_bitbucket_config",
]

@@ -0,0 +1,241 @@
"""
BitBucket API client for fetching .prompt files from BitBucket repositories.
"""
import base64
from typing import Any, Dict, List, Optional
from litellm.llms.custom_httpx.http_handler import HTTPHandler
class BitBucketClient:
"""
Client for interacting with BitBucket API to fetch .prompt files.
Supports:
- Authentication with access tokens
- Fetching file contents from repositories
- Team-based access control through BitBucket permissions
- Branch-specific file fetching
"""
def __init__(self, config: Dict[str, Any]):
"""
Initialize the BitBucket client.
Args:
config: Dictionary containing:
- workspace: BitBucket workspace name
- repository: Repository name
- access_token: BitBucket access token (or app password)
- branch: Branch to fetch from (default: main)
- base_url: Custom BitBucket API base URL (optional)
- auth_method: Authentication method ('token' or 'basic', default: 'token')
- username: Username for basic auth (optional)
"""
self.workspace = config.get("workspace")
self.repository = config.get("repository")
self.access_token = config.get("access_token")
self.branch = config.get("branch", "main")
self.base_url = config.get("", "https://api.bitbucket.org/2.0")
self.auth_method = config.get("auth_method", "token")
self.username = config.get("username")
if not all([self.workspace, self.repository, self.access_token]):
raise ValueError("workspace, repository, and access_token are required")
# Set up authentication headers
self.headers = {
"Accept": "application/json",
"Content-Type": "application/json",
}
if self.auth_method == "basic" and self.username:
# Use basic auth with username and app password
credentials = f"{self.username}:{self.access_token}"
encoded_credentials = base64.b64encode(credentials.encode()).decode()
self.headers["Authorization"] = f"Basic {encoded_credentials}"
else:
# Use token-based authentication (default)
self.headers["Authorization"] = f"Bearer {self.access_token}"
# Initialize HTTPHandler
self.http_handler = HTTPHandler()
def get_file_content(self, file_path: str) -> Optional[str]:
"""
Fetch the content of a file from the BitBucket repository.
Args:
file_path: Path to the file in the repository
Returns:
File content as string, or None if file not found
"""
url = f"{self.base_url}/repositories/{self.workspace}/{self.repository}/src/{self.branch}/{file_path}"
try:
response = self.http_handler.get(url, headers=self.headers)
response.raise_for_status()
# BitBucket returns file content as base64 encoded
if response.headers.get("content-type", "").startswith("text/"):
return response.text
else:
# For binary files or when content-type is not text, try to decode as base64
try:
return base64.b64decode(response.content).decode("utf-8")
except Exception:
return response.text
except Exception as e:
# Check if it's an HTTP error
if hasattr(e, "response") and hasattr(e.response, "status_code"):
if e.response.status_code == 404:
return None
elif e.response.status_code == 403:
raise Exception(
f"Access denied to file '{file_path}'. Check your BitBucket permissions for workspace '{self.workspace}' and repository '{self.repository}'."
)
elif e.response.status_code == 401:
raise Exception(
"Authentication failed. Check your BitBucket access token and permissions."
)
else:
raise Exception(f"Failed to fetch file '{file_path}': {e}")
else:
raise Exception(f"Error fetching file '{file_path}': {e}")
def list_files(
self, directory_path: str = "", file_extension: str = ".prompt"
) -> List[str]:
"""
List files in a directory with a specific extension.
Args:
directory_path: Directory path in the repository (empty for root)
file_extension: File extension to filter by (default: .prompt)
Returns:
List of file paths
"""
url = f"{self.base_url}/repositories/{self.workspace}/{self.repository}/src/{self.branch}/{directory_path}"
try:
response = self.http_handler.get(url, headers=self.headers)
response.raise_for_status()
data = response.json()
files = []
for item in data.get("values", []):
if item.get("type") == "commit_file":
file_path = item.get("path", "")
if file_path.endswith(file_extension):
files.append(file_path)
return files
except Exception as e:
# Check if it's an HTTP error
if hasattr(e, "response") and hasattr(e.response, "status_code"):
if e.response.status_code == 404:
return []
elif e.response.status_code == 403:
raise Exception(
f"Access denied to directory '{directory_path}'. Check your BitBucket permissions for workspace '{self.workspace}' and repository '{self.repository}'."
)
elif e.response.status_code == 401:
raise Exception(
"Authentication failed. Check your BitBucket access token and permissions."
)
else:
raise Exception(f"Failed to list files in '{directory_path}': {e}")
else:
raise Exception(f"Error listing files in '{directory_path}': {e}")
def get_repository_info(self) -> Dict[str, Any]:
"""
Get information about the repository.
Returns:
Dictionary containing repository information
"""
url = f"{self.base_url}/repositories/{self.workspace}/{self.repository}"
try:
response = self.http_handler.get(url, headers=self.headers)
response.raise_for_status()
return response.json()
except Exception as e:
raise Exception(f"Failed to get repository info: {e}")
def test_connection(self) -> bool:
"""
Test the connection to the BitBucket repository.
Returns:
True if connection is successful, False otherwise
"""
try:
self.get_repository_info()
return True
except Exception:
return False
def get_branches(self) -> List[Dict[str, Any]]:
"""
Get list of branches in the repository.
Returns:
List of branch information dictionaries
"""
url = f"{self.base_url}/repositories/{self.workspace}/{self.repository}/refs/branches"
try:
response = self.http_handler.get(url, headers=self.headers)
response.raise_for_status()
data = response.json()
return data.get("values", [])
except Exception as e:
raise Exception(f"Failed to get branches: {e}")
def get_file_metadata(self, file_path: str) -> Optional[Dict[str, Any]]:
"""
Get metadata about a file (size, last modified, etc.).
Args:
file_path: Path to the file in the repository
Returns:
Dictionary containing file metadata, or None if file not found
"""
url = f"{self.base_url}/repositories/{self.workspace}/{self.repository}/src/{self.branch}/{file_path}"
try:
# Use GET with Range header to get just the headers (HEAD equivalent)
headers = self.headers.copy()
headers["Range"] = "bytes=0-0" # Request only first byte to get headers
response = self.http_handler.get(url, headers=headers)
response.raise_for_status()
return {
"content_type": response.headers.get("content-type"),
"content_length": response.headers.get("content-length"),
"last_modified": response.headers.get("last-modified"),
}
except Exception as e:
# Check if it's an HTTP error
if hasattr(e, "response") and hasattr(e.response, "status_code"):
if e.response.status_code == 404:
return None
raise Exception(f"Failed to get file metadata for '{file_path}': {e}")
else:
raise Exception(f"Error getting file metadata for '{file_path}': {e}")
def close(self):
"""Close the HTTP handler to free resources."""
if hasattr(self, "http_handler"):
self.http_handler.close()

@@ -0,0 +1,584 @@
"""
BitBucket prompt manager that integrates with LiteLLM's prompt management system.
Fetches .prompt files from BitBucket repositories and provides team-based access control.
"""
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from jinja2 import DictLoader, Environment, select_autoescape
from litellm.integrations.custom_prompt_management import CustomPromptManagement
if TYPE_CHECKING:
from litellm.litellm_core_utils.litellm_logging import Logging as LiteLLMLoggingObj
else:
LiteLLMLoggingObj = Any
from litellm.integrations.prompt_management_base import (
PromptManagementBase,
PromptManagementClient,
)
from litellm.types.llms.openai import AllMessageValues
from litellm.types.prompts.init_prompts import PromptSpec
from litellm.types.utils import StandardCallbackDynamicParams
from .bitbucket_client import BitBucketClient
class BitBucketPromptTemplate:
"""
Represents a prompt template loaded from BitBucket.
"""
def __init__(
self,
template_id: str,
content: str,
metadata: Dict[str, Any],
model: Optional[str] = None,
):
self.template_id = template_id
self.content = content
self.metadata = metadata
self.model = model or metadata.get("model")
self.temperature = metadata.get("temperature")
self.max_tokens = metadata.get("max_tokens")
self.input_schema = metadata.get("input", {}).get("schema", {})
self.optional_params = {
k: v for k, v in metadata.items() if k not in ["model", "input", "content"]
}
def __repr__(self):
return f"BitBucketPromptTemplate(id='{self.template_id}', model='{self.model}')"
class BitBucketTemplateManager:
    """
    Manager for loading and rendering .prompt files from BitBucket repositories.

    Supports:
    - Fetching .prompt files from BitBucket repositories
    - Team-based access control through BitBucket permissions
    - YAML frontmatter for metadata
    - Handlebars-style templating (using Jinja2)
    - Input/output schema validation
    - Model configuration
    """

    def __init__(
        self,
        bitbucket_config: Dict[str, Any],
        prompt_id: Optional[str] = None,
    ):
        self.bitbucket_config = bitbucket_config
        self.prompt_id = prompt_id
        self.prompts: Dict[str, BitBucketPromptTemplate] = {}
        self.bitbucket_client = BitBucketClient(bitbucket_config)
        self.jinja_env = Environment(
            loader=DictLoader({}),
            autoescape=select_autoescape(["html", "xml"]),
            # Use Handlebars-style delimiters to match the Dotprompt spec
            variable_start_string="{{",
            variable_end_string="}}",
            block_start_string="{%",
            block_end_string="%}",
            comment_start_string="{#",
            comment_end_string="#}",
        )

        # Load prompts from BitBucket if prompt_id is provided
        if self.prompt_id:
            self._load_prompt_from_bitbucket(self.prompt_id)

    def _load_prompt_from_bitbucket(self, prompt_id: str) -> None:
        """Load a specific .prompt file from BitBucket."""
        try:
            # Fetch the .prompt file from BitBucket
            prompt_content = self.bitbucket_client.get_file_content(
                f"{prompt_id}.prompt"
            )
            if prompt_content:
                template = self._parse_prompt_file(prompt_content, prompt_id)
                self.prompts[prompt_id] = template
        except Exception as e:
            raise Exception(f"Failed to load prompt '{prompt_id}' from BitBucket: {e}")

    def _parse_prompt_file(
        self, content: str, prompt_id: str
    ) -> BitBucketPromptTemplate:
        """Parse a .prompt file's content and extract its metadata and template body."""
        # Split frontmatter and content
        if content.startswith("---"):
            parts = content.split("---", 2)
            if len(parts) >= 3:
                frontmatter_str = parts[1].strip()
                template_content = parts[2].strip()
            else:
                frontmatter_str = ""
                template_content = content
        else:
            frontmatter_str = ""
            template_content = content

        # Parse YAML frontmatter
        metadata: Dict[str, Any] = {}
        if frontmatter_str:
            try:
                import yaml

                metadata = yaml.safe_load(frontmatter_str) or {}
            except ImportError:
                # Fall back to basic parsing if PyYAML is not available
                metadata = self._parse_yaml_basic(frontmatter_str)
            except Exception:
                metadata = {}

        return BitBucketPromptTemplate(
            template_id=prompt_id,
            content=template_content,
            metadata=metadata,
        )

    def _parse_yaml_basic(self, yaml_str: str) -> Dict[str, Any]:
        """Basic YAML parser for simple cases when PyYAML is not available."""
        result: Dict[str, Any] = {}
        for line in yaml_str.split("\n"):
            line = line.strip()
            if ":" in line and not line.startswith("#"):
                key, value = line.split(":", 1)
                key = key.strip()
                value = value.strip()
                # Coerce the value to bool/int/float where possible
                if value.lower() in ["true", "false"]:
                    result[key] = value.lower() == "true"
                elif value.isdigit():
                    result[key] = int(value)
                else:
                    try:
                        result[key] = float(value)
                    except ValueError:
                        result[key] = value.strip("\"'")
        return result

    def render_template(
        self, template_id: str, variables: Optional[Dict[str, Any]] = None
    ) -> str:
        """Render a template with the given variables."""
        if template_id not in self.prompts:
            raise ValueError(f"Template '{template_id}' not found")

        template = self.prompts[template_id]
        jinja_template = self.jinja_env.from_string(template.content)
        return jinja_template.render(**(variables or {}))

    def get_template(self, template_id: str) -> Optional[BitBucketPromptTemplate]:
        """Get a template by ID."""
        return self.prompts.get(template_id)

    def list_templates(self) -> List[str]:
        """List all available template IDs."""
        return list(self.prompts.keys())
class BitBucketPromptManager(CustomPromptManagement):
    """
    BitBucket prompt manager that integrates with LiteLLM's prompt management system.

    This class enables using .prompt files from BitBucket repositories with the
    litellm completion() function by implementing the PromptManagementBase interface.

    Usage:
        # Configure BitBucket access
        bitbucket_config = {
            "workspace": "your-workspace",
            "repository": "your-repo",
            "access_token": "your-token",
            "branch": "main",  # optional, defaults to main
        }

        # Use with completion
        response = litellm.completion(
            model="bitbucket/gpt-4",
            prompt_id="my_prompt",
            prompt_variables={"variable": "value"},
            bitbucket_config=bitbucket_config,
            messages=[{"role": "user", "content": "This will be combined with the prompt"}],
        )
    """

    def __init__(
        self,
        bitbucket_config: Dict[str, Any],
        prompt_id: Optional[str] = None,
    ):
        self.bitbucket_config = bitbucket_config
        self.prompt_id = prompt_id
        self._prompt_manager: Optional[BitBucketTemplateManager] = None

    @property
    def integration_name(self) -> str:
        """Integration name used in model names like 'bitbucket/gpt-4'."""
        return "bitbucket"

    @property
    def prompt_manager(self) -> BitBucketTemplateManager:
        """Get or create the prompt manager instance."""
        if self._prompt_manager is None:
            self._prompt_manager = BitBucketTemplateManager(
                bitbucket_config=self.bitbucket_config,
                prompt_id=self.prompt_id,
            )
        return self._prompt_manager

    def get_prompt_template(
        self,
        prompt_id: str,
        prompt_variables: Optional[Dict[str, Any]] = None,
    ) -> Tuple[str, Dict[str, Any]]:
        """
        Get a prompt template and render it with variables.

        Args:
            prompt_id: The ID of the prompt template
            prompt_variables: Variables to substitute in the template

        Returns:
            Tuple of (rendered_prompt, metadata)
        """
        template = self.prompt_manager.get_template(prompt_id)
        if not template:
            raise ValueError(f"Prompt template '{prompt_id}' not found")

        # Render the template
        rendered_prompt = self.prompt_manager.render_template(
            prompt_id, prompt_variables or {}
        )

        # Extract metadata
        metadata = {
            "model": template.model,
            "temperature": template.temperature,
            "max_tokens": template.max_tokens,
            **template.optional_params,
        }
        return rendered_prompt, metadata

    def pre_call_hook(
        self,
        user_id: Optional[str],
        messages: List[AllMessageValues],
        function_call: Optional[Union[Dict[str, Any], str]] = None,
        litellm_params: Optional[Dict[str, Any]] = None,
        prompt_id: Optional[str] = None,
        prompt_variables: Optional[Dict[str, Any]] = None,
        **kwargs,
    ) -> Tuple[List[AllMessageValues], Optional[Dict[str, Any]]]:
        """
        Pre-call hook that processes the prompt template before making the LLM call.
        """
        if not prompt_id:
            return messages, litellm_params

        try:
            # Get the rendered prompt and metadata
            rendered_prompt, prompt_metadata = self.get_prompt_template(
                prompt_id, prompt_variables
            )

            # Parse the rendered prompt into messages
            parsed_messages = self._parse_prompt_to_messages(rendered_prompt)

            # Merge with existing messages
            if parsed_messages:
                # If we have parsed messages, use them instead of the original messages
                final_messages: List[AllMessageValues] = parsed_messages
            else:
                # If no messages were parsed, prepend the prompt to existing messages
                final_messages = [
                    {"role": "user", "content": rendered_prompt}  # type: ignore
                ] + messages

            # Update litellm_params with prompt metadata
            if litellm_params is None:
                litellm_params = {}

            # Apply model and parameters from prompt metadata
            if prompt_metadata.get("model"):
                litellm_params["model"] = prompt_metadata["model"]
            for param in [
                "temperature",
                "max_tokens",
                "top_p",
                "frequency_penalty",
                "presence_penalty",
            ]:
                if param in prompt_metadata:
                    litellm_params[param] = prompt_metadata[param]

            return final_messages, litellm_params
        except Exception as e:
            # Log the error but don't fail the call
            import litellm

            litellm._logging.verbose_proxy_logger.error(
                f"Error in BitBucket prompt pre_call_hook: {e}"
            )
            return messages, litellm_params

    def _parse_prompt_to_messages(self, prompt_content: str) -> List[AllMessageValues]:
        """
        Parse prompt content into a list of messages.
        Handles both simple prompts and multi-role conversations.
        """
        role_prefixes = {
            "system:": "system",
            "user:": "user",
            "assistant:": "assistant",
        }
        messages: List[Dict[str, str]] = []
        current_role: Optional[str] = None
        current_content: List[str] = []

        def flush() -> None:
            """Append the in-progress message, if any."""
            if current_role and current_content:
                messages.append(
                    {
                        "role": current_role,
                        "content": "\n".join(current_content).strip(),
                    }
                )

        for line in prompt_content.strip().split("\n"):
            line = line.strip()
            if not line:
                continue
            # Check for role indicators (e.g. "System:", "User:", "Assistant:")
            for prefix, role in role_prefixes.items():
                if line.lower().startswith(prefix):
                    flush()
                    current_role = role
                    current_content = [line[len(prefix):].strip()]
                    break
            else:
                # Continue building the current message
                current_content.append(line)

        # Add the last message
        flush()

        # If no role indicators were found, treat the prompt as a single user message
        if not messages and prompt_content.strip():
            messages = [{"role": "user", "content": prompt_content.strip()}]
        return messages  # type: ignore

    def post_call_hook(
        self,
        user_id: Optional[str],
        response: Any,
        input_messages: List[AllMessageValues],
        function_call: Optional[Union[Dict[str, Any], str]] = None,
        litellm_params: Optional[Dict[str, Any]] = None,
        prompt_id: Optional[str] = None,
        prompt_variables: Optional[Dict[str, Any]] = None,
        **kwargs,
    ) -> Any:
        """
        Post-call hook for any post-processing after the LLM call.
        """
        return response

    def get_available_prompts(self) -> List[str]:
        """Get the list of available prompt IDs."""
        return self.prompt_manager.list_templates()

    def reload_prompts(self) -> None:
        """Reload prompts from BitBucket."""
        if self.prompt_id:
            self._prompt_manager = None  # Reset to force reload
            self.prompt_manager  # Accessing the property triggers the reload

    def should_run_prompt_management(
        self,
        prompt_id: Optional[str],
        prompt_spec: Optional[PromptSpec],
        dynamic_callback_params: StandardCallbackDynamicParams,
    ) -> bool:
        """
        Determine whether prompt management should run based on the prompt_id.

        For BitBucket, we run whenever a prompt_id is present and handle the
        prompt loading in the _compile_prompt_helper method.
        """
        return prompt_id is not None

    def _compile_prompt_helper(
        self,
        prompt_id: Optional[str],
        prompt_spec: Optional[PromptSpec],
        prompt_variables: Optional[dict],
        dynamic_callback_params: StandardCallbackDynamicParams,
        prompt_label: Optional[str] = None,
        prompt_version: Optional[int] = None,
    ) -> PromptManagementClient:
        """
        Compile a BitBucket prompt template into a PromptManagementClient structure.

        This method:
        1. Loads the prompt template from BitBucket
        2. Renders it with the provided variables
        3. Converts the rendered text into chat messages
        4. Extracts model and optional parameters from metadata
        """
        if prompt_id is None:
            raise ValueError("prompt_id is required for BitBucket prompt manager")

        try:
            # Load the prompt from BitBucket if not already loaded
            if prompt_id not in self.prompt_manager.prompts:
                self.prompt_manager._load_prompt_from_bitbucket(prompt_id)

            # Get the rendered prompt and metadata
            rendered_prompt, prompt_metadata = self.get_prompt_template(
                prompt_id, prompt_variables
            )

            # Convert rendered content to chat messages
            messages = self._parse_prompt_to_messages(rendered_prompt)

            # Extract model from metadata (if specified)
            template_model = prompt_metadata.get("model")

            # Extract optional parameters from metadata
            optional_params = {}
            for param in [
                "temperature",
                "max_tokens",
                "top_p",
                "frequency_penalty",
                "presence_penalty",
            ]:
                if param in prompt_metadata:
                    optional_params[param] = prompt_metadata[param]

            return PromptManagementClient(
                prompt_id=prompt_id,
                prompt_template=messages,
                prompt_template_model=template_model,
                prompt_template_optional_params=optional_params,
                completed_messages=None,
            )
        except Exception as e:
            raise ValueError(f"Error compiling prompt '{prompt_id}': {e}")

    async def async_compile_prompt_helper(
        self,
        prompt_id: Optional[str],
        prompt_variables: Optional[dict],
        dynamic_callback_params: StandardCallbackDynamicParams,
        prompt_spec: Optional[PromptSpec] = None,
        prompt_label: Optional[str] = None,
        prompt_version: Optional[int] = None,
    ) -> PromptManagementClient:
        """
        Async version of the compile prompt helper. Since BitBucket operations use
        a sync client, this simply delegates to the sync version.
        """
        if prompt_id is None:
            raise ValueError("prompt_id is required for BitBucket prompt manager")

        return self._compile_prompt_helper(
            prompt_id=prompt_id,
            prompt_spec=prompt_spec,
            prompt_variables=prompt_variables,
            dynamic_callback_params=dynamic_callback_params,
            prompt_label=prompt_label,
            prompt_version=prompt_version,
        )

    def get_chat_completion_prompt(
        self,
        model: str,
        messages: List[AllMessageValues],
        non_default_params: dict,
        prompt_id: Optional[str],
        prompt_variables: Optional[dict],
        dynamic_callback_params: StandardCallbackDynamicParams,
        prompt_spec: Optional[PromptSpec] = None,
        prompt_label: Optional[str] = None,
        prompt_version: Optional[int] = None,
        ignore_prompt_manager_model: Optional[bool] = False,
        ignore_prompt_manager_optional_params: Optional[bool] = False,
    ) -> Tuple[str, List[AllMessageValues], dict]:
        """
        Get the chat completion prompt from BitBucket and return the processed
        model, messages, and parameters.
        """
        return PromptManagementBase.get_chat_completion_prompt(
            self,
            model,
            messages,
            non_default_params,
            prompt_id,
            prompt_variables,
            dynamic_callback_params,
            prompt_spec=prompt_spec,
            prompt_label=prompt_label,
            prompt_version=prompt_version,
        )

    async def async_get_chat_completion_prompt(
        self,
        model: str,
        messages: List[AllMessageValues],
        non_default_params: dict,
        prompt_id: Optional[str],
        prompt_variables: Optional[dict],
        dynamic_callback_params: StandardCallbackDynamicParams,
        litellm_logging_obj: LiteLLMLoggingObj,
        prompt_spec: Optional[PromptSpec] = None,
        tools: Optional[List[Dict]] = None,
        prompt_label: Optional[str] = None,
        prompt_version: Optional[int] = None,
        ignore_prompt_manager_model: Optional[bool] = False,
        ignore_prompt_manager_optional_params: Optional[bool] = False,
    ) -> Tuple[str, List[AllMessageValues], dict]:
        """
        Async version - delegates to the PromptManagementBase async implementation.
        """
        return await PromptManagementBase.async_get_chat_completion_prompt(
            self,
            model,
            messages,
            non_default_params,
            prompt_id=prompt_id,
            prompt_variables=prompt_variables,
            litellm_logging_obj=litellm_logging_obj,
            dynamic_callback_params=dynamic_callback_params,
            prompt_spec=prompt_spec,
            tools=tools,
            prompt_label=prompt_label,
            prompt_version=prompt_version,
            ignore_prompt_manager_model=ignore_prompt_manager_model,
            ignore_prompt_manager_optional_params=ignore_prompt_manager_optional_params,
        )