Cache
Cache module for GLLM Datastore.
`Cache(data_store, eviction_manager=None, matching_strategy=MatchingStrategy.EXACT, eviction_config=None, max_locks=100)`
Bases: BaseCache
Cache interface that uses a data store for storage and retrieval.
Attributes:

| Name | Type | Description |
|---|---|---|
| `data_store` | `BaseDataStore` | The data store to use for storage. |
| `eviction_manager` | `BaseEvictionManager \| None` | The eviction manager to use for cache eviction. |
| `matching_strategy` | `MatchingStrategy` | The strategy to use for matching keys. |
| `eviction_config` | `dict[str, Any] \| None` | Configuration parameters for eviction strategies. |
| `max_locks` | `int` | Maximum number of locks to keep in memory for race condition mitigation. |
Initialize the data store cache.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data_store` | `BaseDataStore` | The data store to use for storage. Must have fulltext capability registered; vector capability is required only for semantic matching. | *required* |
| `eviction_manager` | `BaseEvictionManager \| None` | The eviction manager to use for cache eviction. If None, no eviction is performed. | `None` |
| `matching_strategy` | `MatchingStrategy` | The strategy to use for matching keys. | `MatchingStrategy.EXACT` |
| `eviction_config` | `dict[str, Any] \| None` | Configuration parameters for eviction strategies. If None, no strategy-specific configuration is applied. | `None` |
| `max_locks` | `int` | Maximum number of locks to keep in memory. When exceeded, the least recently used locks are automatically evicted. | `100` |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If `data_store` doesn't have fulltext capability. |
| `ValueError` | If semantic matching is requested without vector capability. |
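The `max_locks` behavior (evicting the least recently used per-key locks once the limit is exceeded) can be sketched with an `OrderedDict`. This is an illustration of the documented behavior only, not the library's implementation; `LockPool` is a hypothetical name:

```python
import asyncio
from collections import OrderedDict


class LockPool:
    """Keep at most `max_locks` per-key asyncio locks, evicting the LRU one."""

    def __init__(self, max_locks: int = 100):
        self.max_locks = max_locks
        self._locks: OrderedDict[str, asyncio.Lock] = OrderedDict()

    def get(self, key: str) -> asyncio.Lock:
        if key in self._locks:
            self._locks.move_to_end(key)  # mark as most recently used
        else:
            self._locks[key] = asyncio.Lock()
            if len(self._locks) > self.max_locks:
                self._locks.popitem(last=False)  # evict least recently used
        return self._locks[key]


pool = LockPool(max_locks=2)
a = pool.get("a")
b = pool.get("b")
c = pool.get("c")  # evicts "a", the least recently used lock
```

Bounding the pool this way keeps memory constant under arbitrary key cardinality, at the cost of occasionally recreating a lock for a recently evicted key.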
`cache(key_func=None, name='', matching_strategy=None, eviction_config=None)`
Decorator for caching function results.
This decorator caches the results of the decorated function using this cache storage. The cache key is generated using the provided key function or a default key generation based on the function name and arguments.
Synchronous and asynchronous functions are supported.
Example

Basic usage:

```python
def get_user_cache_key(user_id: int) -> str:
    return f"user:{user_id}"


@cache_store.cache(key_func=get_user_cache_key)
async def get_user(user_id: int) -> User:
    return await db.get_user(user_id)


# Uses/stores the cache entry with key "user:1"
user1 = await get_user(1)
```

Using an eviction config:

```python
@cache_store.cache(eviction_config={"ttl": "1h"})
async def get_user(user_id: int) -> User:
    return await db.get_user(user_id)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `key_func` | `Callable \| None` | A function to generate the cache key. If None, the function name and arguments are used to generate the key. | `None` |
| `name` | `str` | The name of the cache, used to identify it in logs or metrics. | `''` |
| `matching_strategy` | `MatchingStrategy \| None` | The strategy to use for matching keys. If None, the class-level matching strategy is used. | `None` |
| `eviction_config` | `dict[str, Any] \| None` | Configuration parameters for eviction strategies. If None, the class-level eviction config is used. | `None` |
Returns:

| Name | Type | Description |
|---|---|---|
| `Callable` | `Callable` | A decorator function. |
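The default key generation scheme ("function name and arguments") is not detailed here. A minimal sketch of one plausible approach, assuming the key combines the function's qualified name with a hash of its arguments; `make_default_key` is a hypothetical helper, not a library function, and the real format may differ:

```python
import hashlib


def make_default_key(func, args: tuple, kwargs: dict) -> str:
    """Build a cache key from a function's name and its call arguments.

    Illustrative sketch only: repr() gives a stable textual form for
    simple argument types, and the digest keeps keys a fixed length.
    """
    arg_part = repr(args) + repr(sorted(kwargs.items()))
    digest = hashlib.sha256(arg_part.encode()).hexdigest()[:16]
    return f"{func.__module__}.{func.__qualname__}:{digest}"


def get_user(user_id: int):  # example target function
    ...


key = make_default_key(get_user, (1,), {})
```

Hashing the arguments keeps keys bounded in size regardless of argument length, while the qualified-name prefix keeps entries from different functions from colliding.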
`clear()`
async
Clear all cached results based on the matching strategy.
Example

```python
await cache.clear()
```
`delete(key, filters=None)`
async
Delete the cached result based on the key and matching strategy.
Example

```python
# Using QueryFilter for multiple conditions
await cache.delete(
    "my_key",
    filters=F.and_(F.eq("metadata.category", "ML"), F.eq("metadata.subcategory", "AI")),
)

# Using FilterClause directly
await cache.delete("my_key", filters=F.eq("metadata.category", "ML"))
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `str \| list[str]` | The cache key(s) to delete. | *required* |
| `filters` | `FilterClause \| QueryFilter \| None` | Optional filters to apply to the search. FilterClause objects are automatically converted to QueryFilter internally. | `None` |
`retrieve(key, matching_strategy=None, filters=None, max_distance=2, min_similarity=0.8)`
async
Retrieve the cached result based on the key and matching strategy.
This method supports different matching strategies with strategy-specific parameters:

1. EXACT: Exact key matching.
2. FUZZY: Fuzzy matching.
3. SEMANTIC: Semantic similarity matching.
Example
```python
from gllm_datastore.core.filters import filter as F

# Exact match
result = await cache.retrieve("my_key", MatchingStrategy.EXACT)

# Direct FilterClause usage
result = await cache.retrieve(
    "my_key",
    MatchingStrategy.EXACT,
    filters=F.eq("metadata.category", "ML"),
)

# Fuzzy match with a custom max_distance
result = await cache.retrieve("my_key", MatchingStrategy.FUZZY, max_distance=3)

# Semantic match with a custom min_similarity and filters
result = await cache.retrieve(
    "my_key",
    MatchingStrategy.SEMANTIC,
    min_similarity=0.9,
    filters=F.and_(F.eq("metadata.category", "ML"), F.eq("metadata.subcategory", "AI")),
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `str` | The cache key to retrieve. | *required* |
| `matching_strategy` | `MatchingStrategy \| None` | The strategy to use for matching keys. If None, the class-level matching strategy is used. | `None` |
| `filters` | `FilterClause \| QueryFilter \| None` | Query filters to apply. FilterClause objects are automatically converted to QueryFilter internally. | `None` |
| `max_distance` | `int` | Maximum edit distance for fuzzy matching. Only used for the FUZZY strategy. | `2` |
| `min_similarity` | `float` | Minimum similarity score for semantic matching. Only used for the SEMANTIC strategy. | `0.8` |
Returns:

| Type | Description |
|---|---|
| `Any \| None` | The cached result if found, otherwise None. |
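The `max_distance` bound is an edit-distance threshold. The exact algorithm the library uses is not specified here, but a standard Levenshtein distance (the usual choice for fuzzy key matching) illustrates when a lookup key would match a stored key:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,               # deletion
                cur[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = cur
    return prev[-1]


# A stored key "my_key" would match a lookup of "my_kye" under the default
# max_distance=2, since the keys differ by two substitutions.
assert levenshtein("my_key", "my_kye") <= 2
```

Raising `max_distance` admits more typo-like variants of a key at the cost of more false-positive matches.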
`store(key, value, metadata=None, **kwargs)`
async
Store the cached result based on the key and matching strategy.
Example

```python
await cache.store(
    "my_key",
    "my_value",
    metadata={"category": "ML", "subcategory": "AI"},
    ttl="1h",
)
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `str` | The cache key to store. | *required* |
| `value` | `str` | The value to store in the cache. | *required* |
| `metadata` | `dict[str, Any] \| None` | Metadata to store with the cache entry. | `None` |
| `**kwargs` | | Additional keyword arguments to pass to the eviction strategy (e.g. `ttl`). | `{}` |
MatchingStrategy
Bases: StrEnum
Defines how keys should be matched during retrieval.
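The enum's members are not listed in this section, but the `retrieve` documentation names three strategies: EXACT, FUZZY, and SEMANTIC. A sketch consistent with that; the string values are assumptions, and `(str, Enum)` stands in for `StrEnum` (Python 3.11+) for wider compatibility:

```python
from enum import Enum


class MatchingStrategy(str, Enum):
    """How cache keys are matched during retrieval (sketch)."""

    EXACT = "exact"        # literal key equality
    FUZZY = "fuzzy"        # edit-distance matching (max_distance)
    SEMANTIC = "semantic"  # embedding similarity (min_similarity)
```

Deriving from `str` lets members compare equal to plain strings, so configuration values like `"exact"` can be used interchangeably with the enum member.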