- Optimize the `isize` to `u64` strategy
- Now we don't ignore the errors while doing equality operations
- Fix some bugs
- Write tests for FIFOCache and Cache
- Optimize some operations
- Write FIFOCache
- The `n` parameter of the `LRUCache.least_recently_used` method has been removed
- The strictness in `__eq__` methods was reduced
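To illustrate what `least_recently_used` does after this change, here is a minimal dict-based sketch of an LRU cache. This is an illustrative analogue, not the library's actual Rust implementation; only the method name `least_recently_used` comes from the notes above, everything else is assumed.

```python
from collections import OrderedDict

class LRUCache:
    """Illustrative dict-based LRU cache (not the library's Rust implementation)."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._data = OrderedDict()  # least recently used entry first

    def __setitem__(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)  # refresh recency on overwrite
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict the oldest entry

    def __getitem__(self, key):
        self._data.move_to_end(key)  # a read also counts as a use
        return self._data[key]

    def least_recently_used(self):
        # Per the change above, no `n` parameter: it returns only
        # the single least recently used key (or None when empty).
        return next(iter(self._data), None)

cache = LRUCache(2)
cache["a"] = 1
cache["b"] = 2
_ = cache["a"]                      # "a" becomes the most recently used
print(cache.least_recently_used())  # -> b
```

Dropping the `n` parameter simplifies the common case: callers who want the single eviction candidate no longer pass a count.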
According to my benchmarks, this update makes all cache classes a bit slower, but it remains the fastest caching library in Python. The added features and resolved issues justify the trade-off ...
Optimize some operations in LRUCache
* Now the LFUCache uses VecDeque instead of Vec
* Update docstrings
* Add more strictness for loading pickle objects
* Remove subclass flag of core classes
* Set `__slots__` for cache classes
* Do some runtime optimizations
* Add some new methods to VTTLCache: `expire`, `items_with_expire`, `items`
* Optimize update methods
* Update docstrings
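The `__slots__` change above is a standard Python memory optimization: a slotted class stores attributes in fixed slots instead of a per-instance `__dict__`. A small sketch of the effect, using hypothetical class names rather than the library's actual classes:

```python
class WithDict:
    """Ordinary class: every instance carries a __dict__."""
    def __init__(self):
        self.maxsize = 128
        self.hits = 0

class WithSlots:
    """Slotted class: fixed attribute storage, no per-instance __dict__."""
    __slots__ = ("maxsize", "hits")

    def __init__(self):
        self.maxsize = 128
        self.hits = 0

a, b = WithDict(), WithSlots()
print(hasattr(a, "__dict__"), hasattr(b, "__dict__"))  # True False

# Slotted instances also reject dynamic attributes:
try:
    b.extra = 1
except AttributeError:
    print("no dynamic attributes on slotted instances")
```

For a cache holding many small objects, dropping the per-instance `__dict__` saves memory and slightly speeds up attribute access, at the cost of not allowing ad-hoc attributes.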
@awolverp Great work! I'm a big fan of the library. I use it in many personal projects and at work. Recently I had to scale one system at work up to many processes, so I had to abandon the library since it doesn't have Redis support. I imagine it isn't within the scope and main focus of this library, but do you plan to add support for a Redis backend? It would be awesome! Currently there's no cache library in Python with Redis support that handles both sync and async functions.
Thank you very much for your support ❤️. I will add support for Redis, but it will take some time; I won't be able to add it soon.
In this update I focused on new features and fixing some known issues:
- More strictness in `__eq__` operations - This might slow things down a bit but needs to be fixed
- Optimize the `isize` to `u64` strategy in Rust - This can make things faster
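The stricter `__eq__` behaviour connects to the earlier note about no longer ignoring errors during equality operations. A hypothetical Python sketch of the difference; `lenient_equal`, `strict_equal`, and `Flaky` are illustrative names, not the library's API:

```python
class Flaky:
    """A value whose equality check raises, to expose the difference."""
    def __eq__(self, other):
        raise RuntimeError("broken comparison")

def lenient_equal(a: dict, b: dict) -> bool:
    # Old-style behaviour: any error raised while comparing an item
    # is swallowed and treated as "not equal".
    if len(a) != len(b):
        return False
    for k, v in a.items():
        try:
            if k not in b or b[k] != v:
                return False
        except Exception:
            return False
    return True

def strict_equal(a: dict, b: dict) -> bool:
    # Stricter behaviour: errors propagate to the caller instead of
    # being silently ignored.
    if len(a) != len(b):
        return False
    return all(k in b and b[k] == v for k, v in a.items())

x = {"k": Flaky()}
y = {"k": Flaky()}
print(lenient_equal(x, y))  # False: the error was silently swallowed
try:
    strict_equal(x, y)
except RuntimeError as e:
    print("propagated:", e)
```

Propagating the error costs an extra code path per comparison (hence "might slow things down a bit"), but it stops a broken `__eq__` on a stored value from being misreported as simple inequality.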