Latte cache implementation (Redis, Memcached or APC)
- Nassim
- Member | 4
Hi,
I'm trying to find a way to replace Latte's cache system (on the filesystem) with another implementation (such as Redis, Memcached or APC).
Here is my interface
interface CacheServiceInterface
{
    /**
     * Clear all cached values
     *
     * @return void
     */
    public function clearAll();

    /**
     * Get a value
     *
     * @param string|array|int $key
     * @return mixed
     */
    public function get($key);

    /**
     * Test if a key exists
     *
     * @param string|array|int $key
     * @return bool
     */
    public function hasKey($key): bool;

    /**
     * Remove a key
     *
     * @param string|array|int $key
     */
    public function remove($key);

    /**
     * Set a value
     *
     * @param string|array|int $key
     * @param mixed $value
     */
    public function set($key, $value);
}
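For context, a Redis-backed implementation of this interface could look roughly like this (just a sketch, assuming the phpredis extension; keys are flattened to strings because the interface allows string|array|int, and error handling is omitted):

class RedisCacheService implements CacheServiceInterface
{
    private Redis $redis;

    public function __construct(Redis $redis)
    {
        $this->redis = $redis;
    }

    public function clearAll()
    {
        // Wipes the whole Redis database used for the cache
        $this->redis->flushDB();
    }

    public function get($key)
    {
        $value = $this->redis->get($this->normalizeKey($key));
        return $value === false ? null : unserialize($value);
    }

    public function hasKey($key): bool
    {
        return (bool) $this->redis->exists($this->normalizeKey($key));
    }

    public function remove($key)
    {
        $this->redis->del($this->normalizeKey($key));
    }

    public function set($key, $value)
    {
        $this->redis->set($this->normalizeKey($key), serialize($value));
    }

    /** The interface allows string|array|int keys, so flatten them to a string. */
    private function normalizeKey($key): string
    {
        return is_array($key) ? md5(serialize($key)) : (string) $key;
    }
}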
I found a way to replace Loaders\FileLoader with my own implementation of Latte\Loader (a rough sketch is below), but I think the filesystem cache is hard-coded into Latte\Engine in several private methods.
Does anybody have a solution that doesn't require rewriting Latte\Engine?
Thanks!
I'm using Latte standalone.
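To show what I mean by the loader part, here is a stripped-down sketch of a Redis-backed loader (not my final code; it assumes the phpredis extension, and the exact interface signatures may differ between Latte versions):

// Sketch only: template sources are read from Redis by name instead of from disk.
// The interface is named Latte\ILoader in Latte 2.x and Latte\Loader in Latte 3.
class RedisLoader implements Latte\Loader
{
    private Redis $redis;

    public function __construct(Redis $redis)
    {
        $this->redis = $redis;
    }

    /** Returns the template source stored under the given name. */
    public function getContent($name): string
    {
        $content = $this->redis->get('tpl:' . $name);
        if ($content === false) {
            throw new RuntimeException("Template '$name' not found in Redis.");
        }
        return $content;
    }

    /** No mtime is tracked here, so templates are treated as never expired. */
    public function isExpired($name, $time): bool
    {
        return false;
    }

    /** Resolves names used in {include}/{layout} relative to the referring template. */
    public function getReferredName($name, $referringName): string
    {
        return $name;
    }

    /** Unique identifier Latte uses to derive the compiled template's cache key. */
    public function getUniqueId($name): string
    {
        return 'redis:' . $name;
    }
}

It is then plugged in with $latte->setLoader(new RedisLoader($redis)); the compiled PHP files, however, still end up on the filesystem, which is exactly my problem.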
Last edited by Nassim (2020-07-18 16:47)
- David Grudl
- Nette Core | 8218
I don't want to add support for other cache storages to Latte, because nothing else is as fast as file storage + opcode cache.
- Nassim
- Member | 4
Ok @DavidGrudl!
I'm not really sure that file storage is the fastest choice compared to RAM storage. To be benchmarked :)
Besides, how do you keep the template directory from growing huge?
Redis, for example, offers an LRU eviction policy (Least Recently Used), which is very useful for this kind of usage.
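For example, capping Redis memory and enabling LRU eviction takes two configuration values (shown here via CONFIG SET through phpredis; the equivalent redis.conf directives are maxmemory and maxmemory-policy):

// Limit the cache to 256 MB and evict the least recently used keys when full
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->config('SET', 'maxmemory', '256mb');
$redis->config('SET', 'maxmemory-policy', 'allkeys-lru');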
For now, I've found a way to implement caching (Russian doll caching) in templates using a macro, but the template files are still stored on the filesystem.
I'll post my implementation and the macro here soon for anyone who wants it.
- jiri.pudil
- Nette Blogger | 1029
Latte templates are compiled into PHP code and stored as PHP files. So while the filesystem is definitely not the fastest cache storage on its own, it's opcache that makes the difference: with opcache, the template's PHP code is further precompiled into bytecode (thus it doesn't have to be interpreted every time it is executed), which is then stored in shared memory.
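If you want to check whether OPcache is actually picking up the compiled templates, the standard OPcache functions will tell you (the file path below is just a made-up example of a compiled template in Latte's temp directory):

// opcache_get_status() returns false when OPcache is disabled entirely
$status = opcache_get_status(false);
if ($status !== false) {
    var_dump($status['opcache_enabled'], $status['memory_usage']);
}

// Has this particular compiled template already been cached in shared memory?
$compiledTemplate = '/path/to/temp/cache/latte/homepage.latte--abc123.php';
var_dump(opcache_is_script_cached($compiledTemplate));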
I'm looking forward to your benchmarks though, maybe you'll find something faster :)