v1.0 API #73
Is it possible to make this configurable? I don't want …

What exactly do you mean?

I think …
@medikoo would you need some help on this? I can help a bit, particularly on the LRU part. I noticed the use of plain objects and loops, while Maps and linked lists may be better suited for the task.
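For illustration, the Map-based approach mentioned here can be sketched as follows. A `Map` preserves insertion order, which gives O(1) recency updates and O(1) access to the eviction candidate. All names here are illustrative, not memoizee's internals:

```javascript
// Minimal LRU sketch on top of Map: re-inserting a key on every hit
// moves it to the end of the iteration order, so the first key is
// always the least recently used one.
class LruSketch {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map();
  }
  // Registers a hit for `key`; returns the evicted key, or null.
  hit(key) {
    if (this.map.has(key)) this.map.delete(key); // refresh recency
    this.map.set(key, true);
    if (this.map.size > this.limit) {
      // Oldest entry is the first key in iteration order.
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
      return oldest;
    }
    return null;
  }
}
```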
@fazouane-marouane great, thanks for that initiative! However, the situation is a bit difficult, as this new version is about a complete rewrite. Anyway, you mentioned you can help a bit on the LRU part. Currently our LRU handling depends on the lru-queue package, which, as I tested it (a longer while ago), was not as efficient as e.g. the other popular library, lru-cache.
@medikoo a quick update on the subject: we'll soon have an 18x speedup on integer keys and between 2.5x and 5x on string keys. It'll be a drop-in replacement for the current lru-queue implementation. I'll propose a pull request soon. I'll test https://www.npmjs.com/package/hashtable first, as I suspect it'll bring even more speedup for string keys.
@medikoo Will the …

@joanned Unfortunately it's not available now, and it's hard to sneak it into the current version due to its design limitations. It's scheduled for v1.

Is this not going ahead? It seems like this project is complete and not planning on this new v1 direction.

@andymac4182 It's still in the plans, but as I otherwise have a full-time job, and other things have higher priority, I simply do not find time to handle it. It's possible that later this year I will have some time for it, but again, nothing is certain :)

That is good to hear :) I totally understand the full-time job part; I have the same time constraints :) Is it worth spinning up a v1 branch so people can contribute to the direction of v1 and you don't need to do it all?

@andymac4182 It's a good question. I see v1 as a complete rewrite, so it's hard to just set up an empty branch and tell users to continue. Also, I have some ideas on how to tackle it, so I was thinking of at least starting this work and then eventually letting others follow up with proper guidance.
No worries :) Happy to help where possible. |
A live (updated in place) proposal for the v1.0 API:

The signature of the main memoizee function will remain the same: `memoizee(fn[, options])`

Supported `options`:

### `contextMode`

Possible values:

- `'function'` (default): target of memoization is a regular function
- `'method'`: target of memoization is a method
- `'weak'`: target of memoization is a regular function which takes an object as its first argument, and we don't want to lock those objects from GC

### `resolutionMode`

Possible values:

- `'sync'` (default for non-native async functions): target of memoization is a synchronous function
- `'callback'`: target of memoization is a Node.js-style asynchronous, callback-taking function
- `'async'` (forced for native async functions): target of memoization is an asynchronous function that returns a promise. ES2017 async functions will be detected automatically (setting this option to any other value will have no effect)
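As a side note, auto-detection of native async functions is possible in plain JavaScript because they share a distinct constructor. This is a general technique, not memoizee's internal code, and it will not recognize async functions transpiled to plain functions:

```javascript
// Obtain the (non-global) AsyncFunction constructor from an async
// function literal, then test candidates against it.
const AsyncFunction = Object.getPrototypeOf(async function () {}).constructor;

function isNativeAsync(fn) {
  return fn instanceof AsyncFunction;
}
```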
### `serialize`

- `null` (default): cache ids are resolved against object/value instances directly, therefore the cache cannot be persisted in a physical layer. Still, O(1) time complexity will be ensured within the cache id resolution algorithm (this is not the case right now, in the equivalent object mode)
- `true`: cache ids are resolved against serialized values (e.g. two different plain objects of exactly the same structure will map to the same cache id). This mode will allow persisting the cache in a physical layer between process runs. The default serialization function will be a smarter version of `JSON.stringify`
- `<function> serialize(value)`: a custom value serializer. Whether it'll be persistence friendly will be up to the developer
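A minimal sketch of the `serialize: true` idea, using plain `JSON.stringify` for brevity (the "smarter version" mentioned above would presumably address `JSON.stringify`'s limitations, such as key-order sensitivity and unsupported values; names here are illustrative):

```javascript
// Memoize by serializing the arguments: structurally equal plain
// objects resolve to the same cache id, so they share one entry.
function memoizeSerialized(fn) {
  const cache = new Map();
  return (...args) => {
    const id = JSON.stringify(args);
    if (!cache.has(id)) cache.set(id, fn(...args));
    return cache.get(id);
  };
}
```

Because the ids are plain strings, a cache keyed this way can also be written to disk and reloaded, which is what makes persistence possible in this mode.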
### `length`

Will work nearly exactly the same as in the current version. One difference: the dynamic-length intention will have to be indicated through `-1` and not `false`.
### `normalizers`

Argument normalizers; it's what's represented now by `resolvers`, but otherwise it will work exactly the same.
### `ttl` (previously `maxAge`)

Will represent the same feature as in the current version, with the following changes and improvements:

- `maxAge: 0` should be supported in the case of `async` or `callback`: it will prevent multiple in-flight requests for the same result, but otherwise no caching is implied (see "`max: 0` doesn't disable cache" #145)
- With a `resolutionMode` of `'async'` or `'callback'`, it'll come with prefetch functionality, which could be customized via the following options passed with the `ttl` option (e.g. `ttl: { value: 3600, prefetchSpan: 0.5 }`):
  - `prefetchSpan` (default: `0.3`). Assuming `S` represents the last time the result was requested and cached, and `E` represents the time when the currently cached value becomes invalidated (is reached by the TTL setting): if an invocation occurs in the period between `E - (E - S) * prefetchSpan` and `E`, the cached result is returned, but behind the scenes it is refreshed with an updated value.
  - `recoverySpan` (default: `0.3`). If an invocation occurs in the period between `E` and `E + (E - S) * recoverySpan`, and the request for the updated result value fails, then we return the previously cached (stale) result.

Additionally: there should be no `setTimeout` calls that invalidate the values, as that's an inefficient approach. Cached values should be invalidated at the moment of a subsequent invocation where we discover the cached value is stale (and we should not worry about an eventually large number of stale values being stored; that can be tuned with the other `max` option).
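The prefetch/recovery window arithmetic above can be made concrete with a small sketch. `S` and `E` are as defined in the proposal; the function name and return labels are illustrative only:

```javascript
// Classify an invocation time relative to the cached value's
// lifetime [S, E] and the prefetch/recovery spans (fractions of ttl).
function classify(now, S, E, prefetchSpan = 0.3, recoverySpan = 0.3) {
  const ttl = E - S;
  if (now < E - ttl * prefetchSpan) return "fresh";
  // Serve the cached value, but refresh it in the background.
  if (now <= E) return "prefetch";
  // Past expiry: if the refresh fails, fall back to the stale value.
  if (now <= E + ttl * recoverySpan) return "recovery";
  return "expired";
}
```

For example, with `S = 0` and `E = 100`, the defaults give a prefetch window of [70, 100] and a recovery window of (100, 130].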
### `max`

Will work the same way as now. Still, the performance of `lru-queue` will have to be revised; we should not drag behind `lru-cache`.

Additionally, in the async case, the setting should take effect at invocation, and not at resolution as it does currently (see #131).
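The `maxAge: 0` behavior described under `ttl` above (sharing in-flight requests without caching settled results) can be sketched as follows; the helper name is hypothetical, not memoizee's API:

```javascript
// Deduplicate concurrent calls: while a request for `key` is pending,
// every caller gets the same promise; once it settles, the entry is
// dropped, so nothing is cached afterwards.
function dedupeInFlight(fn) {
  const pending = new Map();
  return (key) => {
    if (pending.has(key)) return pending.get(key);
    const promise = fn(key).finally(() => pending.delete(key));
    pending.set(key, promise);
    return promise;
  };
}
```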
### `refCounter`

Will work the same way as it does now.
## Memoize configuration objects

Each memoized function will expose a memoization object, which will provide access to events and methods that allow accessing and operating on the cache manually.

It will be either an instance of `Memoizee` (exposed on the `memoizedFn.memoizee` property) or an instance of `MemoizeeFactory` (exposed on the `memoizedFn.memoizeeFactory` property).
### `Memoizee`

Its instance will be exposed on `memoizedFn.memoizee` when memoization is configured with the `'function'` `contextMode` (that's the default).

#### Methods

- `getId(...args)` - Resolve the cache id for the given args
- `has(id)` - Whether we have a cached value for the given cache id
- `get(id)` - Get the value for the given cache id
- `set(id, result)` - Cache a value for the given cache id
- `delete(id)` - Delete the value for the given cache id
- `clear()` - Clear the cache
- `forEach(cb)` - Iterate over all cached values (alternatively, some other means of access to the full cached object can be provided)

#### Events

- `hit(id, args)` - On any memoized function invocation
- `set(id, args, result)` - When a result value is cached
- `purge(id, result)` - When a value is removed from the cache (users of the `dispose` option will now have to rely on this event)
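To make the proposed method surface concrete, here is a hypothetical sketch of such a cache object. The id resolution is a naive join, purely for illustration; none of this is the actual v1 implementation:

```javascript
// Toy stand-in for the proposed Memoizee cache object: same method
// names, trivially backed by a Map.
class MemoizeeSketch {
  constructor() { this.cache = new Map(); }
  // Join args with an unlikely separator to form a string id.
  getId(...args) { return args.join("\u0001"); }
  has(id) { return this.cache.has(id); }
  get(id) { return this.cache.get(id); }
  set(id, result) { this.cache.set(id, result); return result; }
  delete(id) { return this.cache.delete(id); }
  clear() { this.cache.clear(); }
}
```

Manual cache manipulation would then look like: resolve an id via `getId`, then `set`/`get`/`delete` against it.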
### `MemoizeeFactory`

Its instance will be exposed on `memoizedFn.memoizeeFactory` when memoization is configured with the `'weak'` or `'method'` `contextMode`.

It will produce different `Memoizee` instances; e.g. in the case of `'method'`, for each different context a different `Memoizee` instance will be created. The same applies with the `'weak'` `contextMode` when `length > 1`. In the case of `length === 1` and `'weak'`, there'll either be another dedicated class, or the `MemoizeeFactory` instance will not produce any `Memoizee` instances (it will just handle context objects).

#### Methods

In the methods below, a value can be: a memoized method (in the case of `'method'`), a `Memoizee` instance (in the case of `'weak'` with `length > 1`), or a cached value (in the case of `'weak'` and `length === 1`).

- `has(context)` - Whether we have a value initialized for the given context
- `get(context)` - Get the value for the given context (creating it if not yet created)
- `delete(context)` - Delete the value for the given context
- `clear()` - Clear the cache (as we do not store already handled contexts, it clears the cache for an already visited context only if that context is processed again)

There will be no means to iterate over all contexts for which values have been resolved, as we will not keep handles to processed contexts in a factory (this is to avoid blocking them from GC).
#### Events

- `set(context, value)` - Initialization of the value for the given context
- `purge(context, result)` - When the value for a context is cleared (not invoked for `clear()`)

This is just a rough proposal; it's important that performance is at least maintained and at best improved (where possible). Therefore some deviations from the above are possible.
It might be good to also consider:

- `resolutionMode: 'sync', length: 0`
- A `primitive` version or serializer, to support the fastest possible memoization, based on ids resolved purely via stringification of arguments
- A `@memoizee` decorator that is ES draft compliant and can be used to memoize class methods