Add async thread pool for generating diskann cache and catch unexpected return. #226
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: cqy123456.
Force-pushed from cd3637e to 1c4a180
Force-pushed from d83e056 to dd61b4b
knowhere/common/ThreadPool.cpp (Outdated)

```cpp
std::shared_ptr<ThreadPool>
ThreadPool::GetGlobalAsyncThreadPool() {
    if (global_thread_pool_size_ == 0) {
        std::lock_guard<std::mutex> lock(global_thread_pool_mutex_);
        if (global_thread_pool_size_ == 0) {
            global_thread_pool_size_ = std::thread::hardware_concurrency();
        }
    }
    uint32_t async_thread_pool_size = int(std::ceil(global_thread_pool_size_ / 2.0));
    LOG_KNOWHERE_WARNING_ << "async thread pool size with thread number:" << async_thread_pool_size;
    static auto async_pool = std::make_shared<ThreadPool>(async_thread_pool_size);
    return async_pool;
}
```
This is not for general-purpose use; we need to make a static instance in DiskANN instead of here.
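The reviewer's suggestion could look roughly like the sketch below: a lazily initialized, DiskANN-scoped accessor that shares one async pool across segments, sized to half the hardware threads (rounded up) as in the snippet above. The `ThreadPool` stand-in, the function name `GetDiskANNAsyncThreadPool`, and the zero-fallback for `hardware_concurrency()` are assumptions for illustration, not the repository's actual code.

```cpp
#include <cmath>
#include <cstdint>
#include <memory>
#include <thread>

// Minimal stand-in for knowhere's ThreadPool; it only records its size here.
// The real class would dispatch submitted tasks to worker threads.
class ThreadPool {
 public:
    explicit ThreadPool(uint32_t size) : size_(size) {}
    uint32_t
    size() const {
        return size_;
    }

 private:
    uint32_t size_;
};

// Hypothetical DiskANN-scoped accessor, per the review comment: the static
// lives here rather than as a general-purpose global in ThreadPool itself.
// C++11 guarantees the static local is initialized exactly once, thread-safely.
std::shared_ptr<ThreadPool>
GetDiskANNAsyncThreadPool() {
    static auto pool = [] {
        uint32_t hw = std::thread::hardware_concurrency();
        if (hw == 0) {
            hw = 1;  // hardware_concurrency() may legally return 0
        }
        // Half the hardware threads, rounded up, matching the PR's sizing.
        auto n = static_cast<uint32_t>(std::ceil(hw / 2.0));
        return std::make_shared<ThreadPool>(n);
    }();
    return pool;
}
```

Every caller receives the same `shared_ptr`, so concurrent segment loads share one bounded pool instead of each spawning threads.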
Force-pushed from 19a5de0 to 90c9ada
…ed return. Signed-off-by: cqy123456 <[email protected]>
Force-pushed from 90c9ada to f6cfc64
/lgtm
/kind improvement
Note: This PR addresses the slow loading time for DiskANN. It lets segments use a shared global thread pool to build the cache.