std::vector<Route> routes = {
    {
        "/v1/chat/completions",
        HttpMethod::METHOD_POST,
        std::bind(&handleCompletionsRequest, std::placeholders::_1, &api)
    },
    {
        "/v1/models",
        HttpMethod::METHOD_GET,
        std::bind(&handleModelsRequest, std::placeholders::_1)
    }
};
In the ddlama API on the master branch we have only two routes, /v1/chat/completions and /v1/models, but some models, like llama3:8b, have embedding functionality. Can you add a new route for /api/embeddings?
Hello @testing0mon21, I'm not too familiar with embeddings, but if I see correctly, llama.cpp supports them. This is not a priority for me, but contributions are welcome.