New changes:
1. Use multiple data sources (PDF, text, and audio files) from the insurance domain for the chat-with-PDF demo.
2. Change the title as suggested in the mail: "Teradata Enterprise Vector Store: Vectorizing PDFs".
3. For chunking the PDF text, can you do in-DB STO with Python? It seems complex and time-consuming (a simple client-side fallback is sketched after this list).
4. Use HF models to create the embeddings via the BYOM approach (parallel CPU inferencing); see the embedding sketch after this list.
5. Use a third-party LLM (OpenAI/Bedrock/Gemini) for the final answer (see the answer-generation sketch below).
6. You will also have to use the HF model for the question --> embedding step.
7. Also add a visualization (embeddings reduced to 2D) that shows the selected chunk for each question. A scatter plot showing all chunks, the question, and the selected chunk could work well (see the plotting sketch below).
8. Store the PDFs in an object store or in a Vantage table (pointing to the object store).
9. No need to add a chat UI. Create pre-defined questions in a dropdown, and answer based on the selected question (see the dropdown sketch below).
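If the in-DB STO route from item 3 turns out to be too complex for the demo, a plain client-side helper is enough to produce chunks. A minimal sketch, assuming simple word-based splitting with arbitrary chunk-size and overlap values:

```python
# Hypothetical client-side chunking helper; chunk size and overlap are assumptions.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split extracted PDF text into overlapping word-based chunks."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
    return chunks
```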
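For items 4 and 6, the key point is that the chunks and the question must be embedded with the same HF model so their vectors live in one space. A minimal local sketch using sentence-transformers; the model name and sample texts are assumptions, and in the actual demo the chunk embeddings would come from the in-DB BYOM path (parallel CPU inferencing) rather than this local call:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Model name is an assumption; any small HF sentence-embedding model would do.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder chunks; the real ones come from the chunked insurance PDFs.
chunks = [
    "Water damage from burst pipes is covered up to the policy limit.",
    "Claims must be filed within 30 days of the incident.",
    "Pre-existing conditions are excluded from coverage.",
]
chunk_embeddings = model.encode(chunks, normalize_embeddings=True)

# The question goes through the same HF model (item 6).
question = "What does the policy cover for water damage?"  # placeholder question
question_embedding = model.encode([question], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalized vectors.
scores = chunk_embeddings @ question_embedding
best_idx = int(np.argmax(scores))
selected_chunk = chunks[best_idx]
```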
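For item 5, the selected chunk becomes the context for a third-party LLM. A sketch continuing from the embedding example above (it reuses `selected_chunk` and `question`), using the OpenAI client; the model name is an assumption, and a Bedrock or Gemini client would follow the same pattern:

```python
from openai import OpenAI  # could equally be a Bedrock or Gemini client

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{selected_chunk}\n\n"
    f"Question: {question}"
)

# Model name is an assumption; use whichever third-party LLM is available.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```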
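For item 7, any 2D projection of the embeddings is enough for the scatter plot. A sketch that reuses `chunk_embeddings`, `question_embedding`, and `best_idx` from the embedding example; PCA is chosen purely for simplicity (t-SNE or UMAP would also work):

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA

# Project chunk and question embeddings into 2D together.
all_points = np.vstack([chunk_embeddings, question_embedding])
points_2d = PCA(n_components=2).fit_transform(all_points)
chunk_2d, question_2d = points_2d[:-1], points_2d[-1]

plt.scatter(chunk_2d[:, 0], chunk_2d[:, 1], alpha=0.6, label="chunks")
plt.scatter(question_2d[0], question_2d[1], marker="*", s=200, label="question")
plt.scatter(chunk_2d[best_idx, 0], chunk_2d[best_idx, 1],
            marker="x", s=150, color="red", label="selected chunk")
plt.legend()
plt.title("Chunks, question, and selected chunk (2D projection)")
plt.show()
```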
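For item 9, if the demo runs in a notebook, an ipywidgets dropdown of pre-defined questions would be enough instead of a chat UI. A sketch; the notebook front end and the question list are assumptions:

```python
import ipywidgets as widgets
from IPython.display import display

# Hypothetical pre-defined questions; the real list would come from the insurance PDFs.
questions = [
    "What does the policy cover for water damage?",
    "What is the claim filing deadline?",
    "Are pre-existing conditions excluded?",
]

dropdown = widgets.Dropdown(options=questions, description="Question:")

def on_select(change):
    # The retrieval + LLM answer pipeline would run here for the selected question.
    print(f"Selected: {change['new']}")

dropdown.observe(on_select, names="value")
display(dropdown)
```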
PR #752