
llama3 #44

Open

gabyavra opened this issue Apr 21, 2024 · 3 comments

Comments

@gabyavra

I'm trying to use it with llama3, but it does not reply to my messages.

[Screenshot 2024-04-21 at 18:13:27]

Any idea how to debug this?

Also, when I try to run it from Docker I get an error (even though there is an .env file with the API key for Telegram):

[screenshot]

Thank you.

@gabyavra (Author)

It eventually worked with llama3, but I had left llama-2 in the .env file.
Maybe a try/catch would be useful here:

[Screenshots: IMG_4250, IMG_4251, IMG_4252]

@lnrssll commented Apr 30, 2024

The error you're getting in the message is already the result of a try-catch:

ollama-telegram/bot/run.py

Lines 256 to 261 in 279fac5

except Exception as e:
    await bot.send_message(
        chat_id=message.chat.id,
        text=f"""Error occurred\n```\n{traceback.format_exc()}\n```""",
        parse_mode=ParseMode.MARKDOWN_V2,
    )

The model response is streamed to the user, and the chunks are assembled into the message response. You likely received a complete response in addition to the error message, which was caught on a single chunk.
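For context, here is a minimal sketch of how a streamed Ollama response is typically assembled chunk by chunk. The endpoint and JSON fields follow Ollama's documented /api/generate streaming format; the URL, model name, and function name are illustrative assumptions, not the bot's actual code:

import asyncio
import json

import aiohttp

async def stream_generate(prompt: str, model: str = "llama3") -> str:
    # Sketch only: an error raised while handling any single chunk can
    # surface even though earlier chunks already reached the user.
    parts = []
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://localhost:11434/api/generate",  # default Ollama address (assumed)
            json={"model": model, "prompt": prompt, "stream": True},
        ) as resp:
            async for line in resp.content:  # newline-delimited JSON chunks
                chunk = json.loads(line)
                parts.append(chunk.get("response", ""))
                if chunk.get("done"):
                    break
    return "".join(parts)

# asyncio.run(stream_generate("Why is the sky blue?"))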

If you don't want to see error messages in the Telegram chat, you should be able to replace lines 257-261 with a log or print statement (anything other than a message) and restart your container.
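As a rough sketch of that change, assuming the surrounding handler structure matches the excerpt above (handle_message is a placeholder name, not the repo's actual function), the except-block could log server-side instead of messaging the chat:

import logging
import traceback

logger = logging.getLogger(__name__)

async def handle_message(bot, message):  # placeholder handler name
    try:
        ...  # existing streaming / reply logic from run.py
    except Exception:
        # Keep the traceback out of the Telegram chat; log it
        # server-side instead of calling bot.send_message(...).
        logger.error(
            "Error handling message in chat %s:\n%s",
            message.chat.id,
            traceback.format_exc(),
        )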

@gabyavra (Author)

The next step for this would be RAG support via Telegram :)
