Replies: 1 comment
You would need historical data to train on; otherwise the model has no idea what sentiment is or how it tends to affect teams or game outcomes. You could rate the news yourself, or have an LLM produce a score and feed it to the model as a numeric feature, but you would need accurate scores for the historical training games, which is roughly 20k games. LLMs work by next-token prediction: if you ask one who will win tonight's game, it replies with what it thinks is the most likely reply. That is different from the most likely outcome, or an actual prediction of the game; it only predicted what to say, not what will happen. Fundamentally, you can ask an LLM whatever question you want, and it can only respond with the tokens it predicts should come next.
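As a rough illustration of what "give it to the model as a feature" means, here is a minimal sketch that treats an LLM sentiment score as one more numeric column for an XGBoost classifier. The data is synthetic and the column names and hyperparameters are placeholders, not this project's actual schema:

```python
# Minimal sketch: the LLM sentiment score is just one more numeric column in
# the feature matrix XGBoost trains on. Data here is synthetic; column names
# and hyperparameters are illustrative, not this project's schema.
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_games = 20_000  # roughly the size of the historical training set mentioned above

games = pd.DataFrame({
    "home_elo": rng.normal(1500, 100, n_games),
    "away_elo": rng.normal(1500, 100, n_games),
    "rest_days": rng.integers(0, 5, n_games),
    "llm_sentiment": rng.uniform(-1, 1, n_games),  # the extra continuous input
})
# Synthetic label so the example runs end to end.
games["home_win"] = ((games["home_elo"] - games["away_elo"]) / 400
                     + 0.2 * games["llm_sentiment"]
                     + rng.normal(0, 0.5, n_games) > 0).astype(int)

X = games.drop(columns="home_win")
y = games["home_win"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The only hard requirement is that the sentiment score exists for every historical game in the training set, which is where the "you need historical data" point comes in.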
Could you streamline the parameters so additional data can be passed to the XGBoost model? For example, I have an LLM producing a sentiment score on news reports from bleacherreport between games for a team (one additional continuous input). I have done this privately, but I think many of your users would also benefit from this schema.
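A toy sketch of the schema I have in mind, with made-up data; the team codes, column names, and the mean-aggregation step are illustrative assumptions, not this repo's code:

```python
# Toy sketch of the proposed schema: one continuous sentiment value per team
# between games, aggregated and joined onto the game rows as extra numeric
# columns the existing model could consume.
import pandas as pd

# Pretend these scores already came from an LLM reading the news articles.
articles = pd.DataFrame({
    "team": ["BOS", "BOS", "LAL"],
    "game_date": ["2024-01-10", "2024-01-10", "2024-01-10"],
    "llm_sentiment": [0.6, 0.2, -0.4],   # already mapped to [-1, 1]
})
games = pd.DataFrame({
    "home_team": ["BOS"],
    "away_team": ["LAL"],
    "game_date": ["2024-01-10"],
})

# Average multiple articles into one score per team per game date.
per_team = articles.groupby(["team", "game_date"], as_index=False)["llm_sentiment"].mean()

# Join home and away sentiment as two numeric feature columns; neutral (0.0)
# when a team has no news before that game.
games = (games
         .merge(per_team.rename(columns={"team": "home_team",
                                         "llm_sentiment": "home_sentiment"}),
                on=["home_team", "game_date"], how="left")
         .merge(per_team.rename(columns={"team": "away_team",
                                         "llm_sentiment": "away_sentiment"}),
                on=["away_team", "game_date"], how="left")
         .fillna({"home_sentiment": 0.0, "away_sentiment": 0.0}))

print(games)
```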