Thinking about this implementation we are working on: I am trying to stay hands-off until help is needed. I can't say much about it; I take confidentiality very seriously, especially among people I know.
There are a lot of techniques I can see being used and reused to make LLMs more efficient. For every problem, there are many ways to slice and dice it.
Rounding back to the problem at hand: it is mostly a categorization problem, but with a twist. We are using an AI algorithm to try to predict trends from the data we have at hand. I think this is the beginning of something that will be normalized later.

I have always wanted every keystroke on my personal machine logged and categorized, and then, when I go to bed, a RAG-style pipeline would run to "update" my personal AI. At some point it should be able to respond like me and hold a store of what I would do. Maybe that is a good business to start with the children of today: we would have digital robotic replacements of every action a human takes in their lifetime, enabling the living to transition to a trained version of a person if they so chose. My mind goes to scary places; the repercussions of that will take me a few days to think through.
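To make the nightly "update" idea concrete, here is a minimal sketch. Everything in it is an assumption for illustration: the plain-text keystroke log (one entry per line), the sentence-transformers embedding model, and the FAISS index are stand-ins, not anything from the implementation I mentioned above.

```python
# Minimal sketch of a nightly "update my personal AI" job.
# Hypothetical throughout: the keystroke log format, the embedding
# model, and the file paths are all stand-ins for illustration.
import json
from pathlib import Path

import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

def nightly_update(log_path: str, index_path: str = "personal_ai.faiss") -> None:
    """Embed today's log entries and fold them into the vector store."""
    entries = [ln.strip() for ln in Path(log_path).read_text().splitlines() if ln.strip()]
    if not entries:
        return
    vectors = model.encode(entries, convert_to_numpy=True).astype("float32")
    faiss.normalize_L2(vectors)  # normalize so inner product acts as cosine similarity
    texts_path = Path(index_path + ".texts.json")
    if Path(index_path).exists():
        index = faiss.read_index(index_path)         # resume yesterday's index
        texts = json.loads(texts_path.read_text())
    else:
        index = faiss.IndexFlatIP(vectors.shape[1])  # fresh inner-product index
        texts = []
    index.add(vectors)
    texts.extend(entries)
    faiss.write_index(index, index_path)
    texts_path.write_text(json.dumps(texts))         # FAISS stores vectors only

def ask_my_past_self(question: str, index_path: str = "personal_ai.faiss", k: int = 5):
    """Return the k logged entries most similar to the question."""
    index = faiss.read_index(index_path)
    texts = json.loads(Path(index_path + ".texts.json").read_text())
    q = model.encode([question], convert_to_numpy=True).astype("float32")
    faiss.normalize_L2(q)
    _, ids = index.search(q, k)
    return [texts[i] for i in ids[0] if i != -1]     # -1 means "no match found"
```

The sidecar JSON file is there because FAISS only stores vectors; the original text has to be persisted somewhere else for retrieval to return anything readable.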
The actual project takes things a few steps further, building on a published research paper about predicting trends.
One of the approaches is chunking the data. I have been thinking about this for a while, and I am not sure it is the best way to do it.
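For what it's worth, here is a minimal sketch of the kind of chunking I mean: fixed-size windows with an overlap so context is not lost at the boundaries. The sizes are arbitrary placeholders, not values from our implementation; whether fixed windows beat sentence- or paragraph-aware chunking is exactly the part I am unsure about.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character windows with overlap.

    chunk_size and overlap are arbitrary placeholders; real values
    would be tuned to the embedding model's context window.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # slide forward by less than a full window
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the final window reached the end of the text
    return chunks

if __name__ == "__main__":
    sample = "word " * 300
    pieces = chunk_text(sample, chunk_size=120, overlap=20)
    print(len(pieces), "chunks;", repr(pieces[0][:40]))
```

The overlap is the usual hedge against splitting a sentence across a boundary; it costs some duplicated storage in exchange for not losing context at the seams.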