The author is optimistic about the progress and maturity of these technologies and expects a clear reference stack to emerge within a few months. They are particularly interested in the assistant stack and want to maximize the return on the effort they have invested in self-hosting and feeding their data into the system. They are seeking feedback from others running a similar setup.
Key takeaways:
- The author is considering using an LLM daily and training it on personal data such as emails, notes, and chats.
- They plan to have the LLM draft replies, which they will edit as needed, so the LLM can learn from the corrections.
- They have downloaded the necessary open-source tools and have been experimenting with interactive chat and semantic search.
- They expect a clear reference stack to emerge within a few months and want to maximize the ROI of their self-hosted assistant stack.
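The post does not name the specific tools the author is experimenting with, but the semantic-search idea they mention can be sketched in miniature: embed each note as a vector, embed the query the same way, and rank notes by cosine similarity. The sketch below is a toy illustration that uses bag-of-words counts as a stand-in for the neural embedding model a real self-hosted stack would use; the function names and sample notes are hypothetical.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real stack would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, documents: list[str]) -> list[tuple[float, str]]:
    # Rank documents by similarity to the query, best match first.
    q = embed(query)
    return sorted(((cosine(q, embed(d)), d) for d in documents), reverse=True)

notes = [
    "Reply to Alice about the quarterly budget",
    "Grocery list: eggs, milk, coffee",
    "Draft email to Bob regarding the budget review",
]
for score, note in search("budget email", notes):
    print(f"{score:.2f}  {note}")
```

Swapping `embed` for a real sentence-embedding model (and a vector store for the linear scan) turns this toy into the semantic-search component of the stack the author describes.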