The author further criticizes the idea of using bots to post substance-free niceties, suggesting it would be more effective to provide examples of respectful behavior in a sticky post. The author also questions why local LLMs would be trusted over OpenAI's LLMs, and requests specific examples of what the bots would say and the effect they would have on the subreddit, expressing overall bafflement at the plan and its reasoning.
Key takeaways:
- The author is confused about the idea of using bots to set a positive tone in a subreddit, questioning the effectiveness and purpose of such a strategy.
- The author expresses skepticism about the concern of OpenAI going rogue, arguing that a deliberately rogue model is unlikely and that hallucinations which accidentally disobey instructions are the more realistic risk.
- The author questions the need for a backup plan in case OpenAI goes rogue, suggesting that the subreddit failing under bot spam is the more pressing concern.
- The author is critical of the idea of using bots to filter out negative words, arguing that it is misguided to trust local language models more than OpenAI's current models.