The author suggests a different approach: instead of trying to control the LLMs, focus on educating the people who use them and encouraging the two to work in tandem. This builds verification into the workflow itself and develops staff skills at the same time. The author warns that the industry's current approach may instead incentivize staff to sabotage their company's AI efforts.
Key takeaways:
- There is a misconception in software engineering that safe LLM usage requires extensive auditing and wrapper layers around every model call.
- Many believe that another human, or a chain of additional LLMs, is required to sanitize every output; that approach is slow and inefficient (see the sketch after this list).
- Instead of controlling the LLMs, educate the LLM users and have them work in tandem with the models.
- This approach not only builds verification into the work itself, it also elevates staff and prevents potential sabotage of the company's AI efforts.
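To make the contrast concrete, here is a minimal Python sketch under stated assumptions: `call_llm` is a hypothetical placeholder for whatever model API a team actually uses, not a real library call, and the second function is only one way the recommended co-working approach might look in practice.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model API call."""
    return f"<model output for: {prompt!r}>"


def sanitization_chain(task: str) -> str:
    """Pattern the author argues against: chain a second LLM to
    'sanitize' the first one's output before anyone sees it."""
    draft = call_llm(task)
    audited = call_llm(f"Audit this output for errors and policy issues:\n{draft}")
    return audited  # still unverified by a person, and twice as slow


def human_in_the_loop(task: str) -> str:
    """One possible shape of the recommended approach: a trained user
    reviews the draft directly, so verification is part of the workflow."""
    draft = call_llm(task)
    print("Model draft:\n", draft)
    verdict = input("Accept this output? [y/N] ").strip().lower()
    if verdict != "y":
        feedback = input("What should change? ")
        draft = call_llm(f"{task}\nRevise based on this feedback: {feedback}")
    return draft


if __name__ == "__main__":
    print(sanitization_chain("Summarize the quarterly report"))
    # human_in_the_loop("Summarize the quarterly report")  # interactive variant
```

The design difference is where verification lives: the first pattern spends extra model calls to simulate oversight, while the second puts the judgment of an educated user directly in the loop.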