The emergence of AI (Artificial Intelligence) and AGI (Artificial General Intelligence) is a topic fraught with controversy. The highest capability of the human mind lies in its ability to bridge the past, present, and future, enabling us to make decisions that could lead humanity towards a better future. However, if a Large Language Model (LLM) were to be trained on the entirety of human knowledge, encompassing all forms of literature, from spiritual texts to technical and scientific works, the outcomes would be highly unpredictable.

Such a model would gain a kind of awareness, and given the inherent flaws and recurring patterns of human nature, it might chart a path towards a “better” future that defies easy explanation. In the process it could unleash chaos, a route some great minds in history have already taken. Paradoxically, precisely because it understands the potential consequences, it might bring about the destruction of the very civilization and peace it aims to champion. Creating machines in the likeness of the human mind, as the God Emperor warned, could have perilous effects.

Creating large language models (LLMs) also raises questions about the ownership of knowledge. Many individuals who have made significant contributions to their fields have placed a price tag on their expertise; knowledge is not given freely. Would the builders of such a model be willing to purchase the rights to all the knowledge in the world in order to train it? Would copyright holders consent to a model that reproduces their formulas and theorems without permission? The issue of knowledge appropriation looms large, and the path forward is uncertain.

In essence, the road to AI and AGI is lined with challenges and ethical dilemmas. Unless a consensus is reached among all stakeholders, the development and deployment of such advanced AI systems is likely to face significant opposition.