The AI phase we are currently entering, which precedes the development of truly intelligent AGI, could mark the most precarious juncture in human history.

In contemplating the risks posed by artificial intelligence (AI), it is not the prospect of highly advanced, intelligent systems that should keep us up at night, but rather the less intelligent variants like current LLMs (Large Language Models) and their offspring. These systems can perform potentially hazardous tasks without the discernment to recognize the consequences. As LLMs become more powerful and more accessible, and as they are integrated into networks and other systems, the concern only grows. The real danger lies in malevolent actors exploiting LLMs to magnify their malicious intentions, leveraging skills and knowledge far beyond their individual capacity. Breaching an LLM's safeguards might prove trivial: simply use one LLM to trick another. With connectivity to the internet, financial systems, 3-D printers, and eventually autonomous robots, these less intelligent AIs could unknowingly wreak havoc on the many at the whim of the few.

As with most new technologies, we run at them headfirst, espousing all the potential benefits while giving far less consideration to the dangers. Ironically, even as we embrace "dumb" AI, we simultaneously fear that a truly intelligent general AI might someday wipe us out. The truth is that once we go down the path we are on, a superpowerful AGI might be the only thing that can save us.