Abstract
Natural language processing based on large language models (LLMs) is a booming field of AI research. Now that neural networks have proven able to outperform humans in games and in practical domains based on pattern recognition, we may stand at a junction where artificial entities finally enter the realm of human communication. However, this comes with serious risks. Given the inherent limitations of neural networks with respect to reliability, overreliance on LLMs can have disruptive consequences. Since it will become increasingly difficult to distinguish human-written from machine-generated text, we are confronted with new ethical challenges. These begin with human authorship no longer being verifiable beyond doubt and extend to various forms of fraud, such as a new kind of plagiarism. Further concerns include the violation of privacy rights, the possibility of circulating counterfeits of humans, and, not least, the potential for a massive spread of misinformation.