The scope of AI is very broad. The LLM (Large Language Model) being talked about all over the world today is only one branch of it, not the whole. The boundaries of language are fuzzy. Craftsmen exploit this fuzziness to automatically generate many candidate words, themselves with fuzzy boundaries, for the context of a given topic, and then capture the right ones by assigning appropriate weights (parameters) in the right ratios. These words are enough to match or even exceed the scope of human imagination and consciousness, which amazes most people and makes current large language models appear more capable than humans themselves. That capability is a success. Yet allowing LLMs to develop without constraint or convergence may end up steering the direction of human thought. The author of the movie "Future World" never explained why the robots came to control the world; perhaps the reason is something like today's LLMs.
The traceable origin of fuzzy boundaries goes back four or five decades. At that time, computer screens everywhere were character terminals, not the graphical displays of today's computers; as hardware technology developed, graphical terminals began to appear. To remove the jagged artifacts that arise when the surfaces and lines of a continuous function are shown on a digital display, mathematicians introduced a variety of function transformations to compensate, the principle of quadratic surface interpolation being one of them. These brought improvement, but they still could not handle the jaggedness caused by enlarging or shrinking an image. In the end, craftsmen used fuzzy boundaries, a trick that pleases the eye, to cater to human vision. The empirical parameters of a small second-order matrix (the kernel used in convolution processing) are one example, and they have nothing to do with quadratic surfaces.
The boundaries of the 2D and 3D images we see today are produced by digitally processing the pixel gradients of blurred boundaries (also called convolution processing). These pixels and gradients can approximate various surfaces and lines that smooth out the jaggedness of the original boundary. From the position of each jagged step and of several adjacent ones, the algorithm determines which surface or line should connect them, so as to achieve an ideally smooth transition. These parameters are the earliest prototype of the empirical parameters of today's large language models, and the several adjacent jagged edges play the same role as a large language model's context. The processing of fuzzy boundaries thus began to serve humans successfully and was accepted.
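To make this concrete, here is a minimal sketch (not the historical implementation) of how a small convolution kernel softens a jagged edge into a graded transition; the 3x3 weights are illustrative empirical parameters of the kind described above, chosen to please the eye rather than derived from any quadratic surface.

```python
# Minimal sketch: smoothing a jagged binary edge with a small convolution
# kernel, the "second-order matrix" of empirical parameters mentioned above.
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution with edge padding (illustration only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

# A jagged diagonal edge: 1 = foreground, 0 = background.
jagged = np.tri(8, 8, dtype=float)

# Empirical 3x3 weights (roughly Gaussian); the exact numbers are chosen
# by eye, not derived from quadratic surfaces.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

smoothed = convolve2d(jagged, kernel)
print(np.round(smoothed, 2))  # hard 0/1 steps become graded transitions
```

The hard 0/1 staircase becomes a band of intermediate gray values, which is exactly the blurred boundary the eye reads as smooth.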
Later, NVIDIA built graphics cards for games, applied the fuzzy-boundary (empirical-parameter) technique described above across threads and processes that render multiple dynamic images concurrently, and developed GPUs, including the base layer and the interfaces it exposes to game development platforms. The fuzzy processing is encapsulated in that bottom layer and in the GPU itself. The 2D/3D boundaries of in-game images are blurred to create a pleasing visual effect, and that slight blur does nothing to diminish the player's enjoyment. The fuzzy boundary and its processing succeeded once again. Up to this point, however, fuzzy boundaries had only been applied to two- and three-dimensional space. Supported by the game market, the GPU market flourished and products kept upgrading, laying down a powerful mechanism for multi-dimensional concurrent processing, one well suited to executing neural networks.
The greater success, however, belongs to the OpenAI team, who understood the principle of the fuzzy processing used when digitizing continuous functions and generalized it to other fields with fuzzy boundaries. They adopted NVIDIA's GPUs and related platforms and applied fuzzy processing to language and video. The boundaryless character of language and video can be decomposed to create empirical parameters with fuzzy boundaries. A word, phrase, sentence, or action can carry multiple interpretations once it is associated with a context, and the weight ratios of the parameters influence which result the context yields. The fuzzy boundary of language is not like two- or three-dimensional space; it has n unordered dimensions. A simple example: a sentence with n contexts on a topic can be decomposed into m directions of meaning, each direction can be decomposed further, and so on. This reveals that an n-dimensional fuzzy boundary is something that does not exist in human consciousness. Craftsmen assign different weight ratios to the fuzzy boundaries, and different ratios produce different results. Therefore, with appropriate distributed processing (a neural network), humans can be offered not only visual effects but another, more interesting effect on culture and consciousness. What interests people most is precisely what lies beyond their imagination. Thus the LLM was born and began to develop. Before this, empirical parameters were obtained only by simple methods, but the combinations of language and vision are effectively infinite; to hold people's curiosity, more empirical parameters and more advanced methods are needed. The topology of convolution has moved beyond the earlier simple algorithms for two- and three-dimensional space and developed toward n dimensions. GPUs give neural networks the hardware environment for n-dimensional algorithms; what remains is only a matter of software methods, and so the LLM movement began. By now, fuzzy boundaries have been generalized, and craftsmen are trying to apply them in every field.
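As a toy illustration of how a weight ratio over contexts can decide the result, consider the sketch below; the word senses, context phrases, and tiny hand-made vectors are assumptions for demonstration only, not the parameters of any real model.

```python
# Hypothetical sketch of "weight ratios deciding the result of a context":
# an ambiguous word receives an interpretation by weighting candidate senses
# against the surrounding context.
import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Toy "empirical parameters": 3-dimensional vectors for two senses of "bank"
# and for two possible contexts (illustrative values only).
senses = {
    "bank/finance": np.array([0.9, 0.1, 0.0]),
    "bank/river":   np.array([0.0, 0.2, 0.9]),
}
contexts = {
    "deposit money at the": np.array([1.0, 0.2, 0.1]),
    "fish along the":       np.array([0.1, 0.3, 1.0]),
}

for ctx_text, ctx_vec in contexts.items():
    scores = np.array([s @ ctx_vec for s in senses.values()])
    weights = softmax(scores)          # the "weight ratio" over senses
    for (name, _), w in zip(senses.items(), weights):
        print(f"{ctx_text!r:>25} -> {name}: {w:.2f}")
```

Changing the context shifts the weight ratio, and with it the interpretation that wins; real models do this over vastly more dimensions and parameters.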
The biggest difference between deep learning and ordinary learning is that deep learning is creative. The essence of deep learning is to build on words that already carry a contextual association and then form a new association between the new context and the previous one; if the latter is regarded as the context of the former, this can still be summarized as the association of contexts. Since the beginning of human civilization, language has been a string of symbols recording human consciousness (observations of things, natural laws, and so on). These recorded symbol strings can be summarized and decomposed into contexts of particular meanings, disciplines, or purposes, and under appropriate weight ratios the following text always generates deeper content (creativity) relative to the previous text. Depth shows itself not only in tracing back past events, content, phenomena, and things, but also in revealing more as the times develop. In the process, the results generated from the context are inherited, and what is inherited can continue to be associated for later needs. If the creativity of deep learning is allowed to develop without rigor or convergence, it may come to lead the direction of human consciousness. Learning is endless, and learning should be endless. This is deep learning in the broad sense of Form-World. Form-World Relational Transformation is an abstract concept (2007, US patent US8051107), and the transformation satisfies the mathematical definition of a transformation. The concept is embodied in an implementation carrier, named the Form-World platform, which is used in various industries, mainly for data association, including automatic data processing. The platform can associate content generated before and after, and the resulting relationships can themselves be inherited.
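The "inherit and re-associate" loop described here can be sketched in a few lines; the generate_next function below is a hypothetical placeholder that simply echoes a derived phrase, standing in for neither the Form-World platform nor any real LLM.

```python
# Illustrative sketch: whatever is generated from the current context is
# folded back into the context for the next association.
from typing import List

def generate_next(context: List[str]) -> str:
    """Placeholder: derive a new fragment from the accumulated context."""
    return f"observation-{len(context)} building on '{context[-1]}'"

def deepen(seed: str, steps: int = 3) -> List[str]:
    context = [seed]
    for _ in range(steps):
        new_fragment = generate_next(context)   # generated from current context
        context.append(new_fragment)            # inherited into the next round
    return context

for line in deepen("language records consciousness"):
    print(line)
```

Each round's output becomes part of the context that conditions the next round, which is the inheritance of association the paragraph above describes.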
Undoubtedly, LLMs provide great convenience for mankind, and people have begun to follow them. When most people accept a thing at the same time, it can come to count as the truth; science and inference are sometimes established this way. For example, the falling apple back then and the general theory of relativity later each had a group of people standing behind their own camp, even though the two conclusions differed.
I am afraid that only mathematics can carry out rigorous reasoning and inference from axioms and definitions. Compared with mathematics, using an LLM for reasoning and inference is plainly untenable; it merely goes from fuzziness into fuzziness.
Mathematics is a rigorous system of reasoning built upon axioms,
whereas large language models (LLMs) generate outputs
probabilistically based on fuzzy contexts. The "reasoning" of
LLMs resembles the extraction of possibilities from vast,
ambiguous experiences, rather than deriving logical necessities.
This also reveals that much of human cognition in everyday life
is inherently fuzzy, rather than mathematically precise. This is
not a denial of the value of LLMs, but a reminder: their
"truths" are products of collective consensus, not outcomes of
logical proof. The boundaries of human consciousness are always
fuzzy. Relying on the group may sharpen that fuzzy consciousness somewhat, but it remains fuzzy. An LLM can list many answers for you (this applies in the plural as well), and their content has surpassed the scope of group consciousness. People are of course amazed, yet on careful reflection those answers are fuzzy too. In engineering, humans have exhausted every discipline to prove, calculate, and design, verifying with the most rigorous scientific methods, and yet actual engineering implementations still add generous safety factors. A safety factor is a margin people add because of the uncertainty of environmental factors, and uncertainty is the concrete manifestation of cognitive fuzziness. Humans always desire rigorous reasoning and inference, and constantly explore and evolve, but people spend more of their lives in fuzziness. Fortunately, LLMs can always supply group consciousness with somewhat clearer content.
An LLM is more like a dynamic, unstructured retrieval system, which is very different from a traditional static, structured retrieval system (to be continued).
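As a rough illustration of the contrast, under toy assumptions: a static structured lookup returns only exact matches on a fixed key, while a dynamic unstructured retrieval scores every stored entry against the query; the embed function below is a crude character-frequency stand-in, not a real embedding model.

```python
# Toy contrast: exact-key lookup vs. fuzzy similarity retrieval.
import numpy as np

records = {"gpu": "graphics processing unit",
           "llm": "large language model",
           "cpu": "central processing unit"}

# Static structured retrieval: exact key, or nothing.
print(records.get("graphics card"))          # -> None

# Dynamic unstructured retrieval: fuzzy similarity over all entries.
def embed(text: str) -> np.ndarray:
    """Crude stand-in embedding: normalized letter-frequency vector."""
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / (np.linalg.norm(vec) or 1.0)

query = embed("graphics card")
best = max(records, key=lambda k: float(embed(records[k]) @ query))
print(best, "->", records[best])             # nearest entry, no exact key needed
```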