T H E O R E M S
B A T C H 1
The newest theorems
Written by Aristo Tacoma. Each can be further
distributed when the copyright license Yoga4d CFDL
(as on yoga4d.org/cfdl.txt) is respected.
This is the newest batch of theorems. For
the foundation batch, click here.
Still, the bottom part of this batch #1 is also
regarded as fairly foundational -- the line of *'s
divides the bottom part, which is stable, from the
newest theorems, which are above that asterisk line!
**************************************************
UNDERNEATH THIS LINE: FAIRLY FOUNDATIONAL THEOREMS
AND THUS A STABLE PART OF THIS BATCH #1
Theorem: Sort isn't a first-hand concept
1::B::2013::11::30
Background of terminology: In this section
with theorems, we use the words with a
precise enough meaning within this context,
which is a context informed by an understanding
that the infinite is a subtle concept. Given
the work on such as essence numbers, it is
a context in which it is clear that in a
vast perspective, anything said about the
so-called finite must be imagined as taking
place inside an infinite realm -- rather as
a "contact zone" between two forms of
pulsating infinities. It follows from this
that the infinite is irreducible to finite
concepts and that it is out of the question
to "build" the concept of the infinite by
some attempt at extrapolation from the
finite. The infinite can be said to "behave"
differently in key ways (and as such, the
concepts sought to have been used in e.g.
the 20th century -- such as "approximating
a limit" -- are found to be inadequate to
sum up a coherent clear set of ideas about
the infinite).
This leads to an understanding of words
such as definition, theorem, corollary,
deduction and such in which we will normally
not assume that it is practically possible
to make explicit, at least not in the form
of one article or even one book, all the
boolean-logic type of strict inferences
necessary to fully provide a logical proof
of anything such as a general or metaphysical
point about infinity-related concepts. And
even if we are presented with the whole
explicit schema of inferences, these in
turn start with definitions and axioms
that require a perception into the infinite
to make much sense. Instead, when we use
the word 'theorem' here, we intend by it to
say that we regard it as within the
"landscape" of correct deductions, but where
the mind of the reader must engage at an
intuitive, not merely logical, level in order
to apperceive this reality.
Background of the particular theorem: In works
of the 20th century connected to algorithms,
in particular computer algorithms, a recurrent
theme is that of
imposing what is called a "sort" on an
unorganised list, perhaps a list of names or
numbers, which can be very large indeed.
Algorithms such as "quicksort" have been proposed
with some sort of subdivision of lists, which,
after many permutations, put one name after
another so that all those beginning with the letter
a come first, then those with the letter b, and so on
up to z, and at the second position of
letters after the first, the same sequence
is imposed also, and so on all the way until
the length of the name in each case is reached.
Before the era of interactive personal
computers, humanity relied to a large extent
on printed-out paper catalogues sorted by such
a principle in order to retrieve data from
large collections. (A computer may mimic such
a search by what is called "binary search".)
However, there is no necessity of employing
this type of organising principle in order
to obtain a quick look-up using a personal
computer. Nevertheless, the notion of sorting
large lists has come to dominate 20th
century computer science to an extent where
it has been, by many, regarded as part of
its essence. In contrast to sorting of large
lists, which is a complicated process, the
sorting of very small lists is intuitively
simple by a completely different algorithm,
often named "bubblesort" (and similar such),
which forms a typical sub-component of
the larger and more involved sorts.
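As a brief aside on the "binary search" just mentioned, here is
a small sketch in the Python programming language, assuming a
list that has already been sorted by some such algorithm; the
names used are illustrative only:

  # Sketch only: binary search over an already sorted list of names.
  def binary_search(sorted_items, wanted):
      low = 0
      high = len(sorted_items) - 1
      while low <= high:
          mid = (low + high) // 2       # look at the middle element
          if sorted_items[mid] == wanted:
              return mid                # found: report its position
          elif sorted_items[mid] < wanted:
              low = mid + 1             # discard the lower half
          else:
              high = mid - 1            # discard the upper half
      return -1                         # not present in the list

  print(binary_search(["anna", "berit", "carol", "dina"], "carol"))
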
Theorem: Sorting a significantly sized list into
such as an alphabetical or numerical sequence
isn't in general a core part of first-hand
computer programming.
Deduction: Our informal deduction of this theorem
will then be an attempt to summarise the "milestones"
in the "landscape" of definitions, axioms,
deductions, sub-theorems and corollaries to these
sub-theorems, all needed to erect this theorem as
a proven statement. This summary will be extremely
brief and it will be readable only in an intuitive
sense, but it is proposed that a full deductive
analysis COULD be made.
First of all, in watching how humans are naturally
organising information, it is clear at once that what
is grouped and the sequence in which things are grouped
rely typically more on meaning, "fields of semantics",
than on names or numbers. In watching fifty girl
names and ten names of machines, the natural
groupings are those fifty girl names in contrast to the
group of the names of machines, regardless of the
characters in those names. Similarly, for people who
regularly work with numbers, the patterns of digits
inside a number evoke various meanings, and the instant
recognition of such patterns in the numbers presented
then becomes a natural way to group them. These groups
may then loosely have a sequence such as from left to
right which may not have much to do with the sizes of
the numbers.
Similarly, a computer, having a range of algorithms
available, and also a set amount of ram addresses
within the psychologically meaningful 32-bit data range,
will not necessarily store incoming data in a sorted
way when we speak of large lists. For example, in some
contexts it may be more natural for the computer to do
a rough grouping -- e.g., it can be derived from a
hash number principle. The hashing algorithm -- which
also can be said to generate an 'implicate key' from
the 'explicate key' or 'explicate data' given to it,
-- can be proposed to be a KIND of perceptive or
semantic ingredient, to some extent. And it is very
clear that the sequence of the hash numbers typically
has little to do with the sizes of the incoming
numbers, or with the alphabetical sorting if what is
hashed over are names.
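To make this concrete, here is a small sketch in the Python
programming language, with a purely illustrative hash over names;
the segment each name lands in has little to do with its
alphabetical place:

  # Sketch only: a rough grouping of incoming names into a fixed
  # number of ram-like segments by a simple, illustrative hash --
  # the 'implicate key' derived from the 'explicate key'.
  def implicate_key(name, segments=8):
      total = 0
      for ch in name:
          total = (total * 31 + ord(ch)) % segments
      return total

  buckets = {n: [] for n in range(8)}
  for name in ["anna", "berit", "carol", "dina", "eva"]:
      buckets[implicate_key(name)].append(name)

  # The segment numbers follow the hash, not the alphabet.
  print(buckets)
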
This is not to say that size of numbers, or
alphabetical sequence -- two concepts that really, at
the level of data, are the same -- does not sometimes
carry a component of meaning. And meaningful
relationship to data is indeed a core part of
first-handedness. But the principle of first-handedness
is that meaningful relationship to data is put first
and foremost, and it is typically not so that, for any
wide spectrum of data, size is an adequate measurement
of meaning.
Even in a beauty contest, where length of legs may
be said to be a size of great meaning, it will still
only be meaningful to talk of length of legs in such
a context given that one also talks of a number of
other factors which interplay to create beauty --
including but not limited to length of torso (so that
the proportion of length of legs to length of torso
is big, rather than the absolute length of the legs),
smooth symmetry and sensuality of face, shapeliness of
feet, massiveness and shine of hair, glow of skin,
firmness of body, shapeliness of muscles, and so on --
all in all factors that lead the child beauty girl and
the adult beauty girl to be sexually fairly much the
same, as enlightened observers of human sexuality
has noted, but which has been combatted in the
anti-sexual culture that quasi-religious semi-moralists
have purported to be 'healthy attractiveness'. In the
totality of meaningfulness in such a context, only when
other factors of meaning are the same (which they nearly
never are except for very small amounts of data), can
size alone take on the chief meaningful organizing
principle of the data.
More generally, it is by semantic ordering that one
in general finds it possible to program computers
well, given a finite region of ram. The creation of
a sorted list is an imposition that generally must
come after the data has been stored in some other way,
and then only by relatively heavy use of computational
resources -- which, all taken into consideration,
are not well-spent computational resources unless the
aim is to create printed catalogues for human manual
searching. But it would be more satisfactory -- and,
thus, meaningful -- to produce an ordering that comes
naturally the first time data enters the computer,
suitable for the finite segments of ram set aside
for each semantic portion of data -- so that the
required data can be reproduced for the human
interactor without the artefact of going through a
sorting mechanism (whether implemented as batch or
in some way directly, such as by 'trees' of data).
Are there then no circumstances in which sorting
is semantically first-hand when it comes to data?
It is proposed that the only circumstances are
special cases which lend themselves to simplistic
sorting inside a prior, larger organizing principle
that doesn't entail use of any sorting algorithm.
For instance, a free sequence inside of an alphabetical
or numerical ordering can make sense in some contexts.
Perhaps the first two digits of a longer number can
provide an entry-point for which ram segment to store
the longer number in. On some occasions, the numbers
within a particular segment are called for, and then
in a sorted way: but this has then already been
strongly limited in size by the initial approach of
categorising these segments, so that any imposition
of a complicated sorting algorithm such as "quicksort"
should not be necessary in any first-hand data model.
Rather, the notion of "bubblesort" can then be used --
and indeed, studies of various ways of doing the
"quicksort" approach have found that once the quantity
of data is small, bubblesort is actually more
efficient than the complicated algorithm as a whole.
It is therefore not uncommon that when any large
"quicksort"-like algorithm has been implemented, there
is within it a check on how many numbers or names are
left to sort, and under a certain limit, the algorithm
calls on bubblesort as a sub-algorithm.
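As a compressed sketch of that common arrangement -- not any
particular published implementation -- the following lines in the
Python programming language show a "quicksort"-like routine that
checks how many items are left and, under an illustrative cutoff,
hands the remainder to a simple bubblesort:

  # Sketch only: a quicksort-like routine with an illustrative
  # cutoff of 8, under which a simple bubblesort takes over.
  def small_bubblesort(items):
      items = list(items)
      for _ in range(len(items)):
          for i in range(len(items) - 1):
              if items[i] > items[i + 1]:
                  items[i], items[i + 1] = items[i + 1], items[i]
      return items

  def hybrid_sort(items):
      if len(items) <= 8:
          return small_bubblesort(items)     # few left: simple sort
      pivot = items[len(items) // 2]
      smaller = [x for x in items if x < pivot]
      equal = [x for x in items if x == pivot]
      larger = [x for x in items if x > pivot]
      return hybrid_sort(smaller) + equal + hybrid_sort(larger)
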
The bubblesort, used within a larger context OTHER
than a sorting algorithm, is then a first-hand approach.
This other context can be such as an instant decision
to use such and such segment of ram given such as a
hash or such as the first part of a name or number.
This instant decision is not a sorting algorithm, but
in some cases it paves the way for a meaningful use
of bubblesort, which then also is first-hand.
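A small sketch in the Python programming language may illustrate
such an instant decision; the choice of the first two digits as
segment key, and the segment contents, are illustrative assumptions
only:

  # Sketch only: the first two digits of a number decide, instantly,
  # which segment the number is stored in; no sorting algorithm is
  # involved in the storing itself.
  segments = {}

  def store(number):
      key = str(number)[:2]          # first two digits as segment key
      segments.setdefault(key, []).append(number)

  for n in [123456, 129876, 453210, 120003]:
      store(n)

  # Only a single small segment, when actually called for in sorted
  # form, need then be handed to a bubblesort of the kind written
  # out at the end of this deduction.
  print(segments["12"])
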
The theorem is, then, intuitively now regarded as
established. For completeness of the indication of
the concepts here used, let's briefly state the
procedure of the bubblesort.
The procedure, described in next paragraph, will sort
a list like 3 4 1 2 gradually, in several small steps.
In the first cycle, 3 4 1 2 becomes 3 1 4 2 then
3 1 2 4. In the second cycle, 3 1 2 4 becomes 1 3 2 4
then 1 2 3 4.
Each cycle consists of comparing the numbers two by two,
beginning on the left and proceeding to the right.
First, the first and second numbers are compared. If
these are in incorrect sequence, they are swapped.
Then the second and third numbers are compared. Again,
if these are in incorrect sequence, they are swapped.
A number entirely out of place is like a 'bubble'
going through the sequence, especially if read
vertically rather than horizontally. When the final
pair of the numbers (or whatever is sorted) is compared
and, if need be, corrected, one cycle is complete. When a
cycle has completed in which no corrections were
necessary, the bubblesort algorithm has finished.
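For concreteness, the procedure just described can be written out
as a small sketch in the Python programming language (a sketch
only, kept close to the wording above):

  # Bubblesort as described: cycles of comparing neighbouring pairs
  # from left to right, swapping pairs in incorrect sequence, and
  # finishing after a cycle in which no corrections were necessary.
  def bubblesort(items):
      items = list(items)
      while True:
          swapped = False
          for i in range(len(items) - 1):
              if items[i] > items[i + 1]:
                  items[i], items[i + 1] = items[i + 1], items[i]
                  swapped = True
          if not swapped:
              break                  # a cycle with no corrections
      return items

  # The example from the text: 3 4 1 2 becomes 1 2 3 4.
  print(bubblesort([3, 4, 1, 2]))
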
THEOREM: Sound, video and timing aren't properly part
of the philosophically important and clear concept of the
Personal Computer.
1::B::2014::07::18
INFORMAL PATHWAY OF DEDUCTION:
1. BACKGROUND
A computer, in its pure, philosophical form, is also a
limited, manifest instrument where performing an algorithm
and looking up data by this algorithm is assumed to take
some unspecified amount of time. Not much time, compared
to the psychologically meaningful unit of time such as a
second -- we can speak of thousands, even millions, but
the performance speed has to make psychologically 1st-hand
good sense for the Personal Computer to be a coherent good
concept. This computer has limited RAM, limited speed,
it has a keyboard and a mouse as input, it has a disk
for more enduring and larger but slower storage than
RAM, it has a monitor like 1024 times 768 and it has
a variety of pixels, such as bright green to black in
a variation over e.g. about 60 tones. This is suitable
for a non-emulative non-imitative reproduction of
quality photography so that to the living mind, the
SENSE of reality is stimulated within, and the living
mind will provide a sense of colors, scents, motion and
so forth.
This is philosophically an important concept, and as
other theorems in this series of intuitively evaluated
informal theorems indicate (assumed to be possible to
bring forth deductively in a strict form only by masses
of books), a far more important concept than any
abstract notion of the computer where one imagines
that there aren't limitations here and there. For there
is a fundamental philosophical difference between a
structure which has definite limitations about it of a
psychologically meaningful kind, and a structure which
undergoes such dubious 'extensions ad infinitum' as
e.g. in the geometry postulates of the ancient Hellene
Euclid, and followed up in logical consequence, but with
incoherent results, by such as Georg Cantor, Bertrand
Russell and others laying the foundation for the type of
infinity-unaware 'mathematics' as came to be the fall
of physics, once it got wedded to this incoherent
approach. (The approach taken here, of essence
numbers, supermodel theory and the popularisation into
q-fields, avoids such incoherent ideas connected to a
sloppy relationship to infinity.)
2. CONCRETELY
Considering the theorem above, it connects to three
themes -- timing, sound and video. It is clear that
whether or not video is regarded as with sound or
without sound, it is a concept which involves a
simulation of movement by fooling the eye by providing
changes of visual content faster than some 20 or 25 times
per second, and doing so over a period of time in a way
which depends on timing. Sound, such as the reproduction
of music or talk, obviously also depends on timing. We
can therefore concern ourselves primarily with the
concept of timing, and ask what relationship this
concept has to the concept of the Personal Computer as
indicated.
Let us, before we do this, allow ourselves to bring in
the notion of electronics, such as through our Elsketch
first-hand electronics approach, or through some form of
chip-making or another, e.g. ASIC, Application-Specific
Integrated Circuits, which is nothing but Elsketch in a
microscopic form, burned into thin layers of the same type
of silicon or such that would otherwise constitute such
as the transistors in the Elsketch format. It should be
clear that sound recording and replay, as well as
whatever visual means it makes sense to produce, can well
be done by means of the same type of electronics as also
can give rise to a Personal Computer. A Personal Computer
with many megabytes of RAM involves a lot of high-speed
components, a great quantity of them, but the principle
is by and large exactly what is needed also to produce
something like a music station. And one can also imagine
that the box called a 'Personal Computer' is equipped
with extra components beyond what is coherently called
for in a Personal Computer, and since the electronics
components are of the same type, this should provide no
practical problem at all!
As for the question of timing, it is clear that in the
usual design of a Personal Computer, there is such as a
computer clock -- an oscillator -- providing a sequence
counter for the instruction performance. It is not
thereby clear that doing a particular instruction takes
exactly a certain amount of microseconds or the like.
It is rather that the computer organises things around
the notion of ticks of this computer clock. This is
-- although the timing signals can be read off, to some
extent, by an instruction -- assumed to go on in the
background. It is not the same as to measure what time
the performance of each instruction takes, nor is it the
same as to promise that the performance of each
instruction cannot take place over several such clock ticks.
Some CPU instructions are complex, and others are simple;
the complex ones obviously take more time. Indeed the
whole programming context is such that IN ORDER TO DO
GOOD, MEANINGFUL AND COHERENT PROGRAMMING, ONE MUST
BE FREE TO LOOK ASIDE FROM THE QUESTION OF PRECISELY
HOW MUCH TIME IT TAKES TO DO EACH PROGRAMMING THING,
and merely have a sense of the overall duration questions
involved with the program relative to the human, living
context within which it is going to be used.
But this also means that while the program can read off
a timer, whether an external timer or the same used by
the CPU to sequence the instruction steps, the programming
language as such -- for the pure philosophical idea of the
Personal Computer -- isn't tied in sharply with any form
of manifest timing AT ALL. Thus, for instance, it matters
not so much exactly when a pixel is drawn on the monitor,
-- it can even be a quarter of a second late, if the
monitor electronics is doing other things -- as long as
the right pixel gets up at the right place in the right
sequence. The visual aspect of the monitor, even with the
device of the mouse and the mouse pointer symbol, is
naturally tied up to the computer program. But such
freedom in connection with timing isn't the approach
that electronics dealing with sound or video or even
timing in general ought to take. A variation in when a
frequency is produced in the loudspeaker within something
like a quarter of a second is enough to make the sound
psychologically meaningless.
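A small sketch in the Python programming language may make the
distinction concrete: a program can read off a timer, but the
duration of each individual step is nowhere promised to be exact,
and the number printed will vary from run to run and from machine
to machine:

  # Sketch only: the program reads a timer around a whole task; how
  # long each single step takes is left unspecified.
  import time

  start = time.monotonic()
  total = 0
  for i in range(100000):
      total = total + i            # each step takes some unknown time
  elapsed = time.monotonic() - start

  # Only the rough overall duration, relative to the human context,
  # is meaningful to the program.
  print("roughly", elapsed, "seconds for the whole loop")
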
A first-hand computer, programmed in a first-hand way,
is an instrument which is then crafted out of a conscious
relationship to limitations such that the visual idea of
the monitor fits perfectly. There is no way in which the
idea of exact timing can fit equally perfectly. The 1st-
hand computer isn't supposed to be speeded up beyond the
natural speed where 1st-hand programming involves a
contact with the sense of duration for the program. But
sound-production properly involves another instrument
-- which can well be electronical and can be put into
the same box as the Personal Computer -- without having
the same affinity to the concept, and indeed also the
super model, or q-field, of the Personal Computer itself.
The same argument goes for simulation of movement by
rapid changes of the whole matrix of pixels many dozens
of times per second, as video; and in both cases, it has
to do with the importance of letting time be a
deliberately somewhat 'indeterminate' variable for the
program.
*****
ATWLAH