Arguments against AI, against "machine intelligence"
And the positive conceptualisation of anti-AI as a constructive concept in the future of technology

Written by Stein Weber

Written in 2016 as a summary of some of the arguments already given in other articles in the G15 PMN documents by the same author (other pen name: Aristo Tacoma). Note that while this text has some spelling issues here and there, its conceptual content comes through clearly, and there are a few novel ways here of showing some things about Goedel and related themes.
BACKGROUND

For anyone who is interested, there is a whole set of well-written articles, some with great depth, and with much interesting disagreement, in the wake of Kurt Goedel's [see footnote] article from before World War II. During this war, in part as a result of the enterprise of decoding encrypted messages, the abstract concept of 'the computer', as developed by Alan Turing and others in the wake of Goedel's work, was given more concrete, albeit very inefficient, forms. These more concrete forms were developed further in the decades after WWII, not by changing the core ideas, but in terms of compactness, speed and reduced power requirements, as well as more 'fluid' instruments for human interaction with the computer. For instance, the mouse pointer device, together with the keyboard and the monitor capable of showing graphics with varying tones of pixel intensity, came in and replaced the means of interacting with the computer that were typical in the 1960s. In the 1960s, printed text was the main human-visible output from academic computers, while typing on a machine that punched holes in cards, which were then fed to the computer, was the main input channel. Towards the closing years of the 20th century, important features of the arguments against AI were summarized and extended in a very specific direction in The Emperor's New Mind by the Oxford mathematician, professor Roger Penrose, who worked with the well-known Cambridge physicist Stephen Hawking on the theory of black holes. A number of other writers also contributed significantly to the argument against the notion of the possibility of computerised minds or any real computerised intelligence, including the legendary physicist David Bohm. In addition, philosophers such as John Searle argued against the notion of attributing 'meaningfulness' in any way whatsoever to computers.
A number of other contributors were active; the contribution by the famous Roger Penrose, however, stands out as one of the media-grabbing productions of the time.

GOEDEL AND PROGRAMS: TRUTH GREATER THAN PROVABILITY

Kurt Goedel's work seemed to show that any attempt to summarize all human mathematical knowledge in the type of axiom systems that people such as the Nobel Laureate Bertrand Russell had tried to build was not only flawed in the concrete attempts made, but doomed to fail in principle, and for all time. The magnitude of this theorem caused real dejection among mathematicians. The emotionality some people had relative to Goedel's Second Incompleteness Theorem, from around 1930, was, however, intensely fruitful in producing scores of logical attempts to supersede the limitations of the mechanical that Goedel had pointed out; logical attempts that, in each case, actually only strengthened Goedel's original argument. And so, one of those who strengthened Goedel's argument while trying to beat it was Alan Turing, who in his paper conceptualised the Computer through the logical metaphor of a person following rules. Having conceptualised the Computer as a person who followed rules, he went on to change this concept into a machine. Goedel's work, which took place before the advent of programming languages, could then be given entirely fresh perspectives. One of the key points of Goedel's work is to show that there are statements that are true but unprovable within the system. Turing was able to play along with these ideas using his computer and computer program concepts. Other people then developed this into particular theorems (cf. e.g. 'Church's theorem' and more such). Once there is one unprovable statement within such a system, one can build an infinite number of them.
(The use of the word 'infinite' in this article is the conventional, classical one; there are arguments in favour of being much more careful with the infinity concept, and for this, cf. the links given elsewhere, also about L E J Brouwer.) For instance, imagine, very informally and freely, that we have a programming language, and that we have a program in it called 'provablequit'. Suppose this language has something like 'until' in it, to make loops UNTIL something gets 'true' rather than 'false', and that it has a way of doing NOT, so that 'true' and 'false' are converted into each other. Provablequit tells, for programs, something that in every case is either correct or wrong, namely whether the program ever quits or not, given a certain input--an input which for simplicity, in this case, we assert is the quote of the program itself, given to itself as a text string. We are going to investigate whether it can be truthfully said that provablequit can ever be made so as to cover the whole range of all possible programs in this programming language. We are going to show, then, by 'reductio ad absurdum', that there are programs which provablequit can't handle. In other words, there are programs which cannot be understood by one such master general perception program. In yet other words, it means that, within the system, we are interested in finding out whether there are statements that are obviously correct or incorrect, but which cannot in principle have their truth reduced to an algorithm, to a program, to a rote procedure (something which can be misspelled as 'route procedure', but 'rote procedure' is the correct phrase--I have myself spelled it 'route' in some other papers; 'rote' means mindlessly, deadly mechanical). So provablequit tells, for a range of programs, a bit more precisely: does this program, as quoted in source, given this quote as input, ever exit properly, or does it fall into that which we call an infinite loop?
The program must only output TRUE or FALSE for each question, and it must not output any third, alternative value. It will, always and in every case, produce an answer for every program within its scope. And we are going to show that it cannot be made so that it covers all programs. Put simply, there are truths not reachable by proofs. To repeat, we are going to assert that the program is given, as input, the text form of itself. This is a formality that makes the argument simpler to make, but once it is done, one can generalize so that an infinitely larger set of programs can be handled along the same approach. One clear-cut example is enough to settle the case that there are incomputable questions. And this, clearly, must suggest to the reflective person that there is more to actual intelligence and general perception than that which can be made into a machine-like rote procedure; a point we return to. It is clear that, abstractly, a program either exits or not, when we apply common notions about programs and disregard whether it happens in a thousand seconds or as many millennia. This is not about computer speed, but about abstract evaluation of a property of programs. So provablequit is a program that takes any program in this language in source form, and always produces 'true' or 'false'--and nothing but that--to this always-meaningful question (which includes a specific input to the program) connected to whether this program ever quits or not, for all such programs where this can be worked out with certainty, or 'proved'. (We deliberately use a language that stays near to Goedel; note, however, that there is a whole range of subtle differences between this context and Goedel's original setting.) Let us imagine that the language has an instruction such as source(prog1), which, instead of performing the program prog1, gives the source of it.
In our simple language (vaguely like a form of Lisp), we can type in such as print(true); and the computer will print true, and we can make a program like prog3 this way: prog3(x) = print(x); and run it by typing prog3('Hello world!'); It then responds with Hello world! The program provablequit is made so that it can be called in this way: provablequit(source(prog3)); provablequit will then look at how this behaves: prog3('prog3(x) = print(x);'); and prog3 would, if started in this way, simply print out itself: prog3(x) = print(x); and this means that starting provablequit this way produces: true True, the program prog3 does exit with this input. Let us now make another function. Let's call it func5. It is much the same, but it calls the program provablequit: func5(x) = until(not(provablequit(x))); We can test this program first by calling it like this: func5(source(prog3)); In this case, we would have the computer first do what it did just above, namely provablequit(source(prog3)); which should produce, as we found, 'true'; then the not(..) sets into action and the output is reversed, the result being that the condition within until(..) evaluates to 'false', and the program never exits. It leads the computer into an infinite loop. So far so good. To complete the reductio ad absurdum we now perform, of course, func5(source(func5)); and this should, in case provablequit is COMPLETE--and covers all programs--lead the computer to evaluate provablequit(source(func5)); and, of course, whatever the result is, the computer will act to do the opposite of what it is predicted to do. If the provablequit() in this case tells that the computer will exit, that is inverted by the not(), and the until(false) type of loop will arise. If the provablequit(), on the other hand, tells that the computer won't ever exit from a loop, that too is inverted by the not(), and the until(true) statement will be performed and the program does exit.
In both cases this is then a false prediction by provablequit(). This leads to the following type of statement, which is often found when one speaks about Goedelian matters: such-and-such is EITHER INCONSISTENT OR INCOMPLETE. Concretely, we have shown that * provablequit is either wrongly made, or its scope isn't complete An 'inconsistent system' has self-contradictions. Due to the nature of how logic is set up, even one inconsistency spreads to the whole system, making every false as well as every true theorem 'provable'; by this, the system is no longer a system, but ceases to exist; it has no information value. If it does have information value, i.e. is not inconsistent, then, by what Goedel in his famous paper called 'a form of meta-reasoning', we can see that the system is not complete. I.e., it is incomplete; and this we have PROVED. So we have also shown that * given the assumption that we have a consistent system, with provablequit in it, we have found that func5 does not admit of an answer by provablequit. Since provablequit can only remain consistent in such a case by refusing to produce an answer, we are led to the conclusion that provablequit doesn't exit in this case, in this context where true or false are the two acceptable output values of provablequit. But then we have also shown, implicitly, by a form of meta-reasoning: the program call func5(source(func5)); will not lead to an exit of the program. So, Goedel said: we are led, informally, by a form of meta-reasoning--by looking at what is said and seeing what it implies--to the point of view that we have a piece of knowledge that is not found within the range of what is provable within the system. And that knowledge is that func5(source(func5)); leads to an infinite loop. This knowledge is a truth, but it isn't provable.
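The reductio above--and the later point that 'goedelizing' the system merely yields a new machine open to the same argument--can be sketched in ordinary executable code. The sketch below is in Python rather than in the toy Lisp-like language, and the names defeat and goedelize are my own illustration, not established terminology. A candidate provablequit is represented as a function that takes a program and returns True ('will exit') or False ('will loop forever'):

```python
def defeat(decider):
    """Build the func5 of the text: a 'contrarian' program that does
    the opposite of whatever the candidate decider predicts about it."""
    def contrarian():
        if decider(contrarian):   # predicted to exit...
            while True:           # ...so loop forever instead
                pass
        # predicted to loop forever -- so exit at once instead
    return contrarian

def goedelize(decider, counterexample):
    """'Expand the system': hard-code the newly found truth about the
    one counterexample, leaving all other verdicts unchanged."""
    corrected = not decider(counterexample)
    def patched(prog):
        return corrected if prog is counterexample else decider(prog)
    return patched

# Start with a candidate decider that predicts 'will loop' for
# every program, and goedelize it repeatedly. Each patched decider
# is a new, larger machine -- and is defeated afresh:
decider = lambda prog: False
for step in range(3):
    c = defeat(decider)
    print(step, decider(c))      # prediction each time: False ('will loop')...
    c()                          # ...yet the contrarian exits: wrong again
    decider = goedelize(decider, c)
```

A decider that ever answers True about its own contrarian cannot be run to completion here, since that contrarian really does loop forever; that is exactly the 'truth that isn't provable within the system' of the text. Note also that wrapping a contrarian in a do-nothing shell yields a distinct program with the same behaviour, which is one way to see how a single unprovable statement multiplies into infinitely many.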
And, in this type of mechanical, rote-procedure logical system, where the homomorphic mapping between abstract programs and axiomatic systems is a pet topic amongst logicians, once one thing isn't provable, then an infinity of things isn't provable. (Again, let us say that the use of the word 'infinite' and its more or less synonymous forms in this article follows the typical, mainstream usage, and that there are entirely different approaches, which we discuss in links to other documents, including in the concluding pages in [^].) Let us bear in mind what has been said above: even one item of incompleteness becomes infinitely many, for these systems or machines. Even one item of inconsistency becomes infinitely many, when we speak of an axiomatic system, a rule-bound system. And every computer algorithm is like this: it can manipulate a matrix of data that guides its own operation to some extent, such as in what is sometimes called 'connectionist' programming, or FCM, but programs by nature have no element of 'learning' in them. And, as such, they cannot handle inconsistencies in their core.

FROM GOEDEL TO TURING: TO SYSTEMATIZE REQUIRES A LEAP OF INTUITION

Now Alan Turing pondered on this. How come we have proved something that is unprovable within the system? By standing outside it and doing something clever. This cleverness Turing wanted to summarize as a machine--and we can then say 'Computer'--and eventually we modernise the language and say: 'a program can surely do this, no?'. But it isn't that simple. Turing speculated over this in connection with a paper on what he called ordinal logics. In these works, he mentioned that he wanted to do away with the need for intuition in human reasoning. So, by making a 'goedelisator program', and making a system for this, he could maybe circumvent the incompleteness. Now Penrose and others noted, precisely, that he only succeeded in strengthening Goedel's argument at this point.
Penrose even suggested that the fact of our proving something that a machine can't prove suggests that we, human beings, aren't machines. This is one particular approach to arguing against AI; we'll briefly summarize several more approaches, and note that Goedel's essential work lends itself to a variety of strong arguments against AI. Now, in order to make the setup above in all concreteness, given a concrete system including the provablequit, we must engage in a broad, systematic generalization of what the programs are all about and how they behave. Once this has been done, it is, as Turing pointed out, possible to 'goedelize' and make of this a new, larger system that has the newly found unprovable truth as part of its broader set of truths. However: as soon as this new, larger system--or machine, if you like--has been made, exactly the same type of reductio ad absurdum can be made around the NEW, LARGER SYSTEM, SO IT, TOO, HAS THE SAME IN-PRINCIPLE INCOMPLETENESS. So Turing achieves, by the idea of going in for a systematization of Goedel's work, merely the planting of more machinery on top of the machine; but it is still a machine, and is as vulnerable as the first machine to the exact same argument in absolutely unchanged form. Turing, who didn't give up easily, then suggested: well, then, let us do goedelization again and again, and make a machine even for this. Fine: but this machine again is just a machine--as a whole--and there will be, in each case, an infinity of truths about it not reachable by it. The expanded machine grabs merely something of that infinity, even if that may be an infinity within that infinity--and there is infinitely more to be fetched, in principle, all the time. But how can this be? How can it be that the procedure above--i.e., the procedure we employed to construct func5()--which does seem to be so clear that it can be considered a kind of rote procedure, cannot be made into a machine?
Oh yes, it can be made into a machine; but, as we have already touched on, this procedure can only be performed AFTER we have succeeded in producing a rigorous, systematic, consistent presentation of our present system. If we wish to expand the first machine with the goedelization approach, or procedure, then we have to do essentially new systematization. This work isn't itself automatic. Even if one makes some attempt at making it automatic, the result will as a whole still be a machine, which admits of a new level of goedelization, and is still incomplete. What this systematization is about, generally, is that any machine is, by its nature, finite in its construction, even as that which it aims to handle--the domain of provable stuff--is infinite. As soon as we have anything finite, it is possible to show that there is an infinity of truths not touched by it. To systematize requires a leap of intuition. And, let us bear in mind, there is no limit to how much systematization is needed if we wish to connect something mechanical or machine-like to more of the ocean of that which is beyond the provable.

FROM GOEDEL AND TURING TO SOME OF THE MAIN ARGUMENTS AGAINST AI

As is known, Alan Turing wasn't only concerned with working, as he did so successfully, to strengthen Goedel's incompleteness results. In fact, that was rather a byproduct, and his motivation was directly the opposite. While he never achieved anything even remotely like an argument in favour of the view that human intelligence can be mechanised, he believed that 'thinking' is not necessarily beyond what a machine can do, and created an elaborate test involving computers communicating with humans in a telegram style, and humans doing the same in communicating with humans, where the computer aimed at fooling a human as well as a human could on the point of gender identity (a charged topic for the gay Turing in a U.K. which had prosecuted him for being gay and punished him with hormone injections).
Sometimes one finds that people have popularized this notion into a 'Turing test', which is nought but a measure of the degree to which a machine can fool a human into thinking that the machine is human. There has never been any rigorous work anywhere to show that this is an adequate test for whether 'thinking' can be applied to an artefact. The Turing test is merely a rhetorical figure, unlike the previously mentioned work, which led to an almost infinitely stronger version of Goedel's original incompleteness theorem. When we consider arguments against AI, we must also be aware that we live in a world in which huge industries aim at selling their cunningly programmed machines, and in which these huge industries are at the same time influencing research institutions, dictionaries and people at large through their possibly broad range of activities, which may include search engines and even manipulation of the roles of words in the English language. We must therefore keep our sceptical awareness alive and consider this worthy of deeper exploration than that which is perhaps most easily found on today's internet. First of all, whether we approach the incompleteness of the idea of the machine from the point of view of Kurt Goedel's theorems, or from the more subtle point of view that infinity is in any case entirely beyond what machines and programs can ever touch, the core argument against the notion of artefacts equipped with intelligence, or artificial intelligence, is that * any such artefact, when made in some way as a machine, is infinitely incomplete no matter how cleverly it is set up to act correctly, when the domain is so broad that it also includes its own activities In other words, * a machine can't 'read between the lines' when these lines include the machine itself And intelligence, indeed, can be considered to be hinted at by means of its etymological roots, namely "to read between the lines", when we understand it as "inter-" and "-legere".
We can also interpret it as "to gather from in between the lines", with a different understanding of the second root. In yet other words, * a machine is in principle denied self-awareness, though it may have a partial capacity to act on situations which include its own activity mapped in a certain way For by the word 'awareness' is meant something lively and, indeed, able to 'read or gather from in between the lines', and not limited in principle. Some, like the late 20th century writers Patricia Churchland and Margaret Boden, argued that while it may be inconvenient for the emotional make-up of the human being to regard the human mind as a machine, that isn't an argument against the possibility. Let us therefore look at this possibility and why it must be refuted. This possibility, that the human being and the mind is a machine, has long been nurtured in academia and is implicit in such things as the Stanford-Binet phraseology of 'Intelligence Quotient'. Since the simple-minded followers of the more broad-minded Newton found it convenient to equate the human body, including the mind, with a machine, a number of programmes have been erected; and, despite the torrent of indications that the quantum nature of the essential properties of matter points towards another type of worldview altogether than the newtonian (and einsteinian), the mechanical world-view has by some simple-minded thinkers been proposed to be identical with science and the scientific. This, however, is clearly not what science and the scientific are about, if we go to generally acceptable founders of scientific pure thought such as Karl R Popper (cf. also other documents by the undersigned on modifications of the popperian approach in what I call the 'neopopperian', which, I argue, is even more worldview-neutral and compatible with a sophisticated interpretation of the vast portions of modern physics that do not lend themselves to a simple atheistic understanding).
The semantic argument against the mind being a machine was fairly forcefully offered, in the last decades of the 20th century, by John Searle. His argument was simple, and is more easily understood by people at large in the 21st century, now that computers are much more common and many more have an understanding of the nature of a computer program. Searle's argument was called the Chinese Room. Very simplified, he argued that if a person who can't speak Chinese goes into a room and has a number of rulebooks for how to handle a slip of paper with Chinese on it, so as to produce a new slip of paper with some other Chinese on it, that person may do this work perfectly without ever having any notion whatsoever of what's going on. The MEANING isn't given to the machine, just the SYNTAX. Hence, he argued, the whole notion that the human mind, as a vehicle also of meaning, can be a machine, can only be stated by someone who is severely unreflective. Against this, Churchland and others argued that complexities may have their own meaning. However, the semantic argument is pretty strong. It is an appeal to intuition. It isn't strange, however, that those whose whole definition even of 'intuition' falls back on an attempt to be reductive and mechanical about the human mind aren't impressed by it. Penrose, whose important contributions include making Goedel more available and more famous, and relating Turing to Goedel's work so that many more can understand it, also suggested this, put very very simply: * since Goedel showed how we do something that machines can't do, we aren't machines Penrose has also worked with brain scientists to look into related questions, such as how consciousness and the subconscious can be connected to quantum reality. Against this, one may argue that it could have been a random incident, rather like giving typewriters to animals and assuming Shakespeare's collected works will pop up after an adequate amount of time.
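Searle's point, that rulebook lookup involves syntax only, can be caricatured in a few lines of code (a toy illustration of my own; the rulebook entries are invented for the example):

```python
# A toy 'Chinese Room': the program matches symbols against a
# rulebook and emits the prescribed reply. Nothing in the lookup
# requires, or produces, any understanding of what the symbols mean.
RULEBOOK = {
    "你好": "你好！",                      # invented sample rules
    "你会思考吗？": "这是个有趣的问题。",
}

def room(slip: str) -> str:
    """Hand back the reply the rulebook prescribes for the slip;
    unknown slips get a fixed fallback reply."""
    return RULEBOOK.get(slip, "请再说一遍。")

print(room("你好"))   # prints the prescribed reply: 你好！
```

However far the rulebook were extended, the procedure would remain a rote mapping from input strings to output strings, which is the author's point about syntax versus meaning.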
Erwin Schroedinger, to some extent Niels Bohr, also Louis de Broglie in his later decades, and very eminently David Bohm--all legendary fathers of the big quantum physics branch of modern physics, to which but footnotes have been added after them--put forth arguments which go along this line: * if all of Nature dabbles in interrelated indeterminacies, it would be very strange indeed if the human body, with all its tremendous sophistication, including the human brain, doesn't in some way or other also enable such processes. Against this, one may argue that mainstream brain science doesn't have much evidence of processes that are nonmechanical and also nonrandom. But in the view of at least some of these physicists, brain science will have to be expanded for many millennia before it has any chance of nearing completeness. There is a deeper argument against the notion that human intelligence can be mechanised, expounded with more clarity in the conversations between David Bohm and the Indian thinker J. Krishnamurti, in their book "The Ending of Time" from the 1980s; and that is that the whole point of gaining the clarity and tranquility that can follow from an ethical style of living, with contact with nature and suitable time for contemplation, is that the domination of 'memory' and other more mechanical features of the human mind and brain can be transcended in favour of a sense of feeling and intelligence and consciousness where the fullness of the human being is brought into play. This deeper argument connects to what I call the 'neopopperian' approach to science. The scientific attitude--that of preferring a contact, we might say a 'friendship', with reality in as unbiased a way as possible--shows itself by admitting the possibility of holistic perceptions of the human mind going beyond logic.
A word for this could be intelligence, another word intuition; but whatever word we use, it would mean that it is a challenge to rise above the mechanicalness that easily imposes itself on human living. This rising above the mechanicalness requires a number of things, including a degree of faith in the possibility of doing so, rather than a dismissive declaration that it is 'impossible'. Second, it requires a number of clarifying insights into the nature of such things as harmony; and that can also lead to explorations into the deeper meaning of what we can mean by 'coherence'--which in turn could point out new possibilities for engaging in brain science with an interpretative openness towards empirics indicating that there is a reality to something near, or in, or even beyond the quantum level partaking in the functionality of the human brain. And this can, as David Bohm also pointed out, show something of the limitations of quantum theory, which (and here I agree with Ilya Prigogine, cf. an interview I once did with him, using my early pen name Henning Braten, in the context of a Norwegian magazine we called 'Flux') has to some extent some mechanical properties. One can then go on to employ this intelligence, or intuition, directly on the question: is the human mind, consciousness, feeling, beyond that which can ever be considered a machine? And this is the ultimate take on it, of course: but done AFTER a great deal of unbiased, dispassionate exploration, rather than as a red-hot statement of belief. This author then finds that intuition supports the notion of intuition as something belonging to the human mind and not belonging to any artefact. Let us, just so it is said, clarify that some form of quantum phenomena is involved in all, not just some, energetic processes in reality--sometimes more directly and at other times less directly.
Their role in semiconductor transistors and integrated chips, and in the boards carrying them, is somewhere in between strong and weak; it is there a mechanical role. By strengthening the presence of quantum phenomena in artefacts, some are hoping to produce what they call 'quantum computers', superseding the original machine idea and superseding classical physics, and thus hoping to make an artefact not susceptible to goedelian incompleteness. This is, like many of the attempts to produce free energy, science fiction; no artefact has been produced of this nature, except by means of speculations that such-and-such might be applicable as a possible pathway towards making some such thing--and I say this aware that some folks here and there are claiming otherwise. But even if more features of the quantum were put into this machine, or artefact, there is no reason to expect that anything essentially nonmechanical will emerge from it. The nonmechanical isn't properly dealt with in physical theorizing, in part because Einstein was ruthlessly against it, and he influenced physics more than any other individual since Newton--who also, in his younger years, was against it. "There's infinitely much about matter that physicists don't know," David Bohm once said. This signals a humility that we need to have in these questions. The theories don't cover the territory when it comes to fine energetic processes. These theories are but sketches of that which is easiest to measure. Even with the domain pushed a little bit, it is fundamentally a childhood stage of science we are in. Some more arguments of the above sort exist. One of the ways one can try to argue that Goedel might not apply to computer programs in the real world, so to speak, is to say that his argument concerns abstract programs without a connection to the real world, performing only on their own data.
Real programs have what are called 'random generators' (which, however, are usually merely more of the same: an arithmetic formula feeding its previous output back to itself and doing something a bit complicated with it), and in addition a lot of input from, and output to, reality: keyboards, monitors, and, with robots, cameras, motors, and more such. However, this sort of argument doesn't take away anything at all from the essential point: it merely asserts that the program exists in a complicated environment, where its infinite incompletenesses are more than ever likely to become evident, and could cause things to come to harm unless one is aware of the limitations of all programs. The coherence of fluctuations that some, including Penrose, speculate may be involved in the type of natural phenomena which exhibit real consciousness and intelligence cannot be said to be anywhere even vaguely near that which can be harnessed by human artefacts. This is something at the core of creation; and it is a question of worldview, not one for empirical scientists, whether the human being is anchored in something beyond matter. That this may be the case is a grand argument in favour of very strongly putting in parentheses any notion that machines can mimic the minds of humans.

ANTI-AI AS A POSITIVE AND IMPORTANT VALUE IN HUMAN SOCIETY

There are several uses of words beginning with "anti-" that have achieved status as reflecting significant human values and rights, and that are regularly employed as conceptual tools, or slogans, in shaping school curricula, political approaches, radio programs, etc. It is clear to this writer that robots belong to the future of humanity, and that their introduction can and should be done in moderation, and not take place on a wave of sloppy thinking that can lead to an overapplication of robots.
Robots, in their design through and through, in all bits physical and all bits logical (in software), must be made on the dictum of serving the best of humanity, and must neither make people into dummies, nor fool them, nor be made so that the machinery can be as nasty as software of certain virus-like types can be in computer networks. Software can kill hard disks; we must realize that these machines mustn't be made without an enormous ethical framework, or they can kill humanity--this is not a statement of fear, it is a simple, cold, flat fact. Robot design, also at the software level, requires a sensitivity for the inherent incompleteness in machine pattern matching. An anti-AI valued approach speaks in nonpsychological terms, as much as possible:
* "pattern matching" is good to say - not pattern recognition, which is for humans
* "program analysis" or "algorithm" make sense to say - not machine intelligence or artificial intelligence
* "machines with programs" makes sense to say - not smart machines (the word 'smart' should be rescued despite its sloppy use in contexts like 'smartphones')
* "priorities" makes sense to say of a program in a robot - not, for a machine, values
* "selections" makes sense to say of a robotic program - not, for a machine, decision or judgement
* "relevant mapping" (and such) can be said of a robot - not 'awareness', 'intelligence', 'intellect'
The anti-AI value involves honoring the possibility of the full potential of the human being, with faith in the greatness of this potential, and with an optimistic view of the albeit entirely limited role of the machine, such as the robotic machine, the vehicle steered by a computer, and so on.
In the physical design of robots given this valued approach,
* a robot should look squarish rather than deceptively alive
* vehicles controlled by computer algorithms should be described as such rather than given confusing and humane descriptions like "self-driving"
* a robot shouldn't be attached to fuzzy, context-dependent algorithms for a broad range of pattern matching of facial expressions, voices and so on, for this always excludes too many possibilities; it should rather have a strict, well-defined, easy-to-reach panel for control over it
* a robot shouldn't be put into contexts where the human pleasure of meeting service-minded human beings can do a better job
* a robot shouldn't be in the service of a small rich group when employment for the many is at stake, but should be considered official property, to be used consciously so as to protect widespread human employment and presence
* a robot shouldn't be dangerously equipped except for concrete, limited, extremely well-defined actions by official personnel
* anything involving robot-like piloting shouldn't be put into action without a statement of the limitations of this piloting process, and no computer network should have algorithms mimicking humans on the sly
* a robot should be something that can be turned off easily, and that includes autopilots of all sorts; cars in which the robot has, by design, melted into the chassis of the car should be outlawed
* a robot should be programmed so that it quickly turns itself off when its pattern matching indicates that its domain of operation, which should always be defined in a very narrow way (like cutting grass or cleaning the floor), is no longer well-defined. In other words, robots should be made so that we can get them out of the way when they are in the way.
And similar points, like these and the set above them, can be added.
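The last point in the list above--that a robot should halt as soon as its narrow domain is no longer well-defined--can be sketched in outline. This is a hypothetical illustration in Python (the names, the threshold value, and the sensor format are all assumptions for illustration, not any real robot API):

```python
# Illustrative sketch only: a control loop that shuts the machine down
# the moment its pattern matching no longer confirms that it is inside
# its narrow, well-defined domain (e.g. "on the lawn, cutting grass").
DOMAIN_CONFIDENCE_THRESHOLD = 0.9  # assumed value, for illustration

def domain_confidence(sensor_reading):
    # Hypothetical stand-in for the robot's pattern matching: returns
    # a number in [0, 1] saying how well the reading fits the domain.
    return sensor_reading.get("lawn_match", 0.0)

def control_loop(sensor_readings):
    """Run until a reading falls outside the domain, then turn off."""
    for reading in sensor_readings:
        if domain_confidence(reading) < DOMAIN_CONFIDENCE_THRESHOLD:
            return "OFF"   # domain no longer well-defined: halt at once
        # ... otherwise perform one step of the narrow task ...
    return "DONE"

# A run where the third reading indicates the robot has left the lawn:
readings = [{"lawn_match": 0.99}, {"lawn_match": 0.95}, {"lawn_match": 0.2}]
assert control_loop(readings) == "OFF"
```

The design choice illustrated is the one argued for in the text: the default on any ambiguity is to stop, never to guess.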
In the spirit of anti-AI, it is possible to make a set of algorithms of a pattern-matching-like type, able to treat fuzzy input data and to regulate such things as motors for output, with specific bouts of entrainment (a word we can use instead of 'teaching') so as to get the program to handle new domains; and this we can call other, less presumptuous things than AI or machine intelligence. The proposal I often make involves using the term "first-hand", to indicate that the human mind has control of the process, and that we limit the complexity to within what it is possible to have first-hand control over. In schools and media of various kinds, the cultivation of emotions of aversion against wrong things is already going on along many avenues, including, for instance, against the needless chopping down of trees so important for the protection of air quality on planet Earth. It would be appropriate to suggest that similar cultivation ought, in the same spirit, to take place when it comes to aversion to such phenomena as putting too much technology on the watch over human beings, and trusting technology too much as a replacement for real communication--connected, for instance, to George Orwell's Big Brother concept. This, as a whole, is compatible with responsible, disciplined use of technology in a narrow sense, but it would mean changing the direction that can easily be seen to be in place as the giant industries push their agendas on politicians and schools and what not. There are some who may say: let us call things by a name, it doesn't matter which name--robotic software and connectionist programs might as well be called "artificial intelligence" as anything else, for we need a snappy category name for it.
This is, however, not at all the approach we should take when it comes to protecting insight into the reality of life; it is important, for all future generations, that we use labels and category names that do not flatten the image of ourselves and our minds on the premises of technologists and companies trying to overwhelm society with their gadgets. These gadgets are fine, and we need good names--FCM is a name that I suggest--but we also need to honor the depth of meaning of such words as "intelligence" and "mind" and give the human being the benefit of these words with nuance, while machines should have no more flattering names than necessary. It is also a question of right worldview. In sum, the anti-AI value involves building constructive approaches in which the arguments against AI are taken seriously.

FINIS

****************

COPYRIGHT. YOU CAN USE QUOTES FROM THIS TEXT WHEN LINK IS INCLUDED. YOU CAN ALSO REDISTRIBUTE THIS TEXT IN FULL IN ALL RESPECTFUL CONTEXTS IN ANY MEDIA WHEN LINK TO IS INCLUDED AND NO CHANGES (BEYOND GRAMMATICAL IMPROVEMENTS) ARE IMPLEMENTED AND THE TEXT IS KEPT WHOLE AND AS IS, WITH NO ADDITIONS AND NOTHING REMOVED, AND AUTHOR'S NAME S R WEBER INCLUDED.

***************************************************************************

The webpage for apps and info on the G15 PMN programming language also has a pedagogical/academic text on Kurt Goedel's incompleteness work, but before you open it, please read the rest of this paragraph: essay1a20130321.txt. In the link just given, you find a text that refers to the G15 CPU assembly language as, quite simply, "G15"; it was written using one of the first editors written directly in this language, before the PMN higher-level feature was developed. The G15 CPU concept, at first only in a virtual sense, the assembly language for it, and the higher-level PMN are all our own new designs.
Name history: The name 'PMN' refers to 'primary multiverse noetics', and is intensely oriented towards the esthetics of the algorithm as an aid to thinking clearly, 'noetics' being 'science of mind'. At an earlier stage, however, PMN was thought of as an acronym for "PatMatNet" or "Pattern Matching Network", which is the set of anti-AI approaches now summed up in what is called FCM, or First-Hand Computerised Mentality, part of the G15 PMN program called The Third Foundation, available at the above app page. The concept 'G15' itself refers to the willingness to look at the structure of numbers, such as the prime numbers and the golden ratio (Fibonacci series ratio) numbers, where a number is then not merely regarded as an abstract and neutral quantity in a second-hand way. Thus, 15 equals 3 x 5, and such prime-number decomposition and recomposition is at the core of how Kurt Goedel found it possible to map axiomatic systems to numbers, so that these axiomatic systems became self-referential. The G15 CPU is, in a way, dedicated to getting the most out of whole numbers of the psychologically meaningful 32-bit first-hand type. The sound of 'fifteen' is positive, happy and pleasant, and 'G' as 'gee' shares these features. It is a short and snappy name for a CPU, in a tradition where CPUs have quite often been identified by means of numbers (e.g., i386).