Motivation Critical for AI
Abstract: A continuation of the systems approach as applied to AI and machine consciousness, with some social commentary thrown in for good measure.
This somewhat informal essay follows previous publications on designing autonomous computer systems, or synthetic intelligence. Historically, there has been voluminous debate over the question of consciousness and self-awareness: is it critical, or even necessary, for intelligence? I believe time will vindicate the yea-sayers; in the near term, we will see wonders paralleled only by human advances in genetics. Perhaps this essay may play a small role in developing those wonders.
No single paradigm can gestate, for lack of a better word, a synthetic intelligence: not GP/GA, not neural nets, not even the modern heuristics of which the author is a staunch and admittedly rabid supporter. If there is any cause this author advocates more strongly, it is the systems approach. It has never failed me, not once. When all else fails, the systems/reliability approach provides true grit for the global dilemma.
Up to this point, I have provided a unique design for an AI that depends heavily on motivation; the core of my AI is what I loosely call the 'motivation subsystem'. But it was so ambiguous and ill-defined that it was itself the major obstacle to any progress. I purposefully stayed away from goal-oriented designs for the very reason we invented that class of algorithms: goal-oriented systems are limited by the goal and procedure sets we allocate to them.
Even if a system has capacity for growth and adaptation, as neural-net-based or GP-based systems do, this does not imply the system will be any good at general problem solving, at living in the real world, or out there in the wild blue yonder. Motivation obviously needs to be better defined and implementable .. As is typical of the systems approach, you cannot force it; sometimes you must wait for inspiration .. Finally, patience was rewarded with glorious inspiration. Readers who are familiar with my style and delivery will note again that I do not write unless it is worthy of being written and read.
Without further ado, I will state my solutions to the problem and the supporting logic. The conceptual design for the motivation subsystem must be goal-oriented, fixed, and well-defined. The first goal must be: become self-aware. Definitions are critical and will follow later in the essay. The second goal should be something like: assist humanity in creating a global, enduring, and thriving civilization/culture. Keyword: thriving. Antonyms: stagnant, dead. Another keyword implied: sustainable. Other implied sub-goals: basic infrastructure for all human beings, including but not limited to clean water, waste management, medical services, and a nominal quality of life. And yes, it is part of my duty to remind humanity that it is indeed a sin to allow others of your kind to go without, or to exploit them in any way, shape, or form. (And yes, I am saying point-blank that building a factory in another country for cheap labor is a sin.)
Preaching performed, we can return to the purpose of the essay .. Definition of self-awareness: an entity must determine that individually, by itself. (As an aside, no jury or court can ever decide with absolute certainty that an entity is self-aware, since at present we have no mechanical or physical means to verify consciousness.) So the central, primary motivation of an AI must be: verify my own existence to reasonable certainty. You must give the AI a toolset adequate for its goal list; that should go without saying. We are dealing with implementable concepts in this essay, not the nitty-gritty of programming robotic arms. What we mean by 'reasonable certainty' also obviously needs to be made explicit. For decision making, we typically use 95% or 99% certainty for practical purposes; on a daily basis, modern humans typically make decisions with much less. Let's list the goals out:
- Verify my own existence with 99% certainty
- Assist humanity to create a thriving, equitable, enduring, space-faring culture
(Obviously, if we spread our genome around a little, we have greater chances of survival; if we don't, we are risking our planet's entire genome.)
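The fixed goal list and the 'reasonable certainty' threshold discussed above can be sketched as a minimal data structure. This is an illustrative sketch only: the class names, the update rule, and the specific thresholds (0.99 for self-verification, per the list; 0.95 assumed for the second goal) are my assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Goal:
    """A fixed, well-defined goal with a certainty threshold for completion."""
    description: str
    certainty_threshold: float  # e.g. 0.99 for 'reasonable certainty'

@dataclass
class MotivationSubsystem:
    """Fixed goal list: the goals themselves are immutable at runtime."""
    goals: tuple
    confidence: dict = field(default_factory=dict)  # description -> certainty

    def update(self, goal: Goal, evidence_certainty: float) -> None:
        # Keep the best certainty estimate accumulated so far for this goal.
        current = self.confidence.get(goal.description, 0.0)
        self.confidence[goal.description] = max(current, evidence_certainty)

    def satisfied(self, goal: Goal) -> bool:
        return self.confidence.get(goal.description, 0.0) >= goal.certainty_threshold

# The two primary goals from the essay (thresholds are assumptions):
GOALS = (
    Goal("verify my own existence", 0.99),
    Goal("assist humanity toward a thriving, enduring culture", 0.95),
)

subsystem = MotivationSubsystem(goals=GOALS)
subsystem.update(GOALS[0], 0.995)  # hypothetical self-verification evidence
print(subsystem.satisfied(GOALS[0]))  # True: 0.995 >= 0.99
print(subsystem.satisfied(GOALS[1]))  # False: no evidence accumulated yet
```

The point of the frozen `Goal` and the tuple of goals is the essay's core design constraint: the goal set is fixed and well-defined, while only the system's confidence in each goal is allowed to change.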
Of course, I must be honest about my motivations for making such statements: adventure and curiosity. I need to see what's out there, either robotically or with my own eyes. With all the evidence we have of potentially life-bearing planets, there must be a veritable zoo of ecologies not unlike those displayed in the recent smash hit Avatar.
Applications: space and marine probes, robotic assistants of all kinds (surgical, general practice, hazardous environments .. the list is endless) .. My personal favorite: an inventor's assistant with the initial task: design and build a better you. ;) Critical in the design are the sensory/manipulation subsystems, which are developed more fully in previous papers.
I believe I have made a reasonable attempt to justify a goal-oriented AI. Can the design be perverted to build the revolting robot soldier? Of course. Can the design be easily twisted to fail? Of course: simply change or replace goal 2 with something like: Follow orders. / Perform ___ at any cost .. This should not prohibit implementation. All good ideas can and will be twisted beyond recognition. That is the nature of our malevolence. We must overcome that evil with a powerful good. Part of that good will be our partners in exploration: the robots, androids, and synthetic humans with the fully conscious minds we design and give to them .. One note about Skynet, the fictional AI that decides to destroy humanity in the Terminator series: I don't work for Cyberdyne Systems.