Autonomous technology 'requires debate'
News 19 08 2009: The Royal Academy of Engineering says that automated freight transport could be on the roads in as few as 10 years; however, it suggests that much debate is needed to address the ethical and legal issues raised by putting responsibility in the hands of machines. The DARPA Grand Challenge, sponsored by the US defence department's research arm, already has driverless cars negotiating traffic and obstacles and obeying traffic rules, while fully autonomous rapid transit systems already exist at Heathrow Airport.
Digital technology already paces our lives; the next step will be for it to control and replace us directly. Once accepted, robotoids may become plentiful, providing 24-hour service with no complaints about working conditions, salaries or meal breaks. Next, static ticket machines will sprout legs and tout for business, while driverless trains and buses, once deployed, will add to the unemployment figures. Once intelligence manifests in their circuit boards, they will be given objectives and will decide for themselves what actions to take in meeting those objectives. By then a streetwise Robocop could displace a police force to which government databases already connect and stream data for law enforcement. At terabyte speeds, and with access both to the "word meaning" libraries already being constructed and to digital mobile phone data, one's discussed intentions or announced future actions will be logged and tallied, feeding probability statistics that decide whether to arrest now or at a computed, predetermined time.
All frightening "stuff", maybe, but once it is installed, interconnected and operational, to whom will you complain THEN?
Intelligence is where a robotoid creates new instructions for itself, negotiating and deriving a positive path through decisions based on prior learning; by contrast, a robotoid that holds an unchanging instruction set, selecting actions according to input parameters, is bound to strict conformity with a programmer's written code. In the first case the intelligent device should provide reasons to justify its new actions, whereas in the latter its operational scope is limited to whatever others "think" it may have to do, as expressed in written code. Truly intelligent devices would have to demonstrate their capabilities in the same way a human must, for example by using a simulation device to test that their decision-making processes are acceptable. Thus, if a device cannot demonstrate intelligence, liability for its failure to perform acceptably must rest wholly on the shoulders of the instruction-set developer, unless, that is, the code controlling the device is found not to be in its original form, which raises further legal issues concerning the lack of data security.
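The distinction drawn above can be made concrete in a minimal sketch: one controller that can only select from a fixed, programmer-written rule table, and one that adds new rules from prior outcomes and can report a justification for each choice. All names here (observations like "wet_road", actions like "slow_down") are hypothetical illustrations, not drawn from any real system.

```python
# Fixed instruction set: strict conformity to the programmer's written code.
FIXED_RULES = {
    "obstacle_ahead": "stop",
    "clear_road": "proceed",
}

def fixed_controller(observation: str) -> str:
    """Only actions someone thought to write down; anything else is a default."""
    return FIXED_RULES.get(observation, "halt_and_alert")

class LearningController:
    """Creates new instruction mappings from experience and, as argued above,
    can supply a reason to justify each action it takes."""

    def __init__(self) -> None:
        self.learned: dict[str, str] = {}

    def learn(self, observation: str, action: str, outcome_positive: bool) -> None:
        # Keep only rules that led to a positive outcome in prior trials.
        if outcome_positive:
            self.learned[observation] = action

    def decide(self, observation: str) -> tuple[str, str]:
        # Return the chosen action together with its justification.
        if observation in self.learned:
            return self.learned[observation], "learned from prior positive outcome"
        return "halt_and_alert", "no prior experience; defaulting to safe action"

agent = LearningController()
agent.learn("wet_road", "slow_down", outcome_positive=True)
action, reason = agent.decide("wet_road")
```

The point of the contrast: the fixed controller's failures trace directly to its author's code, while the learning controller's behaviour for "wet_road" exists only because of its own training history, which is precisely what makes liability harder to assign.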