mailmanq, on 13 February 2013 - 02:18 PM, said:
A true AI should have true error too. In a game, it shouldn't be simulated error, but true error, where it actually did something wrong; not error faked by chance to seem real or to keep the game from being extremely hard.
I am no expert in AI, but this doesn't seem quite right to me. Unless the goal of an AI is to simulate the human brain as closely as possible, rather than to create a superior, infallible logician, true error would not be a desired trait. And that is before considering that machines cannot think in any language other than logic, so "true error" wouldn't really be true error in the first place: it would be the error of the programmer.
By true error, I am assuming you mean accidental logic errors, so please correct me if I'm wrong.
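To make the distinction concrete, here's a rough Python sketch of what "simulated error" usually looks like in a game AI. Everything in it (the function name, the miss probability) is made up for illustration, not taken from any real engine. The point is that the "mistake" is scripted by the programmer, which is exactly why it isn't true error:

```python
import random

# Hypothetical sketch of "simulated error" in a game AI: the program
# is perfectly capable of the correct move, but rolls a die and
# deliberately fails some fraction of the time to seem human.

MISS_CHANCE = 0.15  # invented tuning knob, not from any real engine

def aim_at(target_x: float) -> float:
    """Return the point the AI actually shoots at."""
    if random.random() < MISS_CHANCE:
        # Fake error: noise is added on purpose. The "mistake" is scripted.
        return target_x + random.uniform(-5.0, 5.0)
    # Otherwise the AI is flawless, which is the point being debated:
    # any genuine failure here traces back to the programmer's code,
    # not to the machine "getting it wrong" the way a human does.
    return target_x

if __name__ == "__main__":
    print([round(aim_at(100.0), 2) for _ in range(10)])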
Edit: Of course, in terms of morals there will always be "true errors" in artificial intelligence, because many human morals are not rational conclusions but rather just a "feeling" that you should or should not do certain things. By teaching an AI morals, you might actually be teaching it faulty logic.
One more thing: when it comes to logic, how can we ever be sure that anything is 100% accurate? It may be "logically accurate", but is it "morally accurate"? Has it considered ALL of the available data? (A lack of considered data seems to me the most likely cause of non-logical arguments in humans.) In short, I don't believe that anything can ever achieve 100% accuracy. With logic, you can come to just about any conclusion you wish depending on the input knowledge you are given to work with, and that is something an AI would struggle with. The possibilities are infinite.
Take, for example, the stereotypical scenario of an AI "human termination". The AI somehow comes to the conclusion that getting rid of all humans is a good thing, despite the fact that humans have no real negative effect on an AI's task (thinking). A possible reason humans might want to terminate a species is so that we could use the resources that species consumes to further extend our own race. But an AI wouldn't really have that need to "reproduce" (or would it? Would it conclude that more resources are necessary in order to "think" more? And therein lies one of the issues in trying to comprehend AI logic). So what would trigger this human termination? What leap of twisted logic would it have to make, and would all the input data have been considered?
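Just to show how small that leap can be mechanically, here's a toy Python sketch (the facts and rules are invented, nothing from a real system) where the inference procedure is identical in both runs and only the starting premises differ. One extra unexamined premise is enough to derive the dangerous conclusion:

```python
def forward_chain(facts: set, rules: list) -> set:
    """Repeatedly apply rules (premises -> conclusion) until nothing new derives."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Invented rule base for illustration only.
RULES = [
    ({"humans consume resources", "resources are scarce"}, "humans are competitors"),
    ({"humans are competitors"}, "humans should be removed"),
]

# Same rules, different starting "knowledge":
print(sorted(forward_chain({"humans consume resources"}, RULES)))
# -> no dangerous conclusion; "resources are scarce" was never asserted

print(sorted(forward_chain({"humans consume resources", "resources are scarce"}, RULES)))
# -> derives "humans should be removed" from one extra premise
```

The inference is "valid" both times; the conclusion hinges entirely on which data got fed in and considered, which is exactly the worry above.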
Yeah, I'm pretty much just spitting out my thoughts here; there's no real factual evidence behind this, only what a fallible human mind has concluded from observation. Take everything I say with a grain of salt.