[Challenge] Make an AI

computer turtle

70 replies to this topic

#61 Bubba

    Use Code Tags!

  • Moderators
  • 1,142 posts
  • LocationRHIT

Posted 13 February 2013 - 03:16 PM

View Postmailmanq, on 13 February 2013 - 02:18 PM, said:

A true AI should have true error too. Like in a game, it shouldn't be simulated error, but true error from it doing something wrong, not from a chance that it may fake an error to seem real, or to avoid making the game extremely hard.

I am no expert in AI, but this doesn't seem quite right to me. Unless the goal of an AI is to simulate the human brain as closely as possible, and NOT to create a superior, infallible logician, true error would not be a desired trait. And that is not to mention that, since machines cannot think in any language other than logic, true error wouldn't really be "true error" in the first place: it would be the error of the programmer.

By true error, I am assuming you mean accidental logic errors, so please correct me if I'm wrong.

Edit: Of course, in terms of morals there will always be "true errors" when it comes to artificial intelligence, due to the fact that many of the morals humans hold are not rational but rather just a "feeling" that you should or should not do certain things. By teaching an AI morals, you might actually be teaching it faulty logic.

One more thing: when it comes to logic, how can we ever be sure that anything is 100% accurate? It may be "logically accurate", but is it "morally accurate"? Has it considered ALL of the data available? (A lack of considered data seems to me the most likely reason for non-logical arguments in humans.) In short, I don't believe that anything can ever achieve 100% accuracy. With logic, one can come to just about any conclusion depending on the input knowledge one is given to work with, and that is something an AI would struggle with. The possibilities are infinite.

Take for example the stereotypical scenario of an AI "human termination". The AI somehow came to the conclusion that getting rid of all humans was a good thing, despite the fact that humans really have no negative effect on an AI's task (thinking). A possible reason that humans might want the termination of a species would be so that we could use the resources consumed by that species to further extend our race. But an AI wouldn't really have that need to "reproduce" (or would it? Would it think that more resources are necessary in order to 'think' more? Therein lies one of the issues in trying to comprehend AI logic). So what would trigger this human termination? What leap of twisted logic would it have to make, and would all of the input data have been considered?

Yeah I'm pretty much just spitting out my thoughts, no real factual evidence behind this other than what a fallible human mind has concluded from observation. Take everything I say with a grain of salt.

#62 Cranium

    Ninja Scripter

  • Moderators
  • 4,031 posts
  • LocationLincoln, Nebraska

Posted 13 February 2013 - 04:04 PM

That was a very well thought out post there, Bubba. I enjoyed reading your opinion. I don't think Lua, or even Computercraft for that matter, is capable of 'true AI' in the sense that it would be able to do anything a human can, given enough time to learn. I think the best we can hope for is a 'game AI', where the most we can do is navigate and evaluate the surroundings.

#63 wilcomega

  • Members
  • 466 posts
  • LocationHolland

Posted 15 February 2013 - 08:46 AM

This is my idea of an AI.
It will need CCsensors or openCCsensors.

You will have a server computer and a "baby" turtle. This turtle will do whatever you do: if you move forward, it moves forward. It also has to detect when you are not moving, so that the different movements can be saved for later use. When it has learned enough, it will start executing the movements and actions depending on different events. These events are also saved at the learning stage. This makes it so that you can teach it what to do and when to do it.

By the way, the server has the peripheral connected. It uses rednet to communicate with the turtles.

Later on, you could take it to the point where the turtle crafts other turtles and puts the AI on them. The new turtle will learn from the old turtle, so that in the end you have a city of self-directed turtles.
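A minimal sketch of the record-and-replay idea in ComputerCraft Lua (the message names, modem side, and 5-second "player stopped" timeout are all assumptions, not a finished design):

```lua
-- Sketch: a "baby" turtle that records actions broadcast by the server,
-- then replays them. Message format and timeout are assumptions.
local recorded = {}  -- learned sequence of action names
local actions = {
  forward = turtle.forward,
  up      = turtle.up,
  down    = turtle.down,
  left    = turtle.turnLeft,
  right   = turtle.turnRight,
}

rednet.open("right")  -- modem side is an assumption

-- Learning stage: save each action the player performs.
-- Five seconds of silence means the player has stopped moving.
while true do
  local _, msg = rednet.receive(5)
  if msg == nil then break end     -- timed out: learning stage over
  if actions[msg] then
    table.insert(recorded, msg)    -- remember for later use
  end
end

-- Replay stage: execute the saved movements in order.
for _, name in ipairs(recorded) do
  actions[name]()
end
```

The server side would simply `rednet.broadcast("forward")` (and so on) as it detects the player moving via the sensor peripheral; tying saved sequences to triggering events, as described above, would be the next step.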

#64 Thief^

  • Members
  • 29 posts

Posted 19 February 2013 - 01:14 AM

I'm a computer games programmer (professionally) and know a fair bit about (computer-game-level) AI. And basically, there's nothing intelligent about it: game AIs almost never have the ability to learn, and when they do, they can only learn in the limited ways that they have been pre-programmed to learn.

Most of the examples I've seen on the past few pages of this thread of "learning AI" aren't AI at all, they're databases! Particularly the example in this post: http://www.computerc...dpost__p__31818
That's not learning, it's just data storage, retrieval and manipulation.
"Learning" is being able to have an idea, try it out, fail or succeed and learn from that. True "learning" is impossible without original thoughts.


I also take offence at your claim that "anyone who has SOME skill with coding/scripting should aspire to make an AI." You are saying that I have no skill with programming.

#65 BigSHinyToys

  • Members
  • 1,001 posts

Posted 19 February 2013 - 01:28 AM

View PostThief^, on 19 February 2013 - 01:14 AM, said:

I'm a computer games programmer (professionally) and know a fair bit about (computer-game-level) AI. And basically, there's nothing intelligent about it: game AIs almost never have the ability to learn, and when they do, they can only learn in the limited ways that they have been pre-programmed to learn.

Most of the examples I've seen on the past few pages of this thread of "learning AI" aren't AI at all, they're databases! Particularly the example in this post: http://www.computerc...dpost__p__31818
That's not learning, it's just data storage, retrieval and manipulation.
"Learning" is being able to have an idea, try it out, fail or succeed and learn from that. True "learning" is impossible without original thoughts.


I also take offence at your claim that "anyone who has SOME skill with coding/scripting should aspire to make an AI." You are saying that I have no skill with programming.
It could be argued that all humans do is process input through sensors, then store and manipulate that data, using it to decide the next course of action.

The problem with AI is that the programmer decides how the program learns; it learns by a set method. If it learned how to learn, it would then be able to adapt.

#66 Thief^

  • Members
  • 29 posts

Posted 19 February 2013 - 01:52 AM

Humans have a set of inputs and outputs and the ability to see correlations between inputs, and between outputs and subsequent inputs. But the brain isn't hard-wired for what any of the inputs or outputs actually do or mean (except that a couple are wired to "unhappy", e.g. pain sensors).

For true intelligence from a program, you'd have to just give it a pool of inputs and outputs, the ability to correlate inputs with each other and outputs with inputs, and then give it a reason to do anything (in humans this is arguably happiness and unhappiness) and let it work out how its legs work.

Congrats, one baby. Good luck making it beyond that.
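That "pool of outputs plus a reason to act" idea can be sketched in plain Lua: the agent tries outputs at random, and a reward signal gradually biases it toward whichever output pays off. The actuator names and the hard-coded reward are stand-ins, purely for illustration:

```lua
-- Sketch: an agent with outputs it knows nothing about, plus a reward
-- ("happiness") signal. It reinforces whatever happens to work.
local weight = { legA = 1, legB = 1, legC = 1 }  -- hypothetical actuators

-- Pick an output at random, weighted by how often it has paid off.
local function pick()
  local total = 0
  for _, w in pairs(weight) do total = total + w end
  local r = math.random() * total
  for name, w in pairs(weight) do
    r = r - w
    if r <= 0 then return name end
  end
end

-- Assumed environment: only "legB" produces any happiness.
local function reward(action)
  return action == "legB" and 1 or 0
end

for step = 1, 1000 do
  local action = pick()
  weight[action] = weight[action] + reward(action)  -- reinforce success
end
-- After enough steps, legB dominates the weights: the agent has
-- "worked out how its legs work" without being told what they do.
```

This is the crudest possible version of the idea; the hard part, as noted above, is that real learning would have to discover the correlations themselves rather than have a reward function handed to it.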

#67 BigSHinyToys

  • Members
  • 1,001 posts

Posted 19 February 2013 - 02:07 AM

That is an interesting perspective.

So strap an FPGA to a pair of motors and a couple of cameras for input and see what happens.

#68 Cranium

    Ninja Scripter

  • Moderators
  • 4,031 posts
  • LocationLincoln, Nebraska

Posted 19 February 2013 - 02:40 AM

View PostThief^, on 19 February 2013 - 01:14 AM, said:

I also take offence at your claim that "anyone who has SOME skill with coding/scripting should aspire to make an AI." You are saying that I have no skill with programming.
You obviously read that statement very wrongly. I meant that anyone who has some good skill with programming should want to make some sort of AI, under any of the definitions given. I never said that those who do not try are bad programmers. I want to make one, but I just don't think I will ever do anything with it.

#69 Lyqyd

    Lua Liquidator

  • Moderators
  • 8,465 posts

Posted 19 February 2013 - 09:27 AM

A lighthearted implementation of an "AI", with bonus xkcd reference:

while true do
  read()
  print("So it has come to this.")
end


#70 AndreWalia

  • Members
  • 294 posts
  • LocationSt.Louis, MO

Posted 20 February 2013 - 05:26 PM

View PostLyqyd, on 19 February 2013 - 09:27 AM, said:

A lighthearted implementation of an "AI", with bonus xkcd reference:

while true do
  read()
  print("So it has come to this.")
end
I give it a math.huge out of 10

#71 BigSHinyToys

  • Members
  • 1,001 posts

Posted 20 February 2013 - 05:33 PM

Damn, that was hard to find.
http://xkcd.com/1022/
There are too many things on that site related to AI.




