Saturday, 10 December 2011

From thinking about Algorithms to thinking about AI

Thinking up algorithms.

This is the major aspect lacking in the CS curriculum in general. We are taught many examples, and while that allows some inductive learning, I don't think I have really mastered the art of writing algorithms. Sure, we can understand them quickly once taught, but it's amazing how much more thought great algorithms take to create, even when they can be stated on a single A4 page.

We don't really know which path to go down and can only guess from experience. The rest appears to be plain hard work, luck, and intuition. Following frameworks of thinking will get you halfway, but there are intellectual hurdles whose difficulty really differs from mind to mind.

This all seems similar to the P vs NP question, where a solution can be checked very quickly, but finding that solution may, as far as we know, take exponentially longer.
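As a toy illustration of that check-versus-find asymmetry (a minimal sketch of my own, using subset-sum as the example problem): verifying a proposed answer is a one-liner, while the obvious way to find one tries every subset.

```python
from itertools import combinations

def verify(subset, target):
    # Checking a candidate solution: just a sum, linear time.
    return sum(subset) == target

def solve(numbers, target):
    # Finding a solution by brute force: up to 2^n subsets to try.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(subset, target):
                return subset
    return None

print(solve([3, 34, 4, 12, 5, 2], 9))  # prints (4, 5)
```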

In the context of programming, the issue is roughly equivalent to how some programmers consistently write less efficient code, using more memory or steps than necessary. Some people just don't look far enough ahead to see the alternatives, as in the sketch below.
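A made-up minimal example (the task and the function names are my own, purely for illustration): both functions below do the same job, but the second comes from looking one step ahead to a cheaper data structure.

```python
def has_duplicates_slow(items):
    # The first idea that works: compare every pair, O(n^2) time.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_fast(items):
    # One step further ahead: a set membership test, O(n) on average.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```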

Refining and learning are integral parts of computer science. The only trick is to do them faster and more thoroughly, and to avoid repeating identical mistakes. And how do you learn faster? By making more, different mistakes while doing what you want to do. That's probably the best thing a CS course could teach, especially to the majority of students who are not gurus at what they do.

(Many CS students actually dislike programming and avoid such courses whenever they can, because let's face it: bugs can be scary and annoying. They directly mirror your thinking, of which they are a product. They make you see what you don't want to face. I suspect most people don't want to see themselves as flawed and prone to error; it just doesn't fit the self-image (and the socially approved, socially generated image) of a professional, a "good engineer", an excellent student, or a successful person. I feel that too. It can be psychologically straining when your mirror image isn't what you want.)

The same philosophy should be applied to AI. Instead of having to get the whole thing right, from top-down design to working prototype, the method should be to work on efficient self-learning with minimal code: allowing the system to make mistakes and take chances, while giving it the capability to learn about itself and the universe (and eventually to define for itself what a mistake is). A toy sketch of this trial-and-error loop follows.
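To make the trial-and-error idea concrete in the smallest possible way, here is a sketch of an epsilon-greedy bandit learner (the reward numbers, the epsilon value, and the whole setup are assumptions of mine, not a proposal for real AI): the agent deliberately risks "mistakes" through random exploration and updates its beliefs from whatever happens.

```python
import random

# Toy "world": three actions with unknown average rewards (made-up numbers).
true_rewards = [0.2, 0.5, 0.8]

estimates = [0.0, 0.0, 0.0]  # the agent's current beliefs
counts = [0, 0, 0]
epsilon = 0.1  # how often the agent deliberately risks a "mistake"

for step in range(10000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore: take a chance
    else:
        action = estimates.index(max(estimates))  # exploit current belief
    reward = random.gauss(true_rewards[action], 0.1)  # noisy feedback
    counts[action] += 1
    # Incremental average: learn from the outcome, good or bad.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # should approach [0.2, 0.5, 0.8]
```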

But isn't this contrary to many of the applications we want to use AI in, where it must be perfect, bug-free, and infallible or the mission fails? This is where the field of AI splits. On one hand we want predictable, correct behavior all the time, preferably provably correct. On the other we want systems that are as adaptable, autonomous, and free as possible, hopefully doing things we don't expect. I argue this division comes mainly from what we think the role of AI ought to be, what we want it to do, and how we think about intelligence.

Do we want them to be like slaves to be commanded and made to serve, or free individuals?
(This has a lot to do with how we already see other people and objects in our environment.)
