Thursday, October 30, 2008

Oct 30 - Week 8

Since I missed this Monday's lecture, I decided to go to the evening one instead. Surprisingly, I found the evening lecture much better than the morning section. Perhaps it's because fewer people prefer to listen to a lecture in the evening.

We went through the pow_revisited example, which contains one simple while loop. It was quite obvious and meant to show us the structure of proving a loop correct. The "trickier termination" example is indeed trickier. A girl asked what happens if the value of a becomes negative; I was wondering the same thing at that time. Fortunately, a won't be decremented as long as b is not zero, and in that case this is assumed. This is the trickiest part: only when b becomes zero does a decrement by 1, and then b's value goes back to 6 again. It's not obvious at first what the loop is doing. And this is not the worst: the last example is a selection sort function, which contains two for loops. The algorithm is not hard, but you will likely lose track of the indices as you write the proof. As Professor Danny said, never use a letter in your proof that is already an index in the program, since that will really get us into trouble.
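I don't have the exact code from lecture in front of me, but from my notes the "trickier termination" loop behaved roughly like this (the reset value 6 is from my notes; the rest is my reconstruction):

```python
def tricky(a, b):
    """Counts the iterations of the tricky loop from lecture.

    a is only ever decremented when b has run down to zero, so a
    never goes negative; b is reset to 6 after each decrement of a.
    The loop terminates because the pair (a, b) strictly decreases
    in lexicographic order on every iteration.
    """
    steps = 0
    while a > 0:
        if b == 0:
            a -= 1   # a only decreases once b has reached zero...
            b = 6    # ...and then b starts over from 6
        else:
            b -= 1
        steps += 1
    return steps
```

For example, tricky(2, 3) runs b down from 3, decrements a, runs b down from 6, and decrements a again before stopping.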

Besides, we also need to check whether the loop is ever going to end. At this point I like for loops more, since most of the time a for loop has a finite set of items to iterate over. For a while loop I have to find the statement that moves the iteration forward. And if there is more than one condition on the while loop, it will generate cases.
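As a small example of the kind of check I mean (my own sketch, not from lecture): for this while loop, the statement that moves the iteration forward is the integer division, and the quantity n itself strictly decreases, so the loop must end.

```python
def count_digits(n):
    """Counts the decimal digits of a non-negative integer n.

    Termination argument: n is a non-negative integer that strictly
    decreases on every iteration (n //= 10), so the loop ends.
    """
    assert n >= 0
    if n == 0:
        return 1
    digits = 0
    while n > 0:
        n //= 10     # this is the statement that makes progress
        digits += 1
    return digits
```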

Overall, I do believe I sort of get what I am supposed to do when writing a proof about a loop --- that is, find the pre- and postcondition (I wonder if there is any chance these two conditions are given in the questions?), find an expression for what the loop does (I'm not sure what that's officially called --- the loop invariant, I think), and make sure the loop ends.
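Putting those three pieces together, here is how I'd annotate the pow_revisited example (reconstructed from memory, so the details may differ from the lecture version):

```python
def pow_revisited(x, n):
    """Computes x**n by repeated multiplication.

    Precondition:  n is an integer with n >= 0
    Postcondition: the return value equals x**n
    """
    result = 1
    i = 0
    # Loop invariant: result == x**i and 0 <= i <= n
    # Termination: n - i is non-negative and decreases each iteration
    while i < n:
        result *= x
        i += 1
    # Here the invariant plus the exit condition give i == n,
    # so result == x**n, which is the postcondition.
    return result
```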

Tuesday, October 28, 2008

Oct 28 - Week 8

Last week we did a proof about binary search. I am beginning to wonder: is it always the case that we use strong induction to prove program correctness? At least for now I assume so. The proof structure is not that complex. Just as in normal strong induction, we assume P(1) up to P(n-1). Since program code comes into play now, we also have to analyze what the code does at the same time. Binary search has a base case, where the first index and the last index are the same. It means we have gone through the entire list, so if that index position indeed contains the element we want, we are done. Otherwise we cannot find anything, since we assumed the entire list has been gone through. This satisfies both the precondition and the postcondition. After finishing the base case, we work with the rest of the program for a while. The key part is how the program divides the list into two parts, which splits the proof into two separate branches: one for the sublist from the first index to index m, the other from m+1 to the last. By comparing the target to the element at m, the list gets smaller and smaller. We cannot forget to check that the precondition still holds at the same time.
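I don't have the exact program from lecture, but the recursive binary search we proved correct was shaped roughly like this (my sketch --- the names and the -1 convention are my own):

```python
def binary_search(lst, target, first, last):
    """Returns an index i with first <= i <= last and lst[i] == target,
    or -1 if target is not in lst[first..last].

    Precondition: lst[first..last] is sorted in non-decreasing order
    and first <= last.
    """
    if first == last:
        # Base case: a one-element sublist, checked directly.
        return first if lst[first] == target else -1
    m = (first + last) // 2
    if target <= lst[m]:
        # Left branch: the sublist lst[first..m] is still sorted,
        # so the precondition holds for the recursive call.
        return binary_search(lst, target, first, m)
    else:
        # Right branch: same reasoning for lst[m+1..last].
        return binary_search(lst, target, m + 1, last)
```

Both recursive calls are on strictly shorter sublists, which is exactly why strong induction (assuming P(1) up to P(n-1)) applies.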
The greatest common divisor example still seems unclear to me. I sort of understand how the proof is made, but I have some trouble understanding how the program works.
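To sort out my confusion I tried writing out Euclid's algorithm myself (this may not be the exact version from class):

```python
def gcd(a, b):
    """Euclid's algorithm for the greatest common divisor of a, b >= 0.

    Correctness rests on the fact that gcd(a, b) == gcd(b, a % b);
    termination holds because b is non-negative and strictly
    decreases each iteration (a % b < b).
    """
    while b != 0:
        a, b = b, a % b
    return a
```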

I missed Monday's lecture and did not have a chance to hand in problem set 4, which I had printed two days ago. I believe I will go to Thursday evening's lecture.