Thursday, October 30, 2008
The 3-hour night-section lecture is real torture. For the counter(m, n) example, I couldn't relate the loop invariant directly to the correctness of the program. It's odd that b gets set to 6.
I tried picking two non-zero natural numbers for m and n at random. The loop still has to execute 6 more times to terminate after a reaches 0, since b is assigned 6 and c has to count those iterations too. The loop invariant c = 7m + n seems necessary to reach the postcondition, but it's not that straightforward to understand. Now I realize the importance of code design that makes other people's lives easier, especially for people who are learning how to prove loop invariants. Back to the loop invariant: I usually trace the algorithm on a simple example, iterating a few times and watching what changes each pass causes. To me that's more helpful than just dreaming up some fancy relationship between the precondition and the postcondition.
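I don't have the exact lecture code for counter(m, n) in front of me, so here is a minimal sketch in its spirit; the variables a, b, c, the reset b = 6, and the exit value c = 7m + n are from my notes, but the loop body itself is my reconstruction:

    def counter(m, n):
        # Precondition: m and n are natural numbers.
        a, b, c = m, n, 0
        # Loop invariant: c + 7*a + b == 7*m + n (holds before every iteration).
        while a > 0 or b > 0:
            if b == 0:
                a, b = a - 1, 6   # spend one unit of a, reset b to 6
            else:
                b -= 1
            c += 1                # each iteration counts once
        # At exit a == 0 and b == 0, so the invariant gives c == 7*m + n.
        return c

Termination follows because 7a + b is a natural number that strictly decreases by 1 on every iteration, and once it hits 0 the loop condition fails.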
For more complicated loops, like nested loops, these things have to be done inside out: prove termination and the loop invariant for the innermost loop first, then, for any values of the outer loop's variables, prove termination and the loop invariant for the outer loop. In the selection_sort example, the inner loop runs from index j = i+1 to the end of the list. Selection sort basically "selects" the right element for each position from the unsorted section of the list; see the sketch below. I took [4, 2, 3, 1] as an example and ran the loop. The inner loop finds the candidate by comparing the value 4 to 2, 3, and 1, then the outer loop moves to the next index, and this gradually sorts the whole list. Each of the inner loop's k+1 iterations may or may not update the candidate, since two values can be equal. These "special" cases come from the postcondition, which here is non-decreasing order. With that settled, it's not hard to prove the program "moves" and "compares" within the right range.
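Here is selection sort with the invariants written in as comments; I'm assuming the standard in-place version, which may differ in details from the lecture's code:

    def selection_sort(A):
        # Postcondition: A is sorted in non-decreasing order (in place).
        n = len(A)
        for i in range(n):
            # Outer invariant: A[0:i] is sorted, and every element of
            # A[0:i] is <= every element of A[i:n].
            min_index = i
            for j in range(i + 1, n):
                # Inner invariant: A[min_index] is a minimum of A[i:j].
                if A[j] < A[min_index]:
                    min_index = j
            A[i], A[min_index] = A[min_index], A[i]

On [4, 2, 3, 1] the first inner loop compares 4 against 2, 3, and 1, settles on 1, and swaps it to the front; the outer loop then repeats from index 1.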
Wednesday, October 29, 2008
Busy Week 8
A2 was submitted a few days ago, but I didn't work out the first problem, and neither did my partner.
I didn't derive a valid recursive formula for the third problem either, so I just proved it without one, which will probably cost some marks.
Luckily the 207 midterm ended this afternoon, so now I have more time to review the material Danny covered on iterative programs.
It's again easy to start with, and more interesting than the previous sections. Obviously we will use a lot of the simple induction machinery here to prove the correctness of iterative programs. It's tricky to find a proper loop invariant, and it always takes time to go through the loop and see how this "incomplete solution" works. Again, to prove correctness, we carefully trace the loop and the changes each iteration causes. This is what lets us establish the implication from the ith iteration to the (i+1)th.
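As a toy example (mine, not from lecture) of that i-to-(i+1) step, take a loop that sums 1 through n:

    def sum_to(n):
        # Precondition: n is a natural number.
        i, total = 0, 0
        # Invariant P(i): total == i * (i + 1) // 2, i.e. total == 0 + 1 + ... + i.
        while i < n:
            i += 1
            total += i
            # P(i-1) before the update, plus the new term i, gives
            # total == (i - 1) * i // 2 + i == i * (i + 1) // 2, i.e. P(i).
        # At exit i == n, so total == n * (n + 1) // 2.
        return total

The comment inside the loop is exactly the induction step, and the exit condition plus the invariant deliver the postcondition.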
Friday, October 24, 2008
the end of week 7
Proving program correctness is always an annoying part, especially when dealing with searching for and sorting items in an array or list. There are many conditions to consider for a program's correctness (most of the programs we saw this week were recursive).
The proof of correctness goes something like: by induction on the size of the input, prove precondition => (termination and postcondition). The structure of complete induction follows the recursive structure nicely.
Then we go through the first few lines, and if we're lucky we can proceed to the annoying part of the recursive function. There, the recursive step leads us to examine the whole function again with a different, smaller input size. We don't stop checking until we get the expected result, the postcondition.
But how do we split up the input so that the proof looks more straightforward and is easier to check? For binary search, the trick is to break the list at the middle. So there may be various ways, and various algorithms, for different problems to shrink the searching or sorting range. The next thing is to keep executing. The cases must be carefully chosen, otherwise some unexpected error may crash the program.
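For reference, here is a minimal recursive binary search with the conditions written out; this is my own sketch (using half-open ranges), not necessarily the exact version from class:

    def binary_search(A, x, lo, hi):
        # Precondition:  A[lo:hi] is sorted in non-decreasing order.
        # Postcondition: returns an index i with lo <= i < hi and A[i] == x,
        #                or -1 if x is not in A[lo:hi].
        # The induction is on the size hi - lo of the range.
        if lo >= hi:                 # base case: empty range
            return -1
        mid = (lo + hi) // 2         # break the list at the middle
        if A[mid] == x:
            return mid
        elif A[mid] < x:             # recurse on a strictly smaller range
            return binary_search(A, x, mid + 1, hi)
        else:
            return binary_search(A, x, lo, mid)

Both recursive calls get a range smaller than hi - lo, which is exactly where the induction hypothesis applies.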
The proof is complicated most of the time because it has to ensure all the code works, from the first line on, for every valid initial input. But it saves us a lot of time afterwards when we handle extra cases or fix mistakes.
Monday, October 20, 2008
Lecture about Program Correctness
Monday came; Danny didn't.
A lecturer named Nick gave us a lecture about program correctness.
Nick started with the sentence "My program is correct." He talked about how correct a program can be from different points of view, and spent some minutes explaining the definition of conditions and the relationship between precondition and input, postcondition and output. This helped us recall what we learned in CSC165.
Then he gave the very familiar example, binary search, to reinforce the concept of how a program is correct at each step. One question (maybe just for fun, maybe not) he raised during the lecture was "how do you crash a program", which sounds very illegal. But at the same time, this is just a way of approaching program correctness backwards. I believe that if you can crash a program easily, then it wouldn't be too hard to prove or disprove its correctness. That is where you see right from wrong. So yes, let's enjoy crashing our programs.
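A made-up example of "crashing" a program (mine, not from the lecture): find an input that violates the precondition, and watch the correctness argument fall apart with it.

    def fact(n):
        # Precondition (unchecked): n is a natural number.
        return 1 if n == 0 else n * fact(n - 1)

    fact(5)    # fine: 120
    fact(-1)   # precondition violated: recursion never reaches 0 -> RecursionError

Every step of the correctness proof leans on the precondition, so inputs outside it are exactly where to look for a crash.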
Saturday, October 18, 2008
Week 6
This week we focused on analyzing the complexity of mergesort, and we did some unwinding and proofs. The repeated-substitution procedure is simple: in most cases, when n is larger than some integer, the definition lets us easily get a formula from the right-hand side. Since we want f(n) (or g(n), or whatever) to decrease down to f(0) = 1 or g(0) = 1, the formula we eventually derive looks like a constant to the power of something, plus something else. To make a conjecture, we just have to define n carefully, usually as a natural number with some restrictions. The divide-and-conquer strategy gives us a way of breaking big problems into smaller and smaller parts, until the size is small enough to solve directly. Well, at least by splitting the problem up and solving some of the subproblems, the base cases, we can always get a few marks.
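A quick way to sanity-check such a conjecture is to compute the recurrence directly and compare. Assuming the usual mergesort recurrence T(1) = 1 and T(n) = 2T(n/2) + n (the lecture's constants may differ), unwinding for n = 2^k gives T(n) = n log2(n) + n:

    from math import log2

    def T(n):
        # Mergesort-style recurrence, defined for n a power of 2.
        return 1 if n == 1 else 2 * T(n // 2) + n

    for k in range(10):
        n = 2 ** k
        assert T(n) == n * log2(n) + n   # conjecture holds for these n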
Friday, October 10, 2008
term test 1
I used simple induction for all three questions, since I guessed it would be easier to prove P(n) => P(n+1) for the predicates involved, except in the last question, where complete induction would probably have been the proper tool. I had seen a similar problem before; since I was writing in a hurry, I just picked the technique most familiar to me, simple induction. I got stuck when I was dividing the set into a part that the induction hypothesis applies to. The (n+1)st element took me a lot of time when deriving the subsets containing it. My answer might be wrong, since the induction step wasn't that successful. With 5 more minutes, maybe I could have done the induction step one more time and found out which count of subsets was going wrong.
Luckily the first two questions required fewer cases to analyze. I think I did well on the recursive one.
Tuesday, October 7, 2008
review for week 4
The algorithm complexity part is usually thought-provoking.
I didn't see big differences when I ran a program on my laptop versus other computers, but I did see how many fewer steps recursion can generate. Still, iterative functions come more naturally to me, whatever efficiency recursion has. Every time I analyze a recursive function, my eyes jump back and forth; though recursion makes my code neat, it usually takes me longer to solve a problem that way. We also did a bit of unwinding, estimating the time complexity of binary search and proving it.
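The binary search unwinding can be checked numerically too. Assuming the recurrence S(1) = 1 and S(n) = S(n/2) + 1 (one comparison, then half the range; the lecture's constants may differ), unwinding for n = 2^k gives S(n) = log2(n) + 1:

    from math import log2

    def S(n):
        # Binary-search-style recurrence, defined for n a power of 2.
        return 1 if n == 1 else S(n // 2) + 1

    for k in range(10):
        n = 2 ** k
        assert S(n) == log2(n) + 1   # matches the unwound closed form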