4

4.1 (and introductory material)

The introduction talked about the hazy nature of what precisely a "greedy algorithm" is, but it settled on a working definition: an algorithm that "builds up a solution in small steps, choosing a decision at each step myopically to optimize some underlying criterion." The section proper outlines a greedy-stays-ahead proof, illustrated with the interval scheduling problem, and it works through the same "let's look for a suitable measure" question we did in lecture. I used this section mainly as a reference/model for my answer to number 1 on the problem set, but that didn't work out very well. A lot of the material was dense detail about the proof for this specific problem, and it was hard to pick out the points about strategy in general. In problem one, my main issue was coming up with a measure I could use throughout the algorithm to track progress. That seemed really natural in the interval scheduling problem, and the section couldn't really help me do it elsewhere. Maybe the guidance was there, but I spent so much time looking for information about proof strategies in general because it was all buried in the specific example.
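For my own reference, here's a minimal sketch of the earliest-finish-time rule the section analyzes; the greedy-stays-ahead measure is the finish time of the latest interval accepted so far. The (start, finish) tuple format is my own assumption, not the book's.

<code python>
def schedule(intervals):
    """Greedy interval scheduling: always take the compatible
    interval that finishes earliest."""
    chosen = []
    last_finish = float("-inf")
    # Sort by finish time, then accept any interval that starts
    # after the last accepted one finishes.
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(schedule([(0, 3), (2, 5), (4, 7), (1, 8)]))  # [(0, 3), (4, 7)]
</code>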

I give it a 5/10 for readability.

4.2

This section talked about greedy exchange proofs. It helped me a lot more with problem 2 than 4.1 helped me with problem 1, but I don't know whether that's because the section is actually better, or because this kind of proof is a lot more natural for me, or maybe because problem 2 is closer to the maximum lateness problem than interval scheduling was to the truck problem…it's pretty much just total lateness, I guess. Either way, there are inversions, and there's an overall measure that you have to prove the inversions don't mess with in the wrong way: swapping an adjacent inverted pair can't make the schedule worse. One especially helpful insight was that there are at most n choose 2 possible inversions; that'll help me in my efficiency analysis.
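As a sanity check on the exchange idea, here's a sketch of the sort-by-deadline rule from the minimize-maximum-lateness section. Sorting by deadline produces a schedule with zero inversions, which is exactly what the exchange argument shows is safe. The (length, deadline) job format is my assumption.

<code python>
def min_max_lateness(jobs):
    """Earliest-deadline-first scheduling, which has no inversions."""
    t = 0
    order, max_late = [], 0
    for length, deadline in sorted(jobs, key=lambda j: j[1]):
        t += length                           # this job finishes at time t
        max_late = max(max_late, t - deadline)
        order.append((length, deadline))
    return order, max_late

print(min_max_lateness([(2, 4), (1, 2), (3, 6)]))
# ([(1, 2), (2, 4), (3, 6)], 0)
</code>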

Another thing I was thinking about while reading the text is how the proofs extend for pages. I'm not entirely sure I should be modeling my proofs after the ones in the textbook, because the textbook's proofs are explanations, not just demonstrations of truth; they're geared more toward education than simply proving. To put it another way, I don't need to explain the concept of greedy measures when I'm writing a proof to turn in to you. You get it already; you teach this stuff. I should focus on writing a clear and simple proof, like I'm writing for an academic (which I am). But I'd love to be able to model my proofs on examples like that, without all the educational explaining of concepts.

I give it a 6/10.

4.3

Man, we aren't doing this one. Bummer.

4.4

This section overviewed Dijkstra's algorithm. It was pretty easy to follow, considering I was exposed to it in 112 and during lecture. There was an interesting analogy: the authors compare Dijkstra's algorithm to a system of pipes whose lengths are the edge weights; a droplet of water falls in at the start node and creates a wave that expands at constant speed. I prefer to think of it as a quickly growing but hungry slime monster gobbling up the nearest nodes, but hey. They describe it as a sort of "continuous version" of breadth-first search. With a heap-based priority queue it runs in O(m log n), though, not linear time. They also provided two proofs of correctness, one with all the exposition and another with just the inequalities.
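Here's a rough sketch of the heap-based version I picture; the dict-of-adjacency-lists graph format is my own choice, not the book's.

<code python>
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph maps each node
    to a list of (neighbor, weight) pairs."""
    dist = {source: 0}
    frontier = [(0, source)]              # (distance so far, node)
    while frontier:
        d, u = heapq.heappop(frontier)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry, shorter path known
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(frontier, (d + w, v))
    return dist

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
print(dijkstra(graph, "s"))  # {'s': 0, 'a': 1, 'b': 3}
</code>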

I give this an 8/10.

4.5

This section is the minimum spanning tree problem. Lots of material that was also in the lecture, but it was a good review, and there was some stuff that wasn't in the lecture. For example, it talked about how, if you allow edges with cost 0, there can be multiple minimum-cost solutions that aren't trees, but there always exists a tree among the optimal solutions. It covers all three of the MST algorithms: Kruskal's, Prim's, and the backwards Kruskal's (Reverse-Delete). I think Kruskal should have changed his name to "Proper," because then we'd have Prim's algorithm and Proper's algorithm, and it would be great. The book treats it as sort of a peculiarity for one problem to have three algorithms that solve it equally correctly and with the same O bound. It makes me wonder whether there aren't other algorithms for the other problems, like shortest paths, that aren't Dijkstra's but are equally efficient. I really liked the proof that eliminates the distinct-edge-cost assumption; making small little "perturbations" took me by surprise, I guess. I didn't even know what a "perturbation" was. Right before that, they mention that you can add the edges justified by the cut and cycle properties in any order whatsoever. They're real jerks for bringing up something cool and then saying "while we will not explore this further here…" Who does that?
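Since the section covers all three algorithms, here's a quick sketch of just Prim's, reusing the same adjacency-list format as my Dijkstra sketch above; again, the representation is my assumption.

<code python>
import heapq

def prim(graph, root):
    """Minimum spanning tree edges via Prim's algorithm; graph maps
    each node to a list of (neighbor, weight) pairs (undirected)."""
    in_tree = {root}
    tree_edges = []
    frontier = [(w, root, v) for v, w in graph[root]]
    heapq.heapify(frontier)
    while frontier:
        w, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue                      # attaching v would add a cycle
        in_tree.add(v)
        tree_edges.append((u, v, w))
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(frontier, (wx, v, x))
    return tree_edges

graph = {"a": [("b", 1), ("c", 3)],
         "b": [("a", 1), ("c", 2)],
         "c": [("a", 3), ("b", 2)]}
print(prim(graph, "a"))  # [('a', 'b', 1), ('b', 'c', 2)]
</code>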

I give it a 9/10.

4.6

This section talks about union-find, which you need to implement Proper's (Kruskal's) algorithm efficiently. Union-find is a lame name also. I have an idea: Kruskal can change his name to Proper so we can have Prim and Proper, and then we can just call the union-find data structure "Kruskal's," because Kruskal is still a pretty cool name, and it'd be a pity to eliminate it from existence.

Anyway, things I drew from the text: union-find cannot handle deletions of edges that split components. It's really a wacky data structure, because Find always returns a _name_ of a set instead of the set itself. The textbook names each set using one of the elements it contains, which I guess makes sense because, as it says, you can never split a set, which means a set can never lose the element it gets its name from. If you could, that naming scheme might be a bad idea. The text goes through a lot of different implementations of union-find. First Find is O(1) but Union takes O(n); then they improve that so a sequence of k Unions takes at most O(k log k) time, which is wacky. But you can also implement it so that Union is O(1) and Find is worst-case O(log n), and then with path compression there's this crazy result that a sequence of n Finds takes O(n α(n)) time, where α is the "inverse Ackermann function." Which just blows my mind.
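To make sure I actually understand it, here's a minimal union-find sketch with union by size and path compression, along the lines of the implementations the section compares; the class interface is my own, not the book's.

<code python>
class UnionFind:
    """Disjoint sets with union by size and path compression."""
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, x):
        # Walk up to the root (the set's "name"), compressing the path.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx               # keep the bigger set's name
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

uf = UnionFind("abcd")
uf.union("a", "b")
uf.union("c", "d")
print(uf.find("b") == uf.find("a"))  # True
print(uf.find("a") == uf.find("c"))  # False
</code>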
