CS 61B, Data Structures
Matt's Notes
Welcome! Here are some notes from our sections together!
Click on any link below to view.
Dis 13. Shortest Paths and MSTs.
November 27+28, 2018.
Shortest Path Algorithms
Dijkstra's algorithm finds the shortest paths from a starting node to every other node in the graph. We maintain a fringe, which acts as a priority queue of the places we currently know how to get to. We initialize it with just our starting node, because we trivially know how to get to our starting node from our starting node (that path has length 0). While the fringe is not empty, we take the shortest path in our fringe and lock it in; this is the shortest path to that destination node, and the algorithm will not find a better one later on. Then, we add the immediate neighbors of that destination node into our fringe, with the appropriate total lengths. We repeat until the fringe is empty, which is when we have found the shortest paths to each of the nodes in the graph.
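
Here is a rough sketch of Dijkstra's in Java, assuming an adjacency-list representation where graph.get(u) holds made-up {neighbor, weight} pairs; the class and method names are illustrative, not from the course code:

import java.util.*;

public class Dijkstra {
    /** Returns the shortest distance from start to every node. */
    static int[] shortestPaths(List<List<int[]>> graph, int start) {
        int n = graph.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[start] = 0;
        boolean[] lockedIn = new boolean[n];
        // The fringe: a priority queue of {node, total length} pairs.
        PriorityQueue<int[]> fringe = new PriorityQueue<>(Comparator.comparingInt((int[] p) -> p[1]));
        fringe.add(new int[]{start, 0});  // the trivial length-0 path
        while (!fringe.isEmpty()) {
            int u = fringe.poll()[0];     // shortest path currently in the fringe
            if (lockedIn[u]) continue;    // a shorter path was already locked in
            lockedIn[u] = true;
            for (int[] edge : graph.get(u)) {
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    fringe.add(new int[]{v, dist[v]});  // neighbor, with total length
                }
            }
        }
        return dist;
    }
}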

Dijkstra's is correct in that once we pull a path off the fringe, it is necessarily the shortest, and we won't find a better one in future steps of the algorithm. You can imagine Dijkstra's acting kind of like BFS: there is a wavefront of increasing radius expanding from the start node. The wavefront engulfs a node at the first opportunity it has, which is via its shortest path. If a "better" path were found later, there would be a contradiction, because in a BFS-like way, the wavefront would have engulfed the node via that better path before it engulfed it via the path we claimed was the shortest.

Legend has it that Dijkstra's algorithm was invented by Dijkstra connecting a bunch of cards with strings, like a graph, and picking one (starting) card up off the table. The order in which the cards come off the table is the same order in which the algorithm locks in the shortest paths. This is because the height of the starting card above the table acts like the wavefront: at any given point in time, all cards connected by strings with a total length less than that height will necessarily also be off the table.

A* is Dijkstra's algorithm but with the addition of a heuristic at every step. The problem with Dijkstra's is that it finds the shortest path from a starting node to every other node in the graph. But what if we only care about one destination node and we don't want to spend time exploring paths to nodes in the opposite direction? A* homes in on our goal and gets us there without wandering aimlessly.

A heuristic is an estimate of how far we are from our goal. A heuristic is admissible if it does not overestimate how far we are from the goal (it can be either right on target or an underestimate in order to qualify as admissible). If our heuristics are not admissible, A* may not return the actual shortest path.

A* is executed in the same way as Dijkstra's, but our values in the fringe will be the sum of the distance travelled so far to get there (what we had before), and our heuristic estimate of how much farther we have to go. Note that this means that Dijkstra's is simply A* with heuristics of all zeros.
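
As a tiny illustration (a hypothetical fragment, not from the worksheet), the only change to the Dijkstra sketch above is the value we order the fringe by:

/** Fringe priority in A*: distance travelled so far, plus the heuristic
 *  estimate of how much farther we have to go. A heuristic that always
 *  returns 0 gives exactly Dijkstra's. */
static int priority(int distSoFar, int heuristicEstimate) {
    return distSoFar + heuristicEstimate;
}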

A* is correct given admissibility. If a heuristic is an overestimate, it may prevent us from selecting the actual shortest path off our fringe, because we are too scared to touch it; we think it is more expensive than it really is. But if a heuristic is admissible, then at worst we are overly eager to explore paths that we think are cheap but actually turn out expensive. But in the end, we won't actually select a more expensive path over the actual shortest path. This example from Wikipedia illustrates why admissibility implies optimality.

MSTs
A Minimum Spanning Tree (MST) is a minimum-total-weight set of edges in a graph that connects every node to every other node in some way; for any node, you can take some path to get to any other node (every node is reachable from any other). Note that an MST cannot have any cycles: if it did, we would be paying for one more edge than necessary, since we could remove the heaviest edge in that cycle and still be able to get from any node to every other node in all cases. Note also that, therefore, a graph with V vertices will have V-1 edges in its MST. Again, this is because it has no cycles: each node that joins the tree is attached by exactly one edge, and cutting that edge would detach it completely (if it didn't detach it completely, then there was another way to get to it, namely, a cycle).

Prim's Algorithm gets us an MST. It starts by arbitrarily choosing one node and putting it into the MST. It then looks at all the edges that could be added to expand our MST to include more nodes. We wouldn't want to choose an edge that connects two nodes already in our MST, since that would introduce a cycle; we want to choose from the edges that connect a node in our MST to a node not yet in our MST. Out of all our options, we can greedily take the lightest edge that accomplishes this. Our MST is now one node larger, and we can continue, considering the additional options we now have as well, until all the nodes in the graph are included in our MST.
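
Here is a sketch of (lazy) Prim's under the same made-up graph representation as the Dijkstra sketch above:

import java.util.*;

public class Prim {
    /** Returns the total weight of an MST, assuming the graph is connected. */
    static int mstWeight(List<List<int[]>> graph) {
        boolean[] inMST = new boolean[graph.size()];
        // Candidate edges as {node, weight} pairs, lightest first.
        PriorityQueue<int[]> fringe = new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
        fringe.add(new int[]{0, 0});      // arbitrarily start from node 0
        int total = 0;
        while (!fringe.isEmpty()) {
            int[] candidate = fringe.poll();
            int v = candidate[0];
            if (inMST[v]) continue;       // both endpoints already in: skip to avoid a cycle
            inMST[v] = true;
            total += candidate[1];        // lightest edge reaching a new node
            for (int[] edge : graph.get(v)) {
                if (!inMST[edge[0]]) {
                    fringe.add(new int[]{edge[0], edge[1]});
                }
            }
        }
        return total;
    }
}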

Kruskal's Algorithm also gets us an MST (not necessarily identical to that returned by Prim's, but both will be equal in total weight since they are both optimal). In Kruskal's, we sort all of our edges from lightest to heaviest. Then, we go down that list of edges and add each edge iff it does not introduce a cycle.
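
A sketch of Kruskal's, assuming edges come as made-up {u, v, weight} triples, and using the UnionFind class sketched at the end of this section for the cycle check:

import java.util.*;

public class Kruskal {
    /** Returns the edges of an MST for a connected graph with numNodes nodes. */
    static List<int[]> mst(int numNodes, List<int[]> edges) {
        edges.sort(Comparator.comparingInt((int[] e) -> e[2]));  // lightest to heaviest
        UnionFind uf = new UnionFind(numNodes);
        List<int[]> mst = new ArrayList<>();
        for (int[] e : edges) {
            if (uf.find(e[0]) != uf.find(e[1])) {  // iff no cycle introduced
                uf.union(e[0], e[1]);
                mst.add(e);
            }
        }
        return mst;
    }
}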

The correctness of these algorithms is backed by the Cut Property. It states that if we cut our set of nodes into two separate sets, then of all the edges that cross this cut, the lightest one will be in the MST. To see why, assume for the sake of contradiction that the edge we chose (the lightest one across the cut) is in fact not in the MST. The MST still has to connect the two sides somehow, so it contains some other "real" crossing edge that serves the same purpose of connecting the two separate sets. It cannot be the case that the "real" edge and another crossing edge together are a better choice than the one we chose: requiring two edges across this cut would imply a cycle, since all of the nodes within each of the two separate sets are connected to each other. So it must be that the "real" edge alone does the job ours would have done. But that cannot be a better choice either, because we chose our edge precisely for being the lightest one across the cut; the "real" edge must be heavier. The two edges serve the same purpose, yet the "real" edge is more costly, so there is no MST in which it is a better choice than the one we chose.

As an additional note, Kruskal's Algorithm must check at every step whether adding a particular edge introduces a cycle or not. Naively, this can be done by performing a traversal on the edges chosen so far plus the proposed edge: if we traverse back to a node we've already seen, there must be a cycle. But this costs linear time at every step. Instead, we can keep track of which nodes are connected together using a data structure called Quick Union. Here is a good guide from Spring explaining all of its variants. Most importantly, there are two optimizations we can take advantage of: (1) Weighted Quick Union, in which we merge smaller sets into larger ones, and (2) Path Compression, in which, after we take the time to find a node deep in the tree-like structure, we move it up directly under the root for easier access next time. With optimization (1), we achieve Theta(log n) time in the worst case, since the height of the tree will never become too large. With optimization (2) on top of that, we do even better than Theta(log n); the function that bounds this (amortized) runtime is the inverse Ackermann function.
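
Here is a compact sketch of Weighted Quick Union with path compression, the structure the Kruskal sketch above relies on:

public class UnionFind {
    private final int[] parent;
    private final int[] size;

    public UnionFind(int n) {
        parent = new int[n];
        size = new int[n];
        for (int i = 0; i < n; i++) {
            parent[i] = i;   // everyone starts as their own root
            size[i] = 1;
        }
    }

    /** Returns the root of x's set, compressing the path along the way. */
    public int find(int x) {
        if (parent[x] != x) {
            parent[x] = find(parent[x]);  // (2) path compression
        }
        return parent[x];
    }

    /** Merges the sets containing a and b. */
    public void union(int a, int b) {
        int rootA = find(a);
        int rootB = find(b);
        if (rootA == rootB) return;
        if (size[rootA] < size[rootB]) {  // (1) weighted: smaller under larger
            int tmp = rootA; rootA = rootB; rootB = tmp;
        }
        parent[rootB] = rootA;
        size[rootA] += size[rootB];
    }
}
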
Dis 08. Traversals.
October 16+17, 2018.
Preorder, Inorder, Postorder
Pre-order, In-order, and Post-order are ways to describe when nodes in a binary tree are processed (an example of processing a node would be printing its value). A binary tree is a tree in which nodes can have up to 2 children each.

Pre-order means to process the current node before pre-order traversing down the left and right of the current node. In-order means to first in-order traverse the left of the current node, process the current node, and then in-order traverse down the right of the current node. Post-order means to post-order traverse down the left and right of the current node before finally processing the current node.

Visually, you can determine the order nodes are processed during pre-, in-, or post-order traversal by drawing a peg on the west, south, or east side of each node, respectively, and tracing a line around the tree diagram as shown below. The nodes are processed in the order the yellow line touches each red peg.

Pre-order has pegs on the west side of each node.
A, B, C, D, E.

In-order has pegs on the south side of each node.
B, A, D, C, E.

Post-order has pegs on the east side of each node.
B, D, E, C, A.
More formally, we can write this pseudocode to reflect the definitions of when pre-, in-, and post-order processing occurs:

binaryTreeTraversal(node):
    preorder(node)
    if node.left != null:
        binaryTreeTraversal(node.left)
    inorder(node)
    if node.right != null:
        binaryTreeTraversal(node.right)
    postorder(node)
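
The pseudocode translates directly into runnable Java, assuming a bare-bones Node class (names are hypothetical):

class Node {
    int value;
    Node left, right;
    Node(int value) { this.value = value; }
}

class Traversals {
    static void preorder(Node node) {
        if (node == null) return;
        System.out.print(node.value + " ");  // process before children
        preorder(node.left);
        preorder(node.right);
    }

    static void inorder(Node node) {
        if (node == null) return;
        inorder(node.left);
        System.out.print(node.value + " ");  // process between children
        inorder(node.right);
    }

    static void postorder(Node node) {
        if (node == null) return;
        postorder(node.left);
        postorder(node.right);
        System.out.print(node.value + " ");  // process after children
    }
}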



DFS, BFS
The yellow path we traced here for pre/in/post-order is known as depth-first search traversal (DFS). This is because in all three variations, we visited nodes by venturing all the way down a path before back-tracking and venturing down another path (we go deep down into the tree first). Given the tree above, DFS traversal gives: 1 2 3 4 5 6 7.

On the other hand, breadth-first search traversal (BFS) involves visiting nodes level by level. Again, remember that pre/in/post-order does not make sense in the context of BFS, because in all three of those, we visit nodes by following that yellow path in DFS fashion. Given the tree above, BFS traversal gives: 1 2 4 5 3 6 7.

Before we continue, recall that a stack data structure is like a stack of pancakes. If new pancakes are added to my stack, I must eat the fresh pancakes on the top of the stack before I can eat the earlier ones which are sitting lower in the stack. In contrast, a queue is like a queue of people standing in line to buy fish and chips at the marina. Whoever got in line first will be the first one served, and as new people jump in line, they must wait until everyone before them has been served before they have a chance to enjoy light, flaky fish steaks paired with luscious tartar sauce and a modest side of fries.

The pseudocode below describes the algorithm for DFS traversal. Note that just as we need an index/counter variable to traverse an array, or a node pointer to traverse a LinkedList, for tree traversal we can use certain data structures to help us keep track of where we are. Such a structure is called a fringe. For DFS, we use a stack and follow this procedure:

Use STACK as fringe.
Add root to fringe.
while fringe not empty:
    Remove next from fringe.
    Add its children to fringe.
    Process removed node.

The procedure for performing BFS traversal is almost identical! The only difference is that we instead use a queue as our fringe:

Use QUEUE as fringe.
Add root to fringe.
while fringe not empty:
    Remove next from fringe.
    Add its children to fringe.
    Process removed node.
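Sketched in Java, reusing the hypothetical Node class from earlier, the two procedures really do differ only in the fringe:

import java.util.*;

class FringeTraversals {
    static void dfs(Node root) {
        Deque<Node> fringe = new ArrayDeque<>();  // used as a stack
        fringe.push(root);
        while (!fringe.isEmpty()) {
            Node next = fringe.pop();
            if (next.right != null) fringe.push(next.right);
            if (next.left != null) fringe.push(next.left);  // left comes off first
            System.out.print(next.value + " ");
        }
    }

    static void bfs(Node root) {
        Queue<Node> fringe = new ArrayDeque<>();  // used as a queue
        fringe.add(root);
        while (!fringe.isEmpty()) {
            Node next = fringe.remove();
            if (next.left != null) fringe.add(next.left);
            if (next.right != null) fringe.add(next.right);
            System.out.print(next.value + " ");
        }
    }
}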

Dis 06+07. Asymptotics.
October 2+3+9+10, 2018.
We talked about the formal definition of the asymptotic bounds, and we made it our goal to gain intuition for how they work so that we don't have to set up a limit explicitly every single time. The summer CS 61BL website has a great and thorough explanation of this for you: CS 61BL Lab 07

Further, we plotted some lines using an iPython notebook just so that we can see what some functions look like and why it would make sense to want the asymptotic definitions we have. You can see the notebook we were looking at here: iPython notebook

Some strategies for asymptotic analysis: Remember that your usage of Big-O, Big-Omega, or Big-Theta depends on the exact question being asked.

If you are simply asked for the runtime of the function and you find that you cannot give a tight bound, you should give an upper bound and possibly a lower bound as well if it is meaningful. These bounds would describe the range of runtimes for all possible cases of inputs.

If for the same code you are asked specifically for the best-case or worst-case runtime, then you would give the same magnitude of answer as your lower or upper bound, respectively, but you would present it as a tight bound. This is because the best (or worst) case is one particular case (one particular input), and that one case is tightly bound by a particular function; it doesn't really make sense to give only a lower or upper bound for a single case when you can provide a tight bound instead.
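
For example (a hypothetical snippet, not from the worksheet), consider searching an unsorted array:

/** Overall: O(n) upper bound and Omega(1) lower bound; no tight bound exists.
 *  Best case (target is the first item): Theta(1).
 *  Worst case (target is absent): Theta(n). */
static boolean contains(int[] a, int target) {
    for (int x : a) {
        if (x == target) {
            return true;   // might return on the very first item
        }
    }
    return false;          // or only after scanning all n items
}
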
Dis 04+05. Inheritance.
September 18+19+25+26, 2018.
Click here to view slides from this discussion.
Dis 03. Static and Arrays.
September 11+12, 2018.
Static Methods and Variables
Fields can be labeled as static (class variables), and methods can also be labeled as static.

For static fields, there is only one variable that is shared among all instances of the class, and thus consequently, all object instances see the same value for that field (again, because there is a single variable shared among all; there is not one variable per instance). To call upon a static field, we use dot notation on the class name rather than on a particular instance.

For static methods, an object instance is not required for the method to be called. As a consequence, a static method cannot access non-static fields. If you'd like, you can call static methods on an object instance, but in that case Java will look up the class of the instance and call the method as we've just described.
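
A small sketch with made-up names:

class Dog {
    static int population = 0;  // one shared box for the whole class
    String name;                // one box per instance

    Dog(String name) {
        this.name = name;
        population += 1;
    }

    static int getPopulation() {
        // Cannot touch this.name here: static methods have no instance.
        return population;
    }
}

// Usage:
//   new Dog("Rex"); new Dog("Fido");
//   Dog.getPopulation() returns 2, called via the class name.
//   Calling it on an instance also works: Java looks up the class.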

Problem Solving Strategies
Whenever you are asked to write a significant amount of code by hand, first make a concrete example for yourself so that you understand what the problem is asking you to do, so that you get a feel for the operations that may be involved, and so that you have something to check against once you actually start writing code in a bit.

Next, break down the problem into functional parts or steps that you know how to do. If you are stuck, write out the parts of the code that are most obvious to you, even if you have to leave conditions or other pieces blank. For example, if you have to do an operation repeatedly, you know that there will have to be a loop or a recursive call, so you can set up the components for that first while you're still thinking (loop conditions, increments, base cases, etc.). Try to get the general situation working first, and then worry about the edge cases once you're done. Oftentimes, those can be addressed by adding a simple if case at the top of your code later.

Arrays vs LinkedLists
Finally, we informally made observations about the arrays and linked lists that we worked with today. We'll cover these in more detail later when we study runtime analysis. But for now, we saw that it was easy to get items out of an array by index, but that it was harder to "resize" one: arrays are of fixed length, so resizing actually requires us to make a new array and copy everything over. Linked lists, on the other hand, require that we traverse the list in order to access a particular item, but inserting or deleting an item only requires us to switch around a few pointers.
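
For a quick taste (variable names hypothetical), here is what "resizing" an array really costs:

class ResizeDemo {
    public static void main(String[] args) {
        int[] old = {1, 2, 3};
        int[] bigger = new int[6];                        // must make a brand new array
        System.arraycopy(old, 0, bigger, 0, old.length);  // and copy everything over
        // Inserting into a linked list, by contrast, just rewires a few pointers.
    }
}
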
Lab 03. Doubly Linked Lists.
September 6+7, 2018.
Our Story of LinkedLists
We started out with the raw IntList. This was a nice and simple data structure that we could use to store individual integers in a series of connected links.

Next came the Singly Linked List, which allowed us to encapsulate our raw IntList. The advantage of encapsulation is that it allows us to provide our users with methods for performing operations on our data structure, so that they do not need to be knowledgeable about the underlying intricacies of our implementation in order to use it; they can simply refer to an API, a list of our methods and their descriptions, and use our data structure like a black box.

Following that, we introduced Sentinels into our Singly Linked List. A sentinel is a "dummy" node, with an unimportant item, that is always included. Its purpose is to remove the need for null checks, which can make our code messy and are easy to miss in a couple of situations. Thus, instead of checking whether an instance is null, we can check whether we have returned back to the sentinel.

Finally, we allowed ourselves to have references in both directions, so that we could access both the previous and the next node. This was the Doubly Linked List.
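
Here is a sketch of where the story ends up, assuming integer items (class and method names are illustrative):

public class DoublyLinkedList {
    private class Node {
        int item;
        Node prev, next;
        Node(int item) { this.item = item; }
    }

    private final Node sentinel;

    public DoublyLinkedList() {
        sentinel = new Node(0);        // the sentinel's item is never used
        sentinel.prev = sentinel;      // empty list: sentinel points to itself
        sentinel.next = sentinel;
    }

    public void addFirst(int item) {
        Node node = new Node(item);
        node.next = sentinel.next;
        node.prev = sentinel;
        sentinel.next.prev = node;     // no null checks needed anywhere
        sentinel.next = node;
    }
}
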
Dis 02. Destructive/Non-Destructive IntLists.
September 4+5, 2018.
Java Files for the Discussion Exercises
Use these files to test your solutions after you're satisfied with them on pencil and paper! IntLists
You'll often see statements such as L.tail.tail = L.tail.tail.tail;, but sometimes it seems like the way you follow the "tails" is different for the left and right sides of the equals sign. Why is that?

Think about what the equals sign means. For example, let's look at an assignment statement such as this: String var = "object";. What does the box-and-pointer diagram look like?



We can see that var corresponds to a variable box in the diagram, and that the right hand side of the assignment statement was a String object.

In the same way, the left side of the statement L.tail.tail = L.tail.tail.tail; is a variable box in the diagram, and the right-hand side is an IntList object. Therefore, when we count the number of "tails" on the left-hand side, we stop counting one short, so that we are left with the last "tail" variable box, which we can then reassign. When we count for the right-hand side, we follow the final "tail" all the way to an object, and assign the left-hand side to reference it.

Walk through this simple example and make sure you understand how this statement works and why.
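
Here is one such example in code form, assuming a minimal IntList with head and tail fields (a sketch, not the official class):

class IntListDemo {
    static class IntList {
        int head;
        IntList tail;
        IntList(int head, IntList tail) { this.head = head; this.tail = tail; }
    }

    public static void main(String[] args) {
        // L is 1 -> 2 -> 3 -> 4
        IntList L = new IntList(1, new IntList(2, new IntList(3, new IntList(4, null))));
        // Right side: follow three tails, landing on the object holding 4.
        // Left side: follow one tail, stopping at the variable box L.tail.tail.
        L.tail.tail = L.tail.tail.tail;
        // L is now 1 -> 2 -> 4; the node holding 3 has been skipped over.
    }
}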



Destructive vs. Non-Destructive
When writing a destructive function, you want to destroy the original and so you should not be creating new objects. Thus, destructive functions do not use the new keyword.

On the other hand, non-destructive functions want to preserve the original and return a new version. Thus, non-destructive functions should use the new keyword to create fresh objects as the necessary modifications are applied.
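
Sketched as a hypothetical pair of methods (to drop into the IntList sketch above):

/** Destructive: mutates the original nodes in place; note, no `new`. */
static IntList incrList(IntList L, int x) {
    for (IntList p = L; p != null; p = p.tail) {
        p.head += x;
    }
    return L;
}

/** Non-destructive: leaves the original alone and builds a fresh copy. */
static IntList incrListNew(IntList L, int x) {
    if (L == null) {
        return null;
    }
    return new IntList(L.head + x, incrListNew(L.tail, x));
}
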
Lab 02. Git Demo.
August 30+31, 2018.
Today we did a git demo covering standard git commands and merge conflict resolution.

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp
    $ ls
    repo/  repo2/

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp
    $ cd repo

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ touch chickenpotpie

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ ls
    chickenpotpie  hw0/  lab1/  lab2/

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ vim chickenpotpie

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ cat chickenpotpie
    celery
    carrots
    flour
    covfefe

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git status
    On branch master
    Your branch is up-to-date with 'origin/master'.
    Untracked files:
      (use "git add <file>..." to include in what will be committed)

            chickenpotpie

    nothing added to commit but untracked files present (use "git add" to track)

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git add chickenpotpie
    warning: LF will be replaced by CRLF in chickenpotpie.
    The file will have its original line endings in your working directory.

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git status
    On branch master
    Your branch is up-to-date with 'origin/master'.
    Changes to be committed:
      (use "git reset HEAD <file>..." to unstage)

            new file:   chickenpotpie


    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git commit -m "adding my recipe!"
    [master 1153f51] adding my recipe!
    warning: LF will be replaced by CRLF in chickenpotpie.
    The file will have its original line endings in your working directory.
     1 file changed, 4 insertions(+)
     create mode 100644 chickenpotpie

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git status
    On branch master
    Your branch is ahead of 'origin/master' by 1 commit.
      (use "git push" to publish your local commits)
    nothing to commit, working directory clean

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git push
    Counting objects: 3, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (2/2), done.
    Writing objects: 100% (3/3), 399 bytes | 0 bytes/s, done.
    Total 3 (delta 0), reused 0 (delta 0)
    To cs61b-taa@derby.cs.berkeley.edu:students/cs61b-tap
       34d16ce..1153f51  master -> master

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ cd ..

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp
    $ cd repo2

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo2 (master)
    $ git pull
    remote: Counting objects: 4, done.
    remote: Compressing objects: 100% (2/2), done.
    remote: Total 3 (delta 0), reused 0 (delta 0)
    Unpacking objects: 100% (3/3), done.
    From derby.cs.berkeley.edu:students/cs61b-tap
       34d16ce..1153f51  master     -> origin/master
    Updating 34d16ce..1153f51
    Fast-forward
     chickenpotpie | 4 ++++
     1 file changed, 4 insertions(+)
     create mode 100644 chickenpotpie

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo2 (master)
    $ ls
    chickenpotpie  hw0/  lab1/  lab2/

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo2 (master)
    $ vim chickenpotpie

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo2 (master)
    $ cat chickenpotpie
    celery
    carrots
    flour
    chicken

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo2 (master)
    $ git commit -am "fix typo"
    [master 3e75d1e] fix typo
     1 file changed, 1 insertion(+), 1 deletion(-)

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo2 (master)
    $ git push
    Counting objects: 3, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (2/2), done.
    Writing objects: 100% (3/3), 285 bytes | 0 bytes/s, done.
    Total 3 (delta 1), reused 0 (delta 0)
    To cs61b-taa@derby.cs.berkeley.edu:students/cs61b-tap
       1153f51..3e75d1e  master -> master

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo2 (master)
    $ cd ..

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp
    $ cd repo

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ vim chickenpotpie

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git commit -am "add more ingredients"
    warning: LF will be replaced by CRLF in chickenpotpie.
    The file will have its original line endings in your working directory.
    [master warning: LF will be replaced by CRLF in chickenpotpie.
    The file will have its original line endings in your working directory.
    92a8e3f] add more ingredients
    warning: LF will be replaced by CRLF in chickenpotpie.
    The file will have its original line endings in your working directory.
     1 file changed, 2 insertions(+)

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git push
    To cs61b-taa@derby.cs.berkeley.edu:students/cs61b-tap
     ! [rejected]        master -> master (fetch first)
    error: failed to push some refs to 'cs61b-taa@derby.cs.berkeley.edu:students/cs61b-tap'
    hint: Updates were rejected because the remote contains work that you do
    hint: not have locally. This is usually caused by another repository pushing
    hint: to the same ref. You may want to first integrate the remote changes
    hint: (e.g., 'git pull ...') before pushing again.
    hint: See the 'Note about fast-forwards' in 'git push --help' for details.

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git pull
    remote: Counting objects: 5, done.
    remote: Compressing objects: 100% (2/2), done.
    remote: Total 3 (delta 1), reused 0 (delta 0)
    Unpacking objects: 100% (3/3), done.
    From derby.cs.berkeley.edu:students/cs61b-tap
       1153f51..3e75d1e  master     -> origin/master
    Auto-merging chickenpotpie
    CONFLICT (content): Merge conflict in chickenpotpie
    Automatic merge failed; fix conflicts and then commit the result.

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master|MERGING)
    $ cat chickenpotpie
    celery
    carrots
    flour
    <<<<<<< HEAD
    covfefe
    eggs
    love
    =======
    chicken
    >>>>>>> 3e75d1e0e0e85eb6a0b59b7f59e1801b655b1882

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master|MERGING)
    $ vim chickenpotpie

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master|MERGING)
    $ cat chickenpotpie
    celery
    carrots
    flour
    eggs
    love
    chicken

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master|MERGING)
    $ git commit -am "resolve merge conflict"
    [master 1716c63] resolve merge conflict

    Matthew@Matthew-PC MINGW64 ~/Desktop/temp/repo (master)
    $ git push
    Counting objects: 6, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (4/4), done.
    Writing objects: 100% (6/6), 603 bytes | 0 bytes/s, done.
    Total 6 (delta 2), reused 0 (delta 0)
    To cs61b-taa@derby.cs.berkeley.edu:students/cs61b-tap
       3e75d1e..1716c63  master -> master
    
Dis 01. Introduction to Reading and Writing Java!
August 28+29, 2018.
Primitives vs. Non-Primitives
We keep track of data using variables. Variables come in two flavors: primitives and non-primitives. Primitives in Java include only the following: byte, short, int, long, float, double, boolean, char. All other kinds of variables are non-primitives. Variables representing primitives store the appropriate raw value, while variables representing non-primitives store references to the actual data.

The creators of Java made this design decision for efficiency reasons. Imagine that you're at an interview for your dream job. The manager really loves you and is almost ready to hire you, but first, they'd like to ensure that everything you've said about yourself is true. And so, the manager would like to verify with your best friend that you have indeed been truthful. But now the question is, would it be appropriate for you to clone your best friend so that the manager can chat with them? Probably not...your friend might not appreciate that very much. It makes much more sense to provide the manager with a reference to your friend, a phone number perhaps. This is why Java handles "complicated" (non-primitive) variables using references.

This goes hand-in-hand with what Professor Hug calls the Golden Rule of Equals. When we are drawing box-and-pointer diagrams and a variable's contents are asked for, we provide whatever is inside that variable's corresponding box. This means that for primitives we copy the raw value, while for non-primitives we copy the arrow that we have drawn, such that our copied arrow points to the same destination. Given this distinction, if a function takes in both a primitive and a non-primitive as arguments, and it mangles both during the function call, which of the two (or both?) are actually changed once the function is over? See the example below to find out:
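
Here's a sketch of that exact experiment (names made up):

public class GoldenRule {
    static void mangle(int n, int[] a) {
        n = 99;      // changes only this function's copy of the raw value
        a[0] = 99;   // follows the copied arrow to the one shared array
    }

    public static void main(String[] args) {
        int x = 5;
        int[] arr = {5};
        mangle(x, arr);
        System.out.println(x);       // 5  -- the primitive is unchanged
        System.out.println(arr[0]);  // 99 -- the object was changed through the reference
    }
}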



Iteration and Recursion
Oftentimes, we want to write code that does something over and over again. We have two techniques for doing this: iteration and recursion.

Iteration involves using a loop. In Java, one kind of loop is the while loop. To evaluate it, we follow these steps in order:
  1. Evaluate the loop condition. If it is false the loop is over and execution continues starting after the closing curly bracket of the loop.
  2. If it is true, then we start at the top of the loop's body of code and execute those lines in order until we reach its end.
  3. At this point, we go back to step 1 and check the condition again.
It is important to note that even if the condition becomes false in the middle of executing the loop body, the loop does not magically quit immediately. The loop can only quit in between executions of its body (unless there is a break statement, an Exception is thrown, etc.). Another kind of loop is the for loop. This kind of loop was introduced because programmers noticed that loops typically contain the initialization of some counter variable, a check of some condition against that counter variable, and a modification of that counter variable as iterations occur. For loops simply bundle these three elements together in a visually convenient place. We evaluate for loops following these steps in order:
  1. Initialize the counter variable.
  2. Evaluate the loop condition. If it is false the loop is over and execution continues starting after the closing curly bracket of the loop.
  3. If it is true, then we start at the top of the loop's body of code and execute those lines in order until we reach its end.
  4. Adjust the counter value in the specified fashion, usually an increment or decrement.
  5. At this point, we go back to step 2 and check the condition again.
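
To make that bundling concrete, here is the same loop written both ways (a throwaway sketch):

class LoopDemo {
    public static void main(String[] args) {
        int i = 0;                  // 1. initialize the counter
        while (i < 3) {             // 2. check the condition
            System.out.println(i);  // 3. execute the body
            i += 1;                 // 4. adjust the counter, back to step 2
        }

        for (int j = 0; j < 3; j += 1) {
            System.out.println(j);  // identical behavior, bundled syntax
        }
    }
}
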
The other option we have is recursion. This occurs anytime we have a function and that function's body of code contains a statement in which it calls upon itself again. To prevent infinite recursion, we want our recursive functions to contain base cases, which are if statements that provide direct results of the simplest situations, and in all other cases our recursive calls should aim to move our complicated situation closer and closer to one of our base cases.

With the classic Fibonacci exercise (#3 on the worksheet), we saw how there may be many ways to solve the same problem and to arrive at the correct answer. Therefore, simply solving a problem one way is cute and all, but in the real world, that doesn't mean you're done--we want to find the "best" solution. What does "best" mean? It means code that is not only correct, but also fast, memory efficient, and clear to other human readers. In other words, in this class we will learn to consider time efficiency, space efficiency, and style/documentation.
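
For instance, here are two correct solutions, sketched from memory rather than taken from the worksheet:

class Fib {
    /** Clear but slow: recomputes subproblems, exponential time. */
    static long fibSlow(int n) {
        if (n <= 1) {
            return n;   // base cases
        }
        return fibSlow(n - 1) + fibSlow(n - 2);
    }

    /** Same answers in linear time and constant space. */
    static long fibFast(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i += 1) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }
}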