Sample solutions to the midterm study guide practice problems

Please only look at these solutions after you've tried the problems first. Otherwise, you're just kidding yourself.

1. var x = 1; infers the type of x to be int: it's equivalent to int x = 1; The static type is inferred automatically, so "abc"/x will not compile. dynamic x = 1; instructs the compiler to stop all type checking on x, so "abc"/x still compiles. dynamic x = 1; is closer to object x = 1; than to int x = 1;

2. The following function in C computes 2**n (2 to the nth power) in log(n) steps by binary factorization on n.

   int pw2(unsigned int n)
   {
      int ax = 1;      // accumulator
      int factor = 2;  // start with 2**1 = 2
      while (n>0)
      {
         if (n%2==1) ax *= factor;
         n = n/2;                   // shift over next bit
         factor = factor * factor;  // only need 2**2, 2**4, 2**8, 2**16, etc...
      }
      return ax;
   }

   Rewrite the function in a purely functional style, without loops, without destructive assignment and with a tail-recursive function wrapped inside a larger function.

>> int powerof2(unsigned int n)
   {
      int iter(int n, int ax, int factor)
      {
         if (n<1) return ax;
         else if (n%2==1) return iter(n/2, ax*factor, factor*factor);
         else return iter(n/2, ax, factor*factor);
      }// inner function
      return iter(n,1,2);  // body of outer function
   }

3. Consider the following Scheme/C program:

   (define (fold f identity L)
     (if (null? L) identity
         (f (car L) (fold f identity (cdr L)))))

   int fold(int (*f)(int,int), int identity, list L)
   {
      if (isnil(L)) return identity;
      else return f(car(L), fold(f,identity,cdr(L)));
   }
   // assume list defines a linked list struct, car(L) returns the first item
   // contained in the list, cdr(L) returns a pointer to the rest of the list,
   // and isnil(L) determines if L is an empty list.

   a. Explain what this function does. Give an example of how it works.

>> The function takes a binary operation f and applies it successively to the elements of the list. The identity is the result for the null case. For example, (fold (lambda (x y) (* x y)) 1 '(2 3 4 5)) will compute 2*(3*(4*(5*1))).

   b. Is this function tail-recursive?

   No. The last operation is the application of the function f, not the recursive call.

4. What is the difference between static and dynamic scoping? Why is static scoping almost always used?

>> Static (lexical) scoping looks up non-local (free) variables in the scope where the function is defined. Dynamic scoping looks them up in the scope in which each function is called. Consider

   int f() { return x+x; }
   void g() { int x = ...  f();  ... }

   Here, the variable x in g() should be local to g(). That is, its usage should not affect the meaning of other parts of the program. But if C were dynamically scoped, then x is not really "local" in g(), since it's clearly used elsewhere.

4a. I may also ask a question like the following: Suppose you're shown a new language that has C-like syntax but is not exactly C (MAIN is in all capitals):

   int x = 1;
   void f() { print(x); }
   int MAIN()
   {
      int x = 2;
      f();
   }

   Explain how you would use this program to determine if the language uses static or dynamic scoping.

>> It prints 1 if statically scoped, 2 if dynamically scoped. C uses static scoping. Note that the function f must have a FREE variable (x), else it would not be able to distinguish between static/dynamic scoping.

4b. Harder variant of scoping question: show how to emulate dynamic scoping using static scoping. Dynamic scoping is useful for temporarily changing the values of global variables (they must be global in order for them to have the global effect of dynamically scoped variables).
int x = 1;
int f() { return x; }
void g()
{
   int stackx = x;    // saves x on the runtime stack
   x = 2;             //// start of dynamic scope with new value for x
   printf("%d",f());  // prints 2 as if it were dynamically scoped
   x = stackx;        //// end of dynamic scope, value recovered from stack
}

5. What is a closure? Write a Python/Javascript/C++ closure function that takes no arguments, but which returns 2 the first time it's called, 4 the second time it's called, 8 the third time, 16 the fourth time, and so on. That is,

   print(f())   # prints 2
   print(f())   # prints 4, ...

>> A closure can be thought of as a pair of pointers: one to the source code of a function (i.e., a lambda term), and another to an environment of bindings of variables. You won't have a prayer unless you've read the "modularity, objects and state" handout.

   def powers_iterator():
       x = 1
       def next_power():
           nonlocal x
           x *= 2
           return x
       return next_power  #powers_iterator

   f = powers_iterator()

6. Determine the output of the following C++ program

#include <iostream>
int main()
{
   using namespace std;
   int x = 0;
   auto f = [&x](int y) mutable { x += y; };
   auto g = [x](int y) mutable { x += y; return x; };
   auto h = [x]() mutable { x = 1-x; return x; };
   f(1);
   cout << "x is now " << x << endl;
   cout << g(1) << endl;
   cout << g(1) << endl;
   cout << "and x is now " << x << endl;
   cout << h() << endl;
   cout << h() << endl;
   cout << h() << endl;
   cout << "the final value of x is " << x << endl;
   return 0;
}//main

>>> x is now 1
    1
    2
    and x is now 1
    1
    0
    1
    the final value of x is 1

Note that the g closure was formed before the first call to f, so the value of x that it captured is 0. f increments the x in main to 1 because it captured x by reference. But g and h captured x by value: they modify their own copy of x inside their own closures and do not affect the x in main. g is like an accumulator and h toggles between 1 and 0.

7. Explain why object oriented programming requires dynamic type information.

Object oriented programming is based on "hierarchies", or the difference between an abstract view (shape) and the concrete view (circle, rectangle). The compiler often only knows the abstract type (shape) of objects and not the actual type (this is also why the compiler cannot compute how much memory would be required and why OOP requires dynamic memory allocation). Thus the compiler cannot determine how to call certain functions like "area" on a specific shape. This decision must be made at runtime. But this means that at runtime, information must be available so the right version of the functions can be called. This is dynamic type information.
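To make this concrete, here is a small C# illustration (my own sketch, not part of the original answer; the names Shape, Circle, Rect and shapedemo are made up). The compiler only sees the static type Shape, so the choice of which Area to run must be made at runtime from type information stored with each object:

   using System;

   abstract class Shape { public abstract double Area(); }

   class Circle : Shape
   {
      double r;
      public Circle(double r) { this.r = r; }
      public override double Area() { return Math.PI * r * r; }
   }

   class Rect : Shape
   {
      double w, h;
      public Rect(double w, double h) { this.w = w; this.h = h; }
      public override double Area() { return w * h; }
   }

   class shapedemo
   {
      public static void Main()
      {
         Shape[] shapes = { new Circle(1), new Rect(2,3) };  // static type of each element is Shape
         foreach (Shape s in shapes)
            Console.WriteLine(s.Area());  // dynamic dispatch picks Circle.Area or Rect.Area
      }
   }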
7b. What is the advantage of "overloading" versus "overriding"?

1. Overloading is resolved by the compiler and therefore faster than overriding, which is resolved at runtime (dynamic dispatch).
2. Overloading is more statically type safe. The compiler uses static type information to make sure that the "right" version of the function is called. When the compiler doesn't know the exact type of objects, the following compiles:

   double x = (double)(object)"this is not a double";

8. Determine the output of the following C# program and explain why: (I suggest you run it to see if you're right!)

using System;
class superclass
{
   private int x = 1;
   protected static int y = 1;
   public void f() { Console.WriteLine("I'm the f in superclass\n"); }
   public virtual void g() { Console.WriteLine("I'm the g in superclass\n"); }
}
class subclass : superclass
{
   public new int x = 1;
   public new void f()       // static override
   { Console.WriteLine("I'm the f in subclass\n"); }
   public override void g()  // dynamic override
   {
      x++;  y++;
      Console.WriteLine("x is " + x + " and y is " + y);
   }
}
public class goodquestion
{
   public static void h(string x) { Console.WriteLine("x is a string!"); }
   public static void h(object x) { Console.WriteLine("x is an object!"); }

   public static void Main(string[] args)
   {
      superclass A;
      subclass B1;
      A = new subclass();   // A has different static and dynamic types
      B1 = new subclass();
      A.f();   // prints "I'm the f in superclass"
      A.g();   // prints x is 2 and y is 2
      B1.f();  // prints I'm the f in subclass
      B1 = (subclass)A;  // why is this cast necessary?  why is it valid?
                         // It is valid to cast a superclass object to a
                         // subclass object at compile time.  In this case
                         // it's also valid at runtime because A was indeed
                         // assigned a subclass object.  An explicit cast is
                         // required when going from super to subclass, but
                         // it's implicit from sub to superclass.
      B1.f();  // prints I'm the f in subclass
      object x = "abcd";
      h(x);    // prints "x is an object!" because overloading is resolved at
               // compile time, i.e., this is NOT dynamic dispatch.
   }
}

Please note: the meaning of the "new" keyword is rather specific to C# and not essential to understand. But it is important to understand the significance of virtual/override in C# (and in C++). Most importantly, it is important to understand that the two h functions OVERLOAD each other, not override, and which h is applied is determined by the compiler. Dynamic dispatch can only look at the dynamic type of the object that the function is called on, not at the type of further arguments of the function.

8.2 Can you write code to get the superclass.g() function?

>> You'll need an object that has both static and dynamic type superclass:

   superclass A = new superclass();
   A.g();

9a. Explain what, if anything, is wrong with the following Java class, and WHY: MUST EXPLAIN WHY.

   class bigobject<T>
   {
      T x;
      static T y;
      T[] A;
      bigobject(int n) { A = new T[n]; }
   }

>> Because of type erasure, a static variable can only have type Object, so the y variable in bigobject<Integer> and bigobject<String> cannot be distinguished. Thus Java does not allow static variables of generic type. Also, Java does not allow generic array creation: new T[n] is not allowed. This is due to the fact that arrays, which predate generics, do not use type erasure: an int[] and a String[] are distinguishable at runtime. But type erasure means that the generic array creation A = new T[n] is actually A = new Object[n], which means that the WRONG dynamic type information would be attached to the array at runtime (both an Integer[] and a String[] would become just Object[]).

9b. Explain what, if anything, is wrong if exactly the same class is compiled in C#.

The program compiles and runs without any problems. C# does not use type erasure. The constructor of the class accepts a hidden type argument when instantiated. In particular, generic arrays like T[] are actually treated differently underneath as having interface IList<T> where T is invariant. Covariance is therefore not allowed on generic arrays in C# (but allowed on non-generic arrays for backwards compatibility).
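For comparison (an added sketch, not part of the original answer; the demo class and values are made up), here is the same class in C#. Because C# generics are reified rather than erased, the static field and the generic array creation are both legal, and each constructed type gets its own copy of y:

   using System;

   class bigobject<T>
   {
      public T x;
      public static T y;   // one copy per constructed type: bigobject<int>.y vs bigobject<string>.y
      public T[] A;
      public bigobject(int n) { A = new T[n]; }   // fine: the array really is a T[] at runtime
   }

   class reifieddemo
   {
      public static void Main()
      {
         bigobject<int> b = new bigobject<int>(3);
         bigobject<int>.y = 5;
         bigobject<string>.y = "hello";           // a different static variable
         Console.WriteLine(b.A.GetType());        // prints System.Int32[]
         Console.WriteLine(bigobject<int>.y);     // prints 5
         Console.WriteLine(bigobject<string>.y);  // prints hello
      }
   }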
9c. Explain how template classes/functions in C++ carry no runtime overhead in terms of speed/memory (how is zero-overhead abstraction achieved)?

C++ instantiates templates at compile time. It makes a copy of each instantiated class, such as bigobject<int>, before compiling it, so the code you're running is exactly the same as if it were a non-generic class. The only "overhead" is a possibly larger executable due to redundant code, though newer compilers can also optimize away this problem.

9d. Determine the result of compiling the following C++ template, what (if any) errors would be emitted by the compiler, and why?

   template <class T>
   class C
   {
   public:
      T x;
      bool f() { return x.is_my_favorite_name_for_a_variable(); }
   };

The class compiles because generic-type variables (T x) are not type-checked until the template is instantiated. Then it type-checks each instantiation, and only if the f function is called on that instantiation. It will not compile when T is instantiated with an actual type, unless that type really does have a silly member function so named. Thus it's possible that the template class works for some types but not for others. This is the principal weakness of C++ templates and can't be fixed due to backwards compatibility (a lot of existing templates won't compile otherwise).

10. Compare the following implementations of polymorphic linked lists:

   class list { object head; list tail; }

   class list<A> { A head; list<A> tail; }

What are the advantages of using one form instead of another?

>> Consider the code:

   list<int> n = new list<int>();
   n.head = 1;
   int x = n.head + 1;                       // this compiles without type casting
   list<string> y = (list<string>)(n.head);  // this will not compile

But:

   list n = new list();   // (without generics)
   n.head = 1;
   int x = (int)(n.head) + 1;     // type casting needed
   string y = (string)(n.head);   // this compiles, with a runtime error

So there are two advantages to using generics over "object" everywhere:

1 (major): more compile-time type safety. The compiler has more information about the types of your structures, so there is no need for possibly unsafe type casts.
2 (minor): no need to check the safety of type casts at runtime, and therefore slightly more efficient.

The only possible argument for using object is that you can have a list containing values of different types (e.g. strings and ints), but that's the same as list<object>.

11. Describe what will happen with the following C# segments of code, in terms of any compiler errors or runtime errors, and EXPLAIN WHY.
// assume using System; and using System.Collections.Generic;

a. object[] A = new string[10];
   A[0] = 1;   // 1 is not a string but it's an object

>> This code compiles, but the second line will cause a runtime error (an ArrayTypeMismatchException).

b. List<object> B = new List<string>();   // List<T> is equivalent to ArrayList in Java

>> This code will not compile because the type variable of List<T> is not covariant (or contravariant), because the type variable T in the definition of List<T> does not appear exclusively as a return type (or as an argument type). You have B.Add, which needs the type as an input type, as well as reading an element back out (the indexer), which needs it as a return type. So one of the (little known) advantages of using List<T> over regular arrays is that they're more type safe. Java has the same behavior with regard to ArrayList.
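A side note that is not in the original answer: C# declares IEnumerable<T> with an out (covariant) type parameter, because T appears there only in output positions. So the assignment that is rejected for List<T> is accepted for IEnumerable<T>:

   using System.Collections.Generic;

   class variancedemo   // class name is made up
   {
      public static void Main()
      {
         // List<object> B = new List<string>();      // rejected: List<T> is invariant
         IEnumerable<object> E = new List<string>();  // accepted: IEnumerable<out T> is covariant
         foreach (object o in E)                      // E can only be read from, so the string
            System.Console.WriteLine(o);              // storage underneath cannot be violated
      }
   }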
12. Determine if the following program will compile and run, and EXPLAIN WHY.

using System;

interface I<T> { int f(T other); }

class A : I<A>
{
   protected int x = 2;
   public int f(A other) { return x - other.x; }
}

class B : A
{
   double y;
   public B(double z) { y = z; }
}

public class hardquestion
{
   public static void Main()
   {
      I<B> x = new B(2.5);
   }
}

12b. What minimal changes (without changing Main) should be made to get the program to work as intended?

>> This code will not compile because B inherits f from A, and so B also implements I<A>. But it does not implement I<B>, so there's a type error in Main (the function f it inherits from class A takes an A argument, not a B argument). It will run if the type parameter T is declared contravariant:

   interface I<in T> { int f(T other); }

This is a valid interface because T only appears as an argument type and not a return type. The IComparable<in T> interface of C# uses this feature. Most .NET interfaces have been upgraded to take advantage of covariance and contravariance where possible.

12c. What's the difference between (A*)p and dynamic_cast<A*>(p) in C++?

(A*)p only tells the compiler to treat the value p as a pointer to structure A; it has no meaning at runtime. dynamic_cast<A*>(p) is equivalent to (A)p in Java or C# in that it also checks at runtime whether p actually points to an A object, using the runtime type information that is required of OOP languages.

13. Explain what, if anything, is wrong with the following C# interface:

   interface I<A,B,C>
   {
      C f(A x);
      B g(C y);
   }

A type parameter can be contravariant (in) in the domain or covariant (out) in the codomain, but it can never be both covariant and contravariant. Since the type variable C occurs both as an argument type (in g) and as a return type (in f), it can only be *invariant* (no in or out for C); A could be declared in and B could be declared out.

14. This question concerns the option monad and you may consult the source code for the monad for this problem. C# contains a Stack<T> class that works as follows:

   using System.Collections.Generic;
   ...
   // in main:
   Stack<int> S = new Stack<int>();
   S.Push(1);
   int x = S.Pop();
   System.Console.WriteLine( S.Peek() );

This code fragment compiles but will throw a System.InvalidOperationException because after the one is popped, there is no value to peek (Peek returns the top value without removing it). Both .Pop and .Peek could throw this exception. You are to write an "object adaptor" for this stack class, Safestack<T>, with versions of push, pop, peek that return Nothing (None) instead of throwing exceptions (push can return void but you still need to adapt it).

14b. (This one's a bit hard, but it's good practice.) Write a function that takes an instance of Safestack<T> and a function of type Func<T,T,Option<T>>, pops the top two values from the stack, applies the function to them, and pushes the result back on the stack, if there is one. It should also return the value computed by the function.

   Option<T> apply_op<T>(Safestack<T> stack, Func<T,T,Option<T>> fun)

For example, if the top two values on the stack of integers are 1 and 3 (3 on top), a call to

   apply_op(stack, (x,y) => Option.Make(x-y))

will pop the 3, then the 1 from the stack, push 2 (3-1) on the stack, and return Some(2). However, if there were fewer than two values on the stack, or if the function fun returned None, then the function should still try to pop the stack twice (making the stack empty) and return None. You may only call push, pop and peek on the Safestack. Your program may never crash.
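The sample solution after the ////// divider below uses the course's Option monad, whose source code is not reproduced in this guide. So that the solution can be read on its own, here is a minimal sketch (my own reconstruction, not the handout's actual code) of an Option<T> supporting the operations the solution calls: Make, Nothing, map, and_then, pair and flatten. With this sketch, Console.WriteLine on an Option prints Some(value) or None, matching the comments in Main.

   using System;

   public class Option<T>
   {
      private readonly bool has;   // true for Some(x), false for None
      private readonly T val;      // the wrapped value when has is true
      private Option(bool h, T v) { has = h; val = v; }

      public static Option<T> Make(T x) => new Option<T>(true, x);                  // Some(x)
      public static readonly Option<T> Nothing = new Option<T>(false, default(T));  // None

      // apply f to the wrapped value, if there is one
      public Option<U> map<U>(Func<T,U> f) =>
         has ? Option<U>.Make(f(val)) : Option<U>.Nothing;

      // like map, but f itself returns an Option (monadic bind)
      public Option<U> and_then<U>(Func<T,Option<U>> f) =>
         has ? f(val) : Option<U>.Nothing;

      // combine two Options with a binary function; None if either one is None
      public Option<U> pair<S,U>(Option<S> other, Func<T,S,U> f) =>
         and_then(x => other.map(y => f(x,y)));

      public override string ToString() => has ? "Some(" + val + ")" : "None";
   }

   // non-generic helper so that Option.Make(e) can infer the type argument,
   // as the solution's code does
   public static class Option
   {
      public static Option<T> Make<T>(T x) => Option<T>.Make(x);
   }

   // flatten an Option<Option<T>> into an Option<T>, written here as an extension method
   public static class OptionFlatten
   {
      public static Option<T> flatten<T>(this Option<Option<T>> oo) => oo.and_then(o => o);
   }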
///////
using System;
using System.Collections.Generic;

public class Safestack<T>
{
   private Stack<T> inner = new Stack<T>();   // the default constructor suffices

   // it'd be ok to have push return void, but I chose to return `this`
   // so I can optionally chain together a series of operations as in
   // stack.push(1).push(2).push(3)...
   public Safestack<T> push(T x)
   {
      inner.Push(x);
      return this;
   }

   public Option<T> peek()
   {
      Option<T> answer;
      try { answer = Option.Make(inner.Peek()); }
      catch (Exception e) { answer = Option<T>.Nothing; }
      return answer;
   }//peek

   public Option<T> pop() => peek().map(v => { inner.Pop(); return v; });
}//Safestack

public class safestack
{
   static Option<T> apply_op<T>(Safestack<T> stack, Func<T,T,Option<T>> f) =>
      stack.pop()
           .pair(stack.pop(),
                 (x,y) => f(x,y).map(z => { stack.push(z); return z; }))
           .flatten();

   // alternative:
   static Option<T> apply_op2<T>(Safestack<T> stack, Func<T,T,Option<T>> f) =>
      stack.pop()
           .and_then(x => stack.pop()
              .and_then(y => f(x,y)
                 .and_then(z => {
                    stack.push(z);
                    return Option.Make(z);  // and_then requires a lambda that returns an Option
                 })));

   public static void Main()
   {
      Safestack<int> stack = new Safestack<int>();
      stack.push(5).push(1).push(3);   // 3 on top of stack
      var result = apply_op(stack, (x,y) => Option.Make(x-y));
      Console.WriteLine(result);       // prints Some(2)
      result = apply_op2(stack, (x,y) => Option.Make(x*y));
      Console.WriteLine(result);       // prints Some(10)
      result = apply_op(stack, (x,y) => Option.Make(x/y));
      Console.WriteLine(result);       // prints None
   }//main
}//safestack
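As a final usage note (an addition, not part of the original solution), the unsafe fragment from question 14 can be rewritten against Safestack so that the failing Peek becomes a None value instead of an exception:

   Safestack<int> S = new Safestack<int>();
   S.push(1);
   Option<int> x = S.pop();                // Some(1)
   System.Console.WriteLine( S.peek() );   // prints None instead of throwing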