---- Answers to sample questions to prepare for the final exam ----
0. The last two problems on the midterm, reproduced here:
Suppose that in C#, classes B and C are both subclasses of A, and
that B and C are otherwise unrelated. Assume also that:
A x = new B();
B y = new B();
Determine if the following lines are valid, and if not, whether a compile-time
or runtime error will be generated (consider the lines independently).
Explain your answer.
a. C z = (C)x;
b. C z = (C)y;
>>>
Line a. will compile, since the static type of z (C) is a subclass of
the static type of x (A), so x could conceivably hold a C. However, since
x's real or runtime type is B, not C, this line will cause a runtime
exception. Line b. will not compile, because C and B are incompatible
types. That is, the compiler already has enough information, from the way
the variables were declared, to see that the line does not type-check.
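The same behavior can be checked in Java, whose cast rules match C#'s here; this is a minimal sketch with the class names mirroring the question:

```java
// Hypothetical minimal hierarchy mirroring the question.
class A {}
class B extends A {}
class C extends A {}

public class CastDemo {
    public static void main(String[] args) {
        A x = new B();
        try {
            C z = (C) x;   // compiles (an A might hold a C), but x is really a B
        } catch (ClassCastException e) {
            System.out.println("runtime cast failure");
        }
        // B y = new B();
        // C z2 = (C) y;   // rejected at compile time: B and C are unrelated siblings
    }
}
```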
---
In C#, static and dynamic dispatch are distinguished using
certain keywords (virtual, override and new). Suppose you are
learning a new language that has C#/Java style syntax, but which does
not have these keywords. You look in your notes and see that you had
copied some code that the professor had written on the board:
class super
{
public void f() { print("super f"); }
}
class sub : super
{
public void f() { print("sub f"); }
}
Unfortunately, you dozed off afterwards and don't remember if the
professor said the language used static or dynamic dispatch. You need
to conduct an experiment to find the answer yourself. Write a code
fragment that will behave differently depending on whether static or
dynamic dispatch is used. {\bf Clearly indicate which is which.} You don't
have to write a ``main'' function - just the lines that matter.
>>>
super A = new sub();
A.f();
Static dispatch will print "super f"; dynamic dispatch will print "sub f".
Incidentally, you might have noticed that we didn't really use the
C# keywords new, virtual, override that much. That's because we used
interfaces, and never had to "override" any existing method. In old C++,
one had to use lines such as "virtual void f() = 0; " to define
an interface.
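The experiment can also be run in Java (class names are capitalized here since "super" is a Java keyword). Java always uses dynamic dispatch for instance methods, so it prints "sub f"; in a static-dispatch language the same fragment would print "super f":

```java
class Super {
    public String f() { return "super f"; }
}
class Sub extends Super {
    @Override
    public String f() { return "sub f"; }
}
public class DispatchDemo {
    public static void main(String[] args) {
        Super a = new Sub();
        // Java dispatches on the runtime type (Sub), not the static type (Super).
        System.out.println(a.f());  // prints "sub f"
    }
}
```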
1. Given the following:
interface I
{
I f();
}
class A : I
{
public virtual I f() { return this; }
}
Explain what's wrong with the following lines:
A x = new A();
A y = x.f();
>>>
The second line will not compile because the declared return type of x.f()
is I, not A. If we had written A y = (A)x.f(), it would compile, since
A implements I (so the downcast is plausible to the compiler). It would
also succeed at runtime in this case, since the returned object is indeed
an A object.
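A Java sketch of the same situation (the class is named AImpl here to avoid clashing with the demo class name):

```java
interface I { I f(); }

class AImpl implements I {
    public I f() { return this; }
}

public class ReturnTypeDemo {
    public static void main(String[] args) {
        AImpl x = new AImpl();
        // AImpl y = x.f();      // rejected: f() is declared to return I, not AImpl
        AImpl y = (AImpl) x.f(); // compiles, and succeeds at runtime since the
                                 // object really is an AImpl
        System.out.println(y == x);  // prints true
    }
}
```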
2. In the visitor pattern we had a critical function:
public object accept(visitor v) { return v.visit(this); }
Where was this function defined (in what class(es) does it exist?)
What is "this" referring to?
>>>
The function is placed in every visitee subclass. "this" refers to
the visitee, the data object to be visited. A subtle point about this
function is that, if you look at our examples such as the food visitors,
you'll see that there's no accept in the "foodbase" superclass. Why
not put it in the base superclass, and just have every subclass inherit it?
It's because of "this" - the type of "this" in foodbase would be foodbase,
and not one of the subclasses (meat, fruit, vegetable). The visitor objects
can only visit one of these specific subclasses.
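The point about "this" can be sketched in Java; the class and method names below are illustrative stand-ins for the food example, not the exact code from class:

```java
interface Visitor {
    Object visit(Meat m);
    Object visit(Fruit f);
}

abstract class FoodBase {
    // No accept here: inside FoodBase, "this" would have static type FoodBase,
    // and the visitor only accepts the specific subclasses.
}

class Meat extends FoodBase {
    public Object accept(Visitor v) { return v.visit(this); }  // this : Meat
}
class Fruit extends FoodBase {
    public Object accept(Visitor v) { return v.visit(this); }  // this : Fruit
}

class NameVisitor implements Visitor {
    public Object visit(Meat m)  { return "meat"; }
    public Object visit(Fruit f) { return "fruit"; }
}

public class VisitorDemo {
    public static void main(String[] args) {
        FoodBase food = new Fruit();
        // accept must be called through the subclass, where "this" has the
        // specific type the visitor's overloads can dispatch on.
        System.out.println(new Fruit().accept(new NameVisitor()));  // prints "fruit"
    }
}
```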
3. Explain the difference between natural and artificial polymorphism.
Give an example of each.
>>>
Polymorphism is first of all a characteristic of the algorithm, not of
the program or programming language. Some algorithms, such as finding
the length of a linked list, are polymorphic by nature, since the
algorithm never looks at what's stored in the list. Other algorithms,
such as sorting a list, also have the potential to work for many kinds of
lists, but only if we supply the appropriate definition of what it
means for two elements to be "<" and "==" to each other. Such
procedures are artificially polymorphic. Programming languages have
mechanisms that allow you to build this kind of abstraction.
Inheritance is one method: one can define different procedures for
"<" and "==" in subclasses. In "simpler" languages such as Scheme,
Perl or pure C, one can achieve this kind of polymorphism by passing a
function parameter to the procedure - for example, one that defines
"<" for a sorting procedure.
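Both kinds can be sketched in Java. The length function is naturally polymorphic; the sort becomes artificially polymorphic by passing the "<" definition as a Comparator (the function-parameter approach mentioned above):

```java
import java.util.Arrays;
import java.util.Comparator;

public class PolyDemo {
    // Naturally polymorphic: length never inspects the elements.
    static <T> int length(T[] a) { return a.length; }

    public static void main(String[] args) {
        String[] words = { "pear", "fig", "apple" };
        System.out.println(length(words));  // prints 3

        // Artificially polymorphic: sorting needs a supplied notion of "<",
        // here "shorter string comes first", passed in as a Comparator.
        Arrays.sort(words, Comparator.comparingInt(String::length));
        System.out.println(Arrays.toString(words));  // prints [fig, pear, apple]
    }
}
```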
Please understand this by putting it in your own words. Don't just
memorize it. See question 5 below (hint: it's asking the same question,
but from a different angle).
4. Explain in your own words one advantage of parametric polymorphism
over inheritance polymorphism. That is, what's the difference between
using a type parameter <T>, and declaring your variables (for example)
as "A x" instead of "object x"?
>>>
The basic problem with the inheritance approach to polymorphism is that
we lose the ability to type-check code at compile time. As you should
know well by now, many errors are only runtime errors, not compile-time
errors. At compile time, all that is known is the superclass of a variable,
not its actual type. If we type a variable as "object", then it could be
anything and we effectively lose all ability to catch type errors statically.
Generics, or parametric types, give us the ability to type check code
at compile time. That is, we define a parametric class such as
class yourclass<T>
{ ...
but when we use it, we have to instantiate it with an actual type, as in
yourclass<int> x = ...
yourclass<string> y = ...
Thus the compiler can see the specific type information for each
object. But parametric types are not always better than inheritance,
since inheritance lets you define different versions of procedures for
different subtypes. The following question goes into this further ...
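A small Java sketch of the contrast, using the standard List class rather than a custom one:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    public static void main(String[] args) {
        // "object"-style (raw) typing: type errors surface only at run time.
        List raw = new ArrayList();
        raw.add("hello");
        // Integer n = (Integer) raw.get(0);  // compiles, but fails at run time

        // With a type parameter, the compiler sees the element type.
        List<String> typed = new ArrayList<>();
        typed.add("hello");
        // typed.add(42);                     // rejected at compile time
        String s = typed.get(0);              // no cast needed
        System.out.println(s);
    }
}
```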
5. Some people are of the opinion that untyped languages such as Scheme and
Perl are better for implementing polymorphism, and that types just get
in the way. That is, in Scheme for example, one can define the length
function as:
(define (length l) (if (null? l) 0 (+ 1 (length (cdr l)))))
There's no mention of types and obviously the function will work with
any kind of list. Types, even parametric types, simply get in the
way of programming. Regardless of whether you agree with this opinion,
there are issues that this view is not taking into consideration.
What are they?
>>>
There are two major problems:
1. Without types, we lose the ability to catch many errors at compile
time, allowing non-logically-structured programs to run.
2. For procedures that are naturally polymorphic this opinion has merit.
One need not be worried about supplying type information in
Scheme/Perl. But "artificial" polymorphism requires one to build
a layer of abstraction above different algorithms. Having a typeless
language won't help here - you'll still have to supply the different
algorithms (e.g. for both integer and rational equality).
The role of parametric polymorphism (generics) is not clear cut. The
advantage it has over the inheritance method is that it allows static
type checking, using the type inference rules we talked about.
However, parametric polymorphism won't allow you to *build*
abstraction either. That is, using a type variable alone won't
resolve the differences between integer and rational equality, for
example. One must still provide different algorithms. Thus parametric
polymorphism is only useful in implementing routines that are already
(naturally or rendered) polymorphic. In other words, in untyped
languages such as Scheme or Python it's trivial to write naturally
polymorphic functions. In a typed language, one still wishes to write
such functions, but in a way that's consistent with the type system
and its advantages. This is the dilemma that parametric polymorphism
attempts to address.
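The dilemma can be made concrete: the Scheme length function above, rewritten against a generic list type, stays naturally polymorphic while remaining statically type-checked. The Node class and names here are illustrative:

```java
public class LengthDemo {
    // A generic linked-list node; length never looks at the data field.
    static class Node<T> {
        T data; Node<T> next;
        Node(T data, Node<T> next) { this.data = data; this.next = next; }
    }

    // The direct analogue of the Scheme definition, but type-checked.
    static <T> int length(Node<T> l) {
        return l == null ? 0 : 1 + length(l.next);
    }

    public static void main(String[] args) {
        Node<Integer> nums = new Node<>(1, new Node<>(2, null));
        System.out.println(length(nums));  // prints 2; works for any element type
    }
}
```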
Sample AspectJ problems:
6. Write a pointcut definition to pick out attempts to access any
public variable of a class A. By pointcut *definition* I mean
something in the form
pointcut name(...) : ...
>>> pointcut publicget() : get(public * A.*);
7. Assume that classes C and D both have a function void f(int).
Write a pointcut expression that picks out calls to either function. Also
capture the argument passed with the pointcut.
>>>
pointcut fcall(int x) : (call(void C.f(int)) || call(void D.f(int))) && args(x);
8. Given class:
class B
{
private int x;
public B(int x0) { x=x0; } // constructor
public int f(int n)
{ if (n<2) return 1; else return n*f(n-1); }
public void g(int y)
{ System.out.println(x+f(y)); }
}
a. Write a pointcut that picks out the "execution" of the constructor of B.
(when the constructor is called, the object doesn't exist yet).
>>> execution(B.new(..))
b. Write a pointcut that picks out the initial call to f (as opposed to
recursive calls).
>>> call(int B.f(int)) && !withincode(int B.f(int))
c. The following pointcut and advice tries to change the parameter passed to
g. Explain what's wrong with it the way it's written. DON'T JUST CORRECT
IT; *EXPLAIN* WHY IT'S WRONG THE WAY IT IS!
before(int y) : call(void B.g(int)) && args(y)
{
g(y+1);
}
(hint: there are three problems that need to be addressed).
>>>
1. A before advice won't prevent the original computation from being
carried out, unless it throws an exception. This should be an
around advice.
2. The object (instance of B) that g was called on was not captured.
Need to use the "target" pointcut
3. The call to g within the advice will cause the advice to activate
again.
The correct way to write this would be
void around(int y, B n): call(void B.g(int)) && args(y)
&& target(n) && !cflow(adviceexecution())
{
n.g(y+1);
}
or
void around(int y, B n): call(void B.g(int)) && args(y)
&& target(n) && !within(nameofaspect)
{
n.g(y+1);
}
or
void around(int y, B n): call(void B.g(int)) && args(y) && target(n)
{
proceed(y+1,n);
}
// note that proceed always takes the same params as the advice, regardless
// of what the method g takes. It just means "proceed with the following
// information from the given pointcut".
d. Write an advice that throws an error if the g function is called from
anywhere except main (public static void *.main(..))
>>>
before() : call(void B.g(int)) && !withincode(public static void *.main(..))
{
throw new Error("can't call from there");
}
9. Explain the difference between the "withincode" and "cflow" pointcuts.
>>>
withincode examines the static source code of a function, whereas cflow
is concerned with what happens during run time. For example, if
we have
void f()
{ g(); }
void g()
{ h(); }
and f(); is called.
then the call to h from g will be recognized by cflow(call(void *.f()))
but not by withincode(void *.f()). Only the call to g will be recognized
by withincode(void *.f()), since that call appears lexically within the
source code of f.
Also remember the difference between signature pointcuts and property
pointcuts. That is, never use something like withincode and cflow
alone. Use them only in conjunction with call/execution/set/get, or something
that defines the join point more precisely.