Software Construction: Data Abstraction Flashcards
To build a call graph
look at all the method calls in a method definition and draw a line on the graph from the method to each method it calls.
Hypothesis driven debugging
making careful educated guesses, with plans for how to validate or invalidate them.
If you notice a value (or field) is showing up as wrong, then
there are three typical problems to check: you’re not retrieving the value correctly, you’re not setting the field correctly, or you’re never setting it at all.
If you are changing a parameter’s value inside a method, and aren’t seeing that value reflected once the method has returned
check to make sure you’re not just changing a local variable instead of the object referenced by the parameter.
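This distinction can be sketched in a few lines, using a hypothetical Counter class: reassigning the parameter variable changes only a local reference, while mutating the object it refers to is visible to the caller.

```java
// Hypothetical Counter class, for illustration only.
public class ParamDemo {
    static class Counter {
        int count;  // defaults to 0
    }

    // Reassigns the local parameter variable only: the caller's
    // reference still points at the original object.
    static void reassign(Counter c) {
        c = new Counter();  // changes only the local variable c
        c.count = 99;
    }

    // Mutates the object the parameter refers to: the caller sees it.
    static void mutate(Counter c) {
        c.count = 99;  // changes the shared object's field
    }

    public static void main(String[] args) {
        Counter counter = new Counter();
        reassign(counter);
        System.out.println(counter.count);  // still 0
        mutate(counter);
        System.out.println(counter.count);  // now 99
    }
}
```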
If you’re looking for an object based on some field, and not finding that object,
make sure you are comparing against the field values, rather than accidentally comparing object references. (We will see an even more elegant way to do this in the second course)
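A minimal sketch of the difference, using a hypothetical Student class: `equals()` compares the field's contents, while `==` compares object references and fails for equal-but-distinct String objects.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Student class, for illustration only.
public class FindDemo {
    static class Student {
        String name;
        Student(String name) { this.name = name; }
    }

    // Correct: compares the field's value with equals()
    static Student findByName(List<Student> students, String name) {
        for (Student s : students) {
            if (s.name.equals(name)) {
                return s;
            }
        }
        return null;
    }

    // Buggy: compares object references, so two distinct String
    // objects with the same contents do not match.
    static Student findByNameBroken(List<Student> students, String name) {
        for (Student s : students) {
            if (s.name == name) {
                return s;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Student> students = new ArrayList<>();
        students.add(new Student("Alice"));
        String query = new String("Alice");  // equal contents, different reference
        System.out.println(findByName(students, query) != null);        // true
        System.out.println(findByNameBroken(students, query) != null);  // false
    }
}
```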
An exception’s stack trace message
tells us where the execution died (in the method at the top, right below the message)
An index out of bounds exception means
you tried to access off the end of a collection. Remember to check whether you are in range (likely by using the collection.size() method) before accessing a collection.
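One way to apply that range check, sketched with a hypothetical helper:

```java
import java.util.ArrayList;
import java.util.List;

public class BoundsDemo {
    // Returns the element at index, or null when index is out of range,
    // instead of letting an IndexOutOfBoundsException be thrown.
    static String safeGet(List<String> items, int index) {
        if (index >= 0 && index < items.size()) {
            return items.get(index);
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> items = new ArrayList<>();
        items.add("a");
        System.out.println(safeGet(items, 0));  // a
        System.out.println(safeGet(items, 1));  // null (off the end)
    }
}
```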
A null pointer exception means
you tried to call a method on, or access a field of, a reference that does not point to an instantiated object (i.e., the reference is null). Before using an object, it’s best to check that it exists! You can use if (theObject != null) to check.
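The null check in action, sketched with a hypothetical Book class:

```java
// Hypothetical Book class, for illustration only.
public class NullCheckDemo {
    static class Book {
        String title;
        Book(String title) { this.title = title; }
    }

    // Guards against a NullPointerException: only dereference the
    // object after checking that it exists.
    static int titleLength(Book book) {
        if (book != null) {
            return book.title.length();
        }
        return 0;  // nothing to measure
    }

    public static void main(String[] args) {
        System.out.println(titleLength(new Book("Java")));  // 4
        System.out.println(titleLength(null));              // 0
    }
}
```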
Remember to keep checking hypotheses even if you have found your bug.
There may be more than one thing wrong.
There are four phases in our design approach (often applied in a circular way):
specification of public interface, usage scenarios, test specification, and finally implementation.
Specification means
determining exactly what the public operations should be, and what they should expect/assume when called (REQUIRES), what they change (MODIFIES) and what they produce (EFFECTS).
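A sketch of what such a specification looks like in practice, written as comments on a hypothetical BankAccount class (class and method names are illustrative, not from the course):

```java
// Hypothetical BankAccount class, for illustration only.
public class BankAccount {
    private int balance;  // in cents

    // REQUIRES: amount > 0
    // MODIFIES: this
    // EFFECTS: adds amount to the balance
    public void deposit(int amount) {
        balance = balance + amount;
    }

    // EFFECTS: returns the current balance
    public int getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(500);
        System.out.println(account.getBalance());  // 500
    }
}
```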
Usage scenarios means
looking at all the ways that your abstraction will be used – this will perhaps motivate new operations, which will then need to be specified
Test specification means
writing a thorough and rigorous test suite for every operation prior to implementation
Implementation involves
deciding on an internal representation of the data in the abstraction, and then implementing the public and supporting private methods to make it work. The tests should all pass when the implementation is done
a REQUIRES specification of a method indicates
the preconditions of running that method.
typically we do not include trivial preconditions (such as the parameter must be non-null), but instead focus on
the conditions required for the method to run correctly.
If a method can be called regardless of the state of the program, then
no REQUIRES clause is needed.
mutability means that we can
change the state of an object after it has been created, and that change will persist. We saw this with variables earlier in this course.
The MODIFIES clause indicates
whether a method, or any method it calls, mutates any object.
If a method mutates its own object, we indicate that it MODIFIES:
this
If a method mutates another object, we indicate that it modifies
that other object.
If A.a() calls B.b(), and B.b() specifies that it modifies this, then A.a() would specify that it modifies
the B object (the specific object on which b() was called).
If nothing is modified in a method, or any method it calls, then we
do not include the MODIFIES clause in the specification.
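The flashcards above can be sketched in code. Here b() mutates its own object (MODIFIES: this), so a(), which calls it, must declare that it modifies the B object it was given. The class names A and B match the flashcard; the field and parameter names are hypothetical.

```java
public class ModifiesDemo {
    static class B {
        int value;

        // MODIFIES: this
        // EFFECTS: increments value
        void b() { value++; }
    }

    static class A {
        // MODIFIES: target  (the B object passed in)
        // EFFECTS: calls target.b(), which mutates target
        void a(B target) { target.b(); }
    }

    public static void main(String[] args) {
        B bee = new B();
        new A().a(bee);
        System.out.println(bee.value);  // 1
    }
}
```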
The EFFECTS clause indicates
the purpose of the method – describing the work that the method does.
For public methods, the EFFECTS clause only describes
publicly visible effects – implementation details are not described.
Effects clauses for private methods have
more flexibility in their level of detail. They typically still describe what the method does rather than how, but they may mention internal details of the class implementation more freely than the specifications for public methods, because these specifications are read by developers of the class, not by its users.
Thinking through usage scenarios involves
considering all the situations in which another class would make use of the data abstraction we are creating. We want to think through not just the operations that are available, but the order in which they will be called. This, in turn, helps us make sure we have all the methods we need.
To sketch out usage scenarios
create a main function (IntelliJ shortcut: psvm) either in a new class, or an existing class (typically we make a new class for this purpose).
Into that main function
place calls to the data abstraction in all the arrangements and combinations you can think might be valid. Think through every situation in which the data abstraction will likely be used, and make a little usage scenario for that combination. These scenarios do not have to be as exhaustive as the test scenarios we will do in the next part. Instead they should capture the spirit of the usage of the data abstraction.
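The steps above might look like this, with a hypothetical ToDoList abstraction: each block of calls in main walks through one way a client might realistically use the class.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical ToDoList abstraction, for illustration only.
public class UsageScenarios {
    static class ToDoList {
        private final List<String> tasks = new ArrayList<>();
        void addTask(String task) { tasks.add(task); }
        void completeTask(String task) { tasks.remove(task); }
        int numTasks() { return tasks.size(); }
    }

    public static void main(String[] args) {
        // Scenario 1: add some tasks, then complete one
        ToDoList list = new ToDoList();
        list.addTask("write spec");
        list.addTask("write tests");
        list.completeTask("write spec");
        System.out.println(list.numTasks());   // 1

        // Scenario 2: completing a task that was never added
        ToDoList empty = new ToDoList();
        empty.completeTask("nothing here");
        System.out.println(empty.numTasks());  // 0
    }
}
```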
Test driven design (typically a part of agile software development) involves
writing tests right after writing your specifications, then implementing your code until the tests all pass.
Tests are a more comprehensive way of
checking the conformance of your implementation to its intended specification.
Black Box Testing is a way of testing software
without knowing any of the internal structure or implementation details of the system being tested. In this case, we are interested in testing the functionality of the software.
White Box Testing is a way of testing software
that actually tests the internal structure and implementation details of the system. In this case, we are no longer interested in testing the functionality of the software – rather, we want to test the actual details of implementation.
A test suite is the
collection of test cases that, together, thoroughly (though not 100% exhaustively – that’s impossible for all but the most trivial cases) test the data abstraction.
Create a test case for everything happening in which clause?
The EFFECTS clause. Also create a test case for each combination of inputs and outputs.
Create tests such that
each branch of your method is followed (this is loosely related to branch coverage, though at this stage we are just checking branches generally, since we do not yet have an implementation against which to design a test case with proper branch coverage)
Write test cases to test the edges of
ranges (boundary checking) to check for typos in range checks or conditional statements.
Test cases should be named after the
method they are testing, and after the scenario they are testing, to help you keep track of what you have tested.
Test cases always specify
the inputs, and the expected outcome
Name test classes “Test” and then
the name of the class you are testing
Each test is basically structured:
(1) set up,
(2) call the method to test,
(3) check that the expected outcomes occurred.
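A plain-Java sketch of that three-part shape, using a hypothetical Playlist class; with JUnit the body would live in an @Test method and the check would use an assertion, but the structure is the same.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Playlist class, for illustration only.
public class TestShapeDemo {
    static class Playlist {
        private final List<String> songs = new ArrayList<>();
        void addSong(String song) { songs.add(song); }
        int size() { return songs.size(); }
    }

    public static void main(String[] args) {
        // (1) set up
        Playlist playlist = new Playlist();

        // (2) call the method to test
        playlist.addSong("Blackbird");

        // (3) check that the expected outcomes occurred
        System.out.println(playlist.size() == 1);  // true
    }
}
```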
Pull duplicated lines of set up declaration code into
fields.
JUnit looks for the @Test annotation to
identify methods to call as tests
@Before (JUnit 4) or @BeforeEach (JUnit 5) lets you
annotate a set up method that will be run before each test method. The annotated method does not need to be called from within the test methods – it runs automatically.
Pull all duplicated lines of set up behaviour into
that @Before/@BeforeEach method.
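A plain-Java sketch of this pattern, with a hypothetical Account class: the duplicated set up is pulled into one setUp() method that rebuilds the fixture for each test. In real JUnit you would annotate setUp() with @BeforeEach (or @Before in JUnit 4) and it would run automatically instead of being called by hand, as it is here to keep the sketch self-contained.

```java
public class AccountTests {
    // Hypothetical Account class, for illustration only.
    static class Account {
        int balance;
        void deposit(int amount) { balance += amount; }
    }

    // shared fixture, pulled into a field
    static Account account;

    // In JUnit this would be annotated @BeforeEach and run automatically.
    static void setUp() {
        account = new Account();
        account.deposit(100);
    }

    static boolean testDepositAddsToBalance() {
        setUp();  // fixture rebuilt before this test
        account.deposit(50);
        return account.balance == 150;
    }

    static boolean testFreshAccountEachTest() {
        setUp();  // fixture rebuilt again: no state leaks between tests
        return account.balance == 100;
    }

    public static void main(String[] args) {
        System.out.println(testDepositAddsToBalance());  // true
        System.out.println(testFreshAccountEachTest());  // true
    }
}
```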
assertTrue, assertFalse and assertEquals can also be used to
check the state of objects and values. If any of these assertions fails, the test fails.
Conditional statements can be used to check the values or state of an object to verify that it has been modified correctly – and if it has not,
the fail() method can force the test to fail.
JUnit is a mechanism for which type of testing?
Black box testing: testing abstractions from the outside.
Writing tests with JUnit
For each test case, we write a method in our Test class, and annotate it with @Test (that will cause IntelliJ to add either JUnit 4 or JUnit 5 into the classpath – either is fine for this course)
We write the set up code for each test, and put it in the set up method, and annotate that method with @Before (or @BeforeEach)
Then we write comments to indicate the 3 parts of the test: (1) check that the set up is correct, (2) call the method to test, (3) check that the outcomes were expected. Write these comments as specifically as possible for what you are testing.
Checking the setup involves checking that the work expected by the method has not already been done. Sometimes this can be intuited, but typos creep into code, and especially if more than one person is collaborating, behaviour can suddenly pop up anywhere. Best to check that the set up is clean.
Calling the method to test is straightforward. Sometimes this is integrated into the next stage, if you are directly checking the output of the method. Others prefer to save the output of the method and then check it on a separate line of code; that can make troubleshooting easier later.
Checking the expected outcome involves looking at the outputs of the method and writing conditions or assertions that will fail if the outcome is not correct.
Remember: A test will pass unless you make it fail, so having a passing test isn’t a great accomplishment. You should have a correctly passing test!
Breakpoints can be set in tests, just like in “regular” code. Sometimes it is helpful to
pull calls to methods into their own line of code, rather than placing them inside assert statements, so that you can more easily isolate them for breakpoints.
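A small sketch of this, with a hypothetical total() method: saving the result on its own line gives you a place to set a breakpoint and inspect the value before the check runs.

```java
public class BreakpointDemo {
    // Hypothetical method under test: sums an array.
    static int total(int[] xs) {
        int sum = 0;
        for (int x : xs) {
            sum += x;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Harder to debug: the call buried inside the check, e.g.
        //   assertEquals(5, total(new int[]{2, 3}));

        // Easier: pull the call onto its own line, set a breakpoint
        // here, and inspect `result` before the check runs.
        int result = total(new int[]{2, 3});
        System.out.println(result == 5);  // true
    }
}
```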
A problem with the branching in complicated conditional statements often causes
bugs in code.
If a test is failing when you believe it should be passing (or the reverse), you
may be asserting the wrong thing. Typos often creep into assert statements.
Remember to properly structure your test so that it is setup - call method - check outputs. If you mix these up
the logic of the test may not be well formed, and the test may pass or fail when it should not.
Deciding which method to call (which implementation to use) is called…
method dispatch.