Midterm Exam Flashcards
object
data together with the operations you can perform on that data
class
specify objects, encapsulate data and methods together
Encapsulation
allows for access restrictions (“data hiding”) through non-public methods and non-public data
– allow users specific access
public vs private vs protected
- any class can access public methods/variables
- Private methods/variables can only be accessed by the class they are declared in.
- Protected methods and variables can only be accessed by the class they are declared in, or any subclasses of that class.
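A minimal sketch of the three access levels (class names `Animal`/`Dog` are made up for illustration):

```java
// Hypothetical classes demonstrating public / private / protected access.
class Animal {
    public String name = "Rex";        // any class can read/write this
    private int age = 3;               // only Animal's own methods can touch this
    protected String sound = "woof";   // Animal and its subclasses can touch this

    public int getAge() { return age; } // public accessor exposing private data
}

class Dog extends Animal {
    public String describe() {
        // 'sound' is protected, so a subclass may use it;
        // using 'age' directly here would not compile
        return name + " says " + sound;
    }
}

public class AccessDemo {
    public static void main(String[] args) {
        Dog d = new Dog();
        System.out.println(d.describe()); // Rex says woof
        System.out.println(d.getAge());   // 3 (via the public accessor)
    }
}
```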
constructors
create instances of classes (AKA objects)
data abstraction
- the user of a class knows the nature of the data and the public methods, but not the implementation details
The user does not need to know how something works!
encapsulation VS data abstraction
Encapsulation allows for implementation details (data and methods) to be restricted
Data abstraction allows for a user to use the class effectively without having to know these implementation details
static
static methods and variables are not part of the Objects that classes create
– rather they are part of the Class themselves.
final
final variables, methods, and classes are constant
***disallows overrides
- final variables can’t be reassigned
- final methods can’t be overridden
- final classes can’t be subclassed
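The three uses of `final` above can be seen in one sketch (the `Shape`/`Circle` names are invented for the example):

```java
public class FinalDemo {
    static final int MAX = 10;            // final variable: cannot be reassigned

    static class Shape {
        final String kind;
        Shape(String kind) { this.kind = kind; }
        final String label() { return "shape:" + kind; } // final method: cannot be overridden
    }

    static final class Circle extends Shape {  // final class: cannot itself be subclassed
        Circle() { super("circle"); }
    }

    public static void main(String[] args) {
        // MAX = 11;  // would not compile: final variable
        System.out.println(new Circle().label()); // shape:circle
    }
}
```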
references
Other than primitive types, all Java variables are references
- a variable that holds the memory address of an object. – points to the location where the object’s data is stored.
.equals()
compares the contents of Strings; the == operator compares references (memory addresses)
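A quick demonstration of the difference between == and .equals() for Strings:

```java
public class EqualsDemo {
    public static void main(String[] args) {
        String a = new String("tea");
        String b = new String("tea");
        System.out.println(a == b);       // false: two different objects (references differ)
        System.out.println(a.equals(b));  // true: same characters (contents match)
    }
}
```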
subclasses
can call the constructor of a superclass from a subclass
when to use an overriding method?
When a subclass defines a method with:
– The same name
– The same number and type of parameters
– The same return type
As a method in the superclass, then the subclass is overriding the superclass’s method.
If you want to call the overridden method from the subclass…
use super
if you omit super?
the method calls itself, which calls itself again, repeating until the program crashes with a StackOverflowError
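A sketch of calling an overridden method with `super` (the `Beverage`/`Coffee` classes are hypothetical):

```java
class Beverage {
    String describe() { return "a beverage"; }
}

class Coffee extends Beverage {
    @Override
    String describe() {
        // super.describe() runs the superclass version; calling plain describe()
        // here would invoke this method again and recurse until a StackOverflowError
        return super.describe() + ", specifically coffee";
    }
}

public class SuperDemo {
    public static void main(String[] args) {
        System.out.println(new Coffee().describe()); // a beverage, specifically coffee
    }
}
```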
What if we don’t want our methods to be overridden?
disallow overrides by marking methods as final
The Object Class
- Every object we work with extends from Object
- contains certain methods including: toString, equals, clone
- In most cases we want to override these methods
toString()
returns a String representation of the Object
equals()
equals(Object other)
– returns a boolean whether the current Object is equal to other.
clone()
returns a copy of the current Object
What happens if the Object we want to clone() contains other Objects as data fields?
– we need to copy (shallow or deep)
shallow copy
- just assign those references to the new Object.
– both copies will have references to the same nested ____ object
ex/ Tea class has a field called ‘milk’ which is of type String
– Doing a shallow copy of myTea will produce a separate Tea Object that points to the same milk String that the original myTea does.
**if the shared object is modified, both the original and the shallow copy see the change, because they point to the same Object
deep copy
all nested objects must also be copied
ex/ Doing a deep copy of myTea will also deep copy the objects it points to, so Milk also gets copied.
** if modified, the copy remains unchanged
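The Tea example above can be sketched in code. Here `Milk` is a small mutable class (instead of a String, which is immutable) so the difference between the two copies is visible; the class names and copy methods are made up for illustration:

```java
// Hypothetical Tea class illustrating shallow vs. deep copies of a nested object.
class Milk {
    String kind;
    Milk(String kind) { this.kind = kind; }
}

class Tea {
    Milk milk;
    Tea(Milk milk) { this.milk = milk; }

    Tea shallowCopy() { return new Tea(this.milk); }           // shares the same Milk
    Tea deepCopy()    { return new Tea(new Milk(milk.kind)); } // copies the Milk too
}

public class CopyDemo {
    public static void main(String[] args) {
        Tea original = new Tea(new Milk("oat"));
        Tea shallow  = original.shallowCopy();
        Tea deep     = original.deepCopy();

        original.milk.kind = "whole";          // mutate the nested object
        System.out.println(shallow.milk.kind); // whole (shared reference sees the change)
        System.out.println(deep.milk.kind);    // oat   (deep copy kept its own Milk)
    }
}
```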
abstract methods/classes
-Abstract methods cannot be private, static, or final
- Any class with at least one abstract method must be declared as an abstract class
- Constructors cannot be abstract
What if we need a subclass to be derived from two different super classes?
(multiple inheritance)
This isn’t allowed in Java. In Java, a subclass can only have one superclass.
*** but there is another way by using INTERFACES
interfaces
a component that declares one or more public methods
– Should have comments to inform users
– Any data fields should be public final static
*** Focus on what needs to be done instead of how
- divides the class into 2 parts:
1. interface (what)
2. implementation (how)
Abstract Classes vs. Interfaces
- Interfaces aren’t classes
- Abstract classes are better when you want to declare data fields or methods that your subclasses will have in common
- A class can implement MANY interfaces but can extend only ONE other class.
generic data types
Enable you to write a placeholder instead of an actual class type
<T>
- the placeholder T is called a type parameter
? super T
any type that is a superclass of T
T extends Comparable<…>
means any type that implements the Comparable interface for whatever type is inside of the <>
the method will work for any type T that can be compared to itself or to a superclass of T
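A sketch of a generic method using this bound (the `max` method is an invented example):

```java
public class GenericMaxDemo {
    // T extends Comparable<? super T>: T can be compared to itself
    // or to any of its superclasses.
    static <T extends Comparable<? super T>> T max(T[] items) {
        T best = items[0];
        for (T item : items)
            if (item.compareTo(best) > 0) best = item;
        return best;
    }

    public static void main(String[] args) {
        System.out.println(max(new Integer[]{3, 9, 4}));     // 9
        System.out.println(max(new String[]{"ant", "bee"})); // bee
    }
}
```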
Abstract Data Types (ADTs)
- data type whose functionality is separated from its implementation
- Users of the ADT don’t need to know the implementation
– only need to know the functionality
what methods do not all objects have?
compareTo()
they have:
- equals()
- toString()
- clone()
<T> can be used to:
- swap 2 items in an array
- sort
and more!!
on any type (T)
wild card operator (?)
- ? used to represent an unknown class type
? extends ___ (any subclass of ___)
? super ____ (any superclass of ___)
the Bag ADT
no rules about:
- how many items to put in
- order of items
- duplicate items
- types of items that can go in (beyond the single generic type T, which keeps the items homogeneous)
BagInterface methods
- int getCurrentSize()
- boolean isEmpty()
- boolean add(T newEntry)
- T remove()
- boolean remove(T anEntry)
- void clear()
- int getFrequencyOf(T anEntry)
- boolean contains(T anEntry)
- T[] toArray()
***bags allow for adding/removing very fast
generics types purpose
used to represent ADTs in java
– noted as “T” types
** a class can declare more than one generic type parameter
possible errors when adding/removing from arrays
- array is full
- different type of object
preconditions VS postconditions
Preconditions specify the assumed state of the ADT BEFORE a method is executed
Postconditions specify the intended state of the ADT AFTER method execution
**Using these, we can infer the method’s effect
** usually specified in the comments
example pre/post conditions for add
pre: bag contains N items
post: bag contains N+1 items
– new entry is now in bag
abnormal situations for bags / how to handle them
- Bag is in an invalid state
- Bag is full
- newEntry is an invalid Object
Ways to handle them:
- Return false, signifying that the add operation was not successful
- Throw an Exception to explain why we can’t continue
interfaces
- define a contract for data structures to follow
ex/ ADT is an interface for a specific data structure
What does ? extends T describe?
refers to T itself or any class that extends the generic type T
pros/cons of using an array for bag
pros:
- adding an entry to the bag is fast
- removing an entry is fast
cons:
- may be wasteful of memory (doubles to increase size but we only need one more spot)
- doubling capacity requires copying each item to a new array (gets slower every time)
Bag: Linked Chain Implementation
- stores each item individually in memory
- items are not stored next to each other (like in array bag)… so not stored contiguously in memory
- NO INDEXES: each item stores a reference to the next item in the chain
** can still go through the whole chain and access each item as if we were looping through an array
** last item points to null
pros of using linked chains
- Bag can grow and shrink in size as needed.
- Remove and ‘recycle’ nodes that are no longer needed
- Java’s Garbage Collector will clean up nodes that don’t have any references
ArrayBag<T> copy constructor
Constructs an ArrayBag that contains all the items that the other bag contains
A loop moves the items from the old bag to the new bag
Iterations = number of items
The loop takes longer to complete the more items there are in the array bags
SLOW (O(n)), and the copies are shallow
node class
- private inner class
- exists only inside of the LinkedBag class
- The Node class and its objects are only accessed from inside the LinkedBag class
linked chains are…
dynamic!!!
– grow/shrink depending on the amount of data unlike arrays
LinkedBag .add() Visualization
- firstNode starts as null
- create newNode with data
- point newNode at the current firstNode
- then assign firstNode to be newNode, so the next added item links in front of it
** gives access to the first node in chain from LinkedBag class
*** the most recently added item is firstNode
*** adds to beginning of list, so we do not have to navigate to the end of list with each add
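The add() steps above can be sketched as a stripped-down LinkedBag (a minimal sketch, not the full course implementation; `first()` is an invented helper for inspecting the chain):

```java
// Minimal sketch of LinkedBag.add(): new entries go at the front of the chain.
public class LinkedBagDemo<T> {
    private class Node {
        private T data;
        private Node next;
        private Node(T data, Node next) { this.data = data; this.next = next; }
    }

    private Node firstNode = null;   // starts as null for an empty bag
    private int size = 0;

    public boolean add(T newEntry) {
        firstNode = new Node(newEntry, firstNode); // new node points at old first
        size++;
        return true;                               // O(1): no traversal needed
    }

    public T first() { return firstNode == null ? null : firstNode.data; }
    public int getCurrentSize() { return size; }

    public static void main(String[] args) {
        LinkedBagDemo<String> bag = new LinkedBagDemo<>();
        bag.add("tea");
        bag.add("milk");
        System.out.println(bag.first());          // milk (most recent add is firstNode)
        System.out.println(bag.getCurrentSize()); // 2
    }
}
```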
why unordered bag is an advantage?
Items in a Bag are unordered to our advantage.
We can’t access specific items quickly, but we can add them quickly!
This is extra useful for systems that ingest a lot of data but don’t access the data very often
what happens to first node when removing?
The original firstNode is no longer referenced by anything else, so it cannot be accessed by the Java program anymore.
Java will mark this memory as “free” so another variable/object can use it. (Garbage Collector)
Arraybag VS LinkedChain
Fixed VS dynamic size
Slower VS faster (arrays risk resizing and running out of space; chains grow automatically with no extra copying)
linked chain cons
- Data isn’t stored contiguously in memory
- Can’t use index-based retrieval (need to loop from the beginning every time)
what happens to a Node that is no longer referenced?
it gets garbage collected
differences between LinkedBag and LinkedList
Linked data structures use Nodes & references to keep track of data
– Unlike Arrays, which store data contiguously and use indexes to access data
Bag and List are both interfaces
One key difference: Bag doesn’t care about order, but List does.
LinkedBag uses a linked chain to implement the Bag interface
LinkedList uses a linked chain to implement the List
Cons of Linked Chains
- removing a certain element requires searching the chain
– (arrays can be even slower here, because items must be shifted over after removal)
- chains require twice the memory of an array of the same logical size
– (every node has 2 parts: data and a reference to the next node)
- retrieving a specific item requires iterating through the chain
– (arrays are faster due to indexes)
how many bytes of space do references types in Java take up?
- 4 bytes of space
ex/ chain with 3 items
– 3 * 2 * 4 = 24 bytes of memory (each node holds 2 references)
ex/ array with 3 items
– 3 * 4 = 12 bytes
finding the address of indexes in arrays
- all ints are 4 bytes
- move up 4 bytes every index
- index 0 address 0
- index 1 address 4
- index 4 address 16
** quickly access array items by taking the start address and adding (index * item_size[byte])
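The address arithmetic above can be sketched as a tiny helper (the base address 0 and 4-byte item size are the card's assumptions, not real JVM addresses):

```java
public class AddressDemo {
    // address = start address + index * item size (assumes 4-byte ints, base address 0)
    static int address(int startAddress, int index, int itemSize) {
        return startAddress + index * itemSize;
    }

    public static void main(String[] args) {
        for (int index : new int[]{0, 1, 4}) {
            System.out.println("index " + index + " -> address " + address(0, index, 4));
        }
        // index 0 -> address 0, index 1 -> address 4, index 4 -> address 16
    }
}
```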
Why do we care about efficiency?
- lots of data out there
- need more and more efficient algorithms to process data at a reasonable rate
- reaching the limit of computing (transistors)
RIP Moore’s Law
Moore’s Law
Moore’s Law (Gordon Moore, 1965): ‘The number of transistors that can fit on a single microchip will double every 18 months.’
This was great until about 1999 when we realized we wouldn’t be able to keep up with this. It started fading away around 2015. Now, we’re incrementally adding more transistors but not at the same rate.
algorithm analysis
- algorithms have time and space complexity
- studying time and space complexities (memory and time usage)
basic operations
- most significant contributor to an algorithm’s total time complexity
- examples: adding, multiplying, dividing
- total basic operations:
algorithm:
A) n
B) (n^2 + n) / 2
C) 3
counting basic operations
Addition:
A) n
B) n(n+1)/2
C) 1
Multiplication:
A) 0
B) 0
C) 1
Division:
A) 0
B) 0
C) 1
- total basic operations:
algorithm:
A) n
B) (n^2 + n) / 2
C) 3
Visualizing # operations with larger inputs
# of basic operations
A) n
- linear
B) (n^2 + n) / 2
- quadratic
C) 3
- constant
Why do we like to avoid factorial-time algorithms?
For an input of size 100 (n = 100), a factorial algorithm would need about 9.33E+157 basic operations to complete. That is an astronomically large number.
VERY INEFFICIENT: it would take far too long to finish
time complexity of an algorithm
1) count the number of executed steps
– set step = 1
– loops = n (iterations)
2) add them all together
– ex/ 1 + n + n + 1 = 2n + 2
(runtime complexity)
- f(n) = # of executed steps
- n = input size
*** loops increase time complexity due to number of iterations
Big-O Notation definition
- ignores lower-order terms
- provides a more generalized runtime
function f(n) is of order at most g(n): f(n) = O(g(n))
***useful for comparing time complexity across algorithms
— best-worst-average cases
Using Big-O Notation
1) Ignore lower order terms
– constant < log log n < log n < log^2 n < n < n log n < n^2 < n^3 < 2^n < n!
2) Ignore constant factors (c)
– c * n = O(n)
– c + n = O(n)
– but 2^(cn) is not O(2^n)
*** drop lower-order terms and constant factors
*** a constant multiplying an exponent (as in 2^(cn)) cannot be dropped
Big O Notation Order
– decreasing
n!
2^n
n^3
n^2
n log n
n
log^2 n
log n
log log n
1
Big O Example
1) f(n) = 3n! + n^3 + 2^n
Ignore lower orders. n! is the highest order, so drop n^3 and 2^n
2) f(n) = 3n!
Ignore constant coefficients
3) O(n!)
Best, Worst, and Average Cases
- sometimes algorithms can perform better with specific inputs
Best case: item is the first one in the list: O(1) (constant time)
Worst case: item is last in the list: O(n) (linear time)
(LINEAR – looping through every iteration)
Average case: item is ½ way through the list: O(0.5n) which is really O(n)
What is the Big-O form of this function?
f(n) = n * 3n^2 - 2
O(n^3)
Big O analysis of Bags
- The best case for removing (or checking for contains) an item is that the item is at the front of the linked chain (or array).
The average case is the item being in the middle, so O(n/2), which is O(n)
- The worst case for remove/contains is the item being at the end (or absent): LINEAR, O(n)
Why is the worst case for add constant? – this array bag is not resizable, so when the bag is full add simply returns false: O(1)
List ADT vs. Bag ADT
Lists:
- position and arrangement of items matters
- order must be preserved after add/insert/remove
- items can be inserted into the middle of the List
*** cannot do these things with a bag → just adds it to the bag (not a specific position in the bag)
Bags:
- do not care about the ordering of items
removing an item from the list
- removing an item from the List requires shifting everything after it over to fill the gap.
Also notice when the array becomes full, the capacity is doubled and the items are copied over.
Using a Linked Chain to implement the List ADT
Using a Linked Chain, we can trivially add new items.
Additionally, removing items doesn’t require copying/pasting data into a new array.
– doesn’t worry about resizing
– adds to the end of a linked list
LinkedBag vs LinkedList
LinkedBag didn’t have to preserve order, so it added items to the beginning.
LinkedList has to preserve order (since you can insert items partway through the list) so it adds everything to the end.
getNodeAt for LList
getNodeAt is private because users of the LList class (or, more specifically, the ListInterface) don’t care about Nodes and references. They just care about the data being in the correct order!
Big O Time Comparison Between AList and LList
clear()
- AList = O(1)
- LList = O(1)
contains(T anEntry)
- AList = O(n)
- LList = O(n)
replace(int givenPosition, T newEntry)
- AList = O(1)
- LList = O(n)
getEntry(int givenPosition)
- AList = O(1)
- LList = O(n)
add()
- AList = O(1)
- LList = O(1)
improving big O between AList and LList
AList.clear() can be improved by simply declaring a new array of the same size. This avoids looping through the array to set every cell to null.
LList.add(T newEntry) can be improved by modifying the LList to keep a reference to the lastNode in addition to the firstNode. This makes add() as simple as setting lastNode to the new Node and pointing the old lastNode to the new Node.
What is the Big O time complexity of this code?
int n = 1000;
int sum = 0;
for(int i=0; i<n; i++) {
for(int j=0; j<n; j++) {
sum += i + j;
}
}
return sum;
O(n^2)
How many operations are happening here (f())? and what is the Big O runtime?
for(int i=0; i<n; i++) {
for(int j=0; j<n; j++) {
for(int k=0; k<n; k++) {
value += i + j;
value -= k;
}
}
}
f(n) = 3n^3, so O(n^3)
What’s the worst-case runtime?
O(n) LINEAR
binary search
- splits the search range in half each iteration
- as long as the list is sorted, we can check if the item is in a given range or not very easily.
binary search Big O runtime
O (log n)
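A minimal binary search sketch over a sorted int array (the method name `search` is invented for the example):

```java
public class BinarySearchDemo {
    // Returns the index of target in a sorted array, or -1 if absent.
    static int search(int[] sorted, int target) {
        int first = 0, last = sorted.length - 1;
        while (first <= last) {
            int mid = (first + last) / 2;   // integer division finds the middle
            if (sorted[mid] == target) return mid;
            else if (sorted[mid] < target) first = mid + 1; // search right half
            else last = mid - 1;                            // search left half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {2, 5, 8, 12, 16, 23, 38};
        System.out.println(search(data, 23)); // 5
        System.out.println(search(data, 7));  // -1
    }
}
```

Each iteration halves the remaining search range, which is where the O(log n) runtime comes from.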
Stack ADT
- order matters
- can only access items from the “top”
- can only add items to the “top”
- can only remove items from the “top”
– can use an array or a linked chain to implement a stack
(push, pop, peek)
push VS pop VS peek
push = adds an item to the stack
pop = removes item from the stack
peek = fetch the top element of the Stack without removing it
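The three operations in action, using `java.util.ArrayDeque` (the class the Java docs recommend for stacks):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackOpsDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        stack.push(3);                    // top is now 3
        System.out.println(stack.peek()); // 3 (top is unchanged)
        System.out.println(stack.pop());  // 3 (top is removed)
        System.out.println(stack.peek()); // 2
    }
}
```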
ArrayStack
Stack operations always happen at the ‘top’ of the Stack
- keep the ‘top’ at the END of the array, so items never need to shift when you push or pop
- storing the ‘top’ at the beginning of the array is inefficient (every push/pop shifts everything)
2 ways to implement stacks
an array (ArrayStack) or a linked chain (LinkedStack)
ArrayStack.push() (the inefficient way)
we have to shift existing items over, which is an O(n) operation
– have to do “n” shifts to move everything over one
– remove items at index 0 and shift
ArrayStack.pop() (the inefficient way)
we have to shift existing items over as we pop them, which is an O(n) operation.
– still linear running time complexity
ArrayStack.push() (efficient way)
we don’t need to shift items over if we keep track of the index that ‘top’ is at. So push() when adding to the end of the array is O(1).
– constant time operation; independent of the size of the stack
– array can get full, we have to resize which involves copying everything over (linear operation O(n))
ArrayStack.pop() (efficient way)
we no longer have to shift anything over, so the runtime for pop() is O(1)
– constant time running complexity
Array Implementation Time Complexities
.push(T newEntry): O(n) with top at beginning, O(1) with top at end
.pop(): O(n) with top at beginning, O(1) with top at end
.peek(): O(1) either way
.isEmpty(): O(1) either way
*** push and pop with top at beginning of array is the most inefficient O(n)
LinkedStack.push(), pop(), peek()
Push, pop, and peek on a LinkedStack are all O(1) operations, as the stack size does not influence time
NEVER HAVE TO RESIZE (BENEFIT OF USING LINKED STACK)
Con of LinkedStack
- Cannot directly access an element
- Cannot use indexing ^^^^
- Always using a linear search from beginning or end working one by one through elements in the chain
- Takes up roughly twice as much memory as an array of the same logical size (each node stores data plus a next reference)
Algorithm that checks for Balanced Parentheses runtime
O(n) since it loops through the input array one time (n = size of input, so in this case it’s the length of the char array)
algebraic notation and stacks
Algebraic expressions take different forms:
1) Infix: each binary operator appears between its operands a + b
2) Prefix: each binary operator appears before its operands + a b
3) Postfix: each binary operator appears after its operands a b +
prefix and postfix for algebraic notation
Prefix and Postfix are easy to evaluate for computers!
– No need for parentheses
– No need for order of operations (sorry Aunt Sally)
** why prefix/postfix notation are better (can be solved algorithmically)
How computers evaluate Infix notations with Stacks
Computers aren’t good with infix (e.g. a + b) notation. So, in order to evaluate infix statements, computers must:
- Make sure the expression is balanced (parentheses, brackets, etc. are called delimiters)
- Convert the expression from Infix to Postfix
- Evaluate the Postfix expression
Checking for Balanced Delimiters
- Iterate over the statement (“for each character”)
- If it’s a number or operation, ignore it
- If it’s an opening delimiter: ( [ {
— push() it onto the stack
- If it’s a closing delimiter: ) ] }
— If the result of pop() is the opening match, discard both
— If the result of pop() isn’t a match, return false
- Repeat for every character. If you get to the end, return isEmpty()
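The steps above can be sketched as a balance checker (a minimal sketch using ArrayDeque as the stack; `isBalanced` is an invented name):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BalanceDemo {
    static boolean isBalanced(String expr) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : expr.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);                     // opening delimiter: push it
            } else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty()) return false; // closer with nothing to match
                char open = stack.pop();
                boolean match = (c == ')' && open == '(')
                             || (c == ']' && open == '[')
                             || (c == '}' && open == '{');
                if (!match) return false;          // pop() wasn't the opening match
            }
            // numbers and operators are ignored
        }
        return stack.isEmpty(); // leftover openers mean it is not balanced
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("[((){[]}())]")); // true
        System.out.println(isBalanced("[{}])"));        // false
    }
}
```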
balanced?
[ { } ] ) → not balanced
[ ] { ( ) → not balanced
[ ( () { [ ] } () ) ] → balanced
Converting from Infix to Postfix
- Create a String for output and a Stack for operators
For each character:
- If it’s an operand (i.e. a number or variable), append it to output
- If it’s the ^ operator, push() it to the operators stack (special case)
- If it’s any other operator (+, -, *, /), pop() operators from the operators stack and append each to output until the stack is empty or until peek() returns an operator with a lower precedence than the new operator. Then push() the new operator onto the operators stack.
- If it’s an open parenthesis (“(“), push() it onto the operators stack
- If it’s a close parenthesis (“)”), pop() operators from the stack and append to output until an open parenthesis (“(“) is popped. Discard both parentheses.
- After the last character, pop() and append any remaining operators to output
precedence for stacks
Excuse My Dear Aunt Sally, Please:
1. Exponents
2. Multiplication/Division
3. Addition/Subtraction
4. Parentheses
Note: this is different than regular algebra! Parentheses are special operators so they are treated as ‘lowest precedence’
Expression: 3 * ( 5 / [ 4 ^ {7 - 1}] + 2)
what is the post fix?
3 5 4 7 1 - ^ / 2 + *
(5 - 2 + 3) * (3 ^ 4)
post fix?
5 2 - 3 + 3 4 ^ *
Evaluating Postfix using a Stack
- Initialize an empty Stack
- For each character in Postfix expression:
- If it’s an operand (number or variable), .push() it onto the stack
- If it’s an operator (+, -, *, /, ^):
— .pop() second operand from stack
— .pop() first operand from stack
— Apply operator on the two operands
— .push() result onto the stack
- Return the remaining value in the stack
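The postfix evaluation steps can be sketched as follows (a minimal sketch handling single-token integer operands; `evaluate` is an invented name):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PostfixDemo {
    static int evaluate(String postfix) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : postfix.trim().split("\\s+")) {
            if (token.matches("-?\\d+")) {
                stack.push(Integer.parseInt(token));  // operand: push it
            } else {
                int second = stack.pop();             // second operand popped first
                int first = stack.pop();
                switch (token) {
                    case "+": stack.push(first + second); break;
                    case "-": stack.push(first - second); break;
                    case "*": stack.push(first * second); break;
                    case "/": stack.push(first / second); break; // integer division
                    case "^": stack.push((int) Math.pow(first, second)); break;
                }
            }
        }
        return stack.pop(); // the remaining value is the result
    }

    public static void main(String[] args) {
        System.out.println(evaluate("5 2 - 3 + 3 4 ^ *")); // 486
        System.out.println(evaluate("7 2 ^ 3 - 6 /"));     // 7
    }
}
```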
5 2 - 3 + 3 4 ^ *
evaluating postfix
486
7 2 ^ 3 - 6 /
evaluating postfix
7 (remember integer division…no decimals)
Runtime Stack (aka Program Stack)
The OS creates a stack for each running program
The stack is used to hold data for each method call
***activation record
- When a method is called, its activation record is pushed onto the stack!
- When a method returns, the top activation record is popped
Program Counter (PC)
- The Program Stack starts with an activation record for main
- PC keeps track of the next instruction to execute
countDown()
- Print integer
- If integer is greater than 1:
— call countDown(integer - 1)
- Else:
— Don’t do anything
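The countDown() card can be sketched in code. This version returns the printed sequence as a String instead of printing directly, so the result is easy to check; that design choice is mine, not the course's:

```java
public class CountDownDemo {
    // Produce "n n-1 ... 1" recursively.
    static String countDown(int n) {
        if (n > 1) {
            return n + " " + countDown(n - 1); // smaller caller: same problem, smaller input
        }
        return String.valueOf(n);              // base case: no recursion
    }

    public static void main(String[] args) {
        System.out.println(countDown(3)); // 3 2 1
    }
}
```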
recursion
Recursion is a way to solve problems by breaking them into smaller kinds of the same problem
A method that calls itself is considered recursive
The invocation is a recursive call, recursive invocation, or smaller caller
Designing Recursive methods
1) method must be given an input value
2) must behave differently depending on that input
3) One or more cases should provide solution that does not require recursion (base case)
4) One or more cases must include a smaller caller
Important Distinction between Iterative and Recursive methods
- Iterative methods exclusively use loops
- Recursive methods call themselves; can also contain loops
Stack of Activation Records
- Each call to a method generates an activation record
- Recursive method uses more memory than an iterative method
- If a recursive call generates too many activation records, could cause stack overflow
How can we find the middle of an array?
- integer division
int mid = (first + last) / 2;
Given a recurrence relation:
t(1) = 1
t(n) = 1 + t(n - 1) for n > 1
- Intuit the running time complexity
- Prove the base case
- Prove the inductive step
- done!
Proof by Mathematical Induction: Inductive Step
Example
t(1) = 1
t(n) = 1 + t(n - 1) for n > 1
*** Need to prove that t(n) = n
Inductive Step:
Assume the claim t(n) = n is true for all values < k, then prove it is true for k
- Inductive hypothesis: t(n) = n for all n < k
- We want to prove that t(k) = k
- From the recurrence, we know t(k) = 1 + t(k - 1)
- From the hypothesis, t(k - 1) = k - 1, so t(k) = 1 + (k - 1) = k
- End of proof that t(n) = n
- Runtime Complexity of factorial() is in fact O(n)
Big O of logarithmic recursive calls
O(log n)
So we’ve proved that t(n) = log2(n) + 1
So… What’s the Big O of log2(n) + 1?
Using Recursion for Towers of Hanoi
Goal: Move all disks to third pole
Rules:
1. Move one disk at a time
2. Can only move the top disk on a stack
3. No disk can rest on a smaller disk
*** recursive algorithm can be used to solve any number of disks
What ADT are the Activation Record and Towers of Hanoi best suited for?
stack
Proof by Mathematical Induction Stepwise
- Build the base case and recurrence relation for the function by counting # of operations
– More operations = more time. t(n): the amount of time the algorithm takes on an input of size n
- Evaluate t(n) for various values of n and use intuition to guess the running time
- Try to prove your guess using induction
– If unsuccessful, go back to step 2
backtracking
Backtracking means to retrace your steps.
When you need to choose among several potential solutions, you try one and see whether you are successful. If you are, you’re done!
If you aren’t successful, backtrack to an earlier state and try a different solution.
If you backtrack and still don’t have any other solutions to choose from, keep backtracking.
Backtracking to Solve 8 Queens
- To place 8 queens on a board we need to think recursively:
- Try to place a queen in a specific column
– Use a loop to try all 8 squares in the column - If the current square isn’t legal, remove the queen
- If the current square is legal, keep it there and recurse
– If this doesn’t lead to a solution, remove the queen and move on - When all squares in a column have been tried, backtrack to the previous column
– If we can’t place a queen in column i, there’s no point in trying column i+1. Instead, backtrack and move the queen in column i-1 before moving forward.
Solving 8 Queens Iteratively
Each time we find that a certain square doesn’t work, we’d need to record that to make sure we don’t try it again
We’d also need to keep track of the state of the board at various points (especially if we backtrack)
The runtime stack stores all of this information for us when we use recursion. If we wanted to do this iteratively, we would have to use our own Stack instead.
LINEAR runtime O(n)
divide and conquer algorithm
Solve a problem by repeatedly breaking it into smaller subproblems that can eventually be solved directly
Usually the subproblems are a fraction of the size of the original problem
good for recursion!!!
Limitations of Recursion
- Each Activation record (recursive call) takes up space on the Runtime stack and takes time to create.
- Recursive solutions generally run slower than iterative ones due to activation-record overhead
- We can convert recursive solutions to iterative ones so they run faster
Tail Recursion
A recursive method where the recursive call comes last
** After a tail-recursive method runs, nothing is done afterwards
converting to tail recursion to iterative solution
To convert tail-recursion to an iterative solution:
- Make sure the recursive call is inside an if statement
- Change the if statement to a while loop
- Update variable(s) in the loop body instead of updating them in the recursive call
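The conversion recipe above, applied to a tail-recursive sum of 1..n (an invented example to keep it small):

```java
public class TailDemo {
    // Tail-recursive sum of 1..n: the recursive call is the last thing done.
    static int sumRec(int n, int acc) {
        if (n > 0) {
            return sumRec(n - 1, acc + n);
        }
        return acc;
    }

    // Same logic converted: the 'if' became a 'while', and the variables
    // are updated in the loop body instead of in the recursive call.
    static int sumIter(int n) {
        int acc = 0;
        while (n > 0) {
            acc += n;
            n--;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumRec(5, 0)); // 15
        System.out.println(sumIter(5));   // 15
    }
}
```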
Recursion Trees/Fibonacci
We can use Recursion trees to estimate running time complexity!
the number of nodes roughly doubles at each level
each node is O(1)
fib() is O(2^n)
fib() has its downsides
We end up recomputing a lot of things more than once! For example, computing fib(5) evaluates:
fib(2) 3x
fib(1) 5x
fib(0) 3x
Can we improve code when we call the same method with the same arguments multiple times?
Memoization is the process of storing results to past function calls so we don’t need to re-evaluate them every time.
This doesn’t affect the big O runtime complexity, but will generally make the program complete much faster for larger inputs.
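A memoized fib() sketch (the HashMap-based cache is one common way to do it; a simpler course version might use an array):

```java
import java.util.HashMap;
import java.util.Map;

public class MemoDemo {
    static Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;                        // base cases
        if (memo.containsKey(n)) return memo.get(n); // reuse a stored result
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);                         // store for next time
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
        System.out.println(fib(50)); // 12586269025 (fast thanks to memoization)
    }
}
```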
What’s the Big O time complexity of ArrayList.add(newEntry), assuming it uses a resizable array?
O(1) amortized (the occasional resize is O(n))
Amortized Time
Amortized Time is the total running time divided by the number of calls
Amortized vs. Average Runtime
Amortized running time is the average running time over a sequence of operations
– Useful when the operations have varying running time
Ex: add() is O(1) when we don’t have to resize but O(n) when we do
Average-case running time is the average over a probability distribution for the input
– Usually we assume it’s a uniform distribution
Ex: Assume each index in the array has the same probability of holding the target value in search()
Binary Search
splits our search space in half at each step, making its average and worst case running times better than sequential search
Average case running time
depends on the assumed probability distribution of the input data
What is worst-case runtime for binary search over a sorted linked chain?
O(n)
public VS private
Public: Can be accessed from outside of the class
Private: Can’t be accessed outside of the class
Static
Attached to a class, not an object
Final
Cannot be reassigned (variables), overridden (methods), or subclassed (classes)
A class’s clone() method returns a new Object, but that object’s references still point to the cloned Object’s data. Is this an example of a deep copy or a shallow copy?
This is a shallow copy. Deep copies also make copies of the objects that the copied object references. Since the copy and the original point to the same data, this is a shallow copy.
True or False: In addition to classes that handle Generics, you can also create a Java method that works with Generic types, ex: public static <T> void foo(int bar, int baz)
TRUE
Which fields do Node classes typically have? (circle all that apply)
Options: String name, Node next, int index, T data
Answer: Node next, T data