CS2113/T Aug '19

    Week 10 [Oct 21]

    • [W10.1] Design Principles: Intermediate-Level

       How Polymorphism Works

    • [W10.1a] Paradigms → OOP → Inheritance → Substitutability

    • [W10.1b] Paradigms → OOP → Inheritance → Dynamic and Static Binding

    • [W10.1c] Paradigms → OOP → Polymorphism → How

       More Design Principles

    • [W10.1d] Principles → Liskov Substitution Principle

    • [W10.1e] Principles → Law of Demeter

    • [W10.1f] Principles → Interface Segregation Principle

    • [W10.1g] Principles → Dependency Inversion Principle

    • [W10.1h] Principles → SOLID Principles

    • [W10.1i] Principles → Brooks' Law

    • [W10.2] Design Patterns: Basics

       Introduction

    • [W10.2a] Design → Design Patterns → Introduction → What

    • [W10.2b] Design → Design Patterns → Introduction → Format

       Singleton pattern

    • [W10.2c] Design → Design Patterns → Singleton → What

    • [W10.2d] Design → Design Patterns → Singleton → Implementation

    • [W10.2e] Design → Design Patterns → Singleton → Evaluation

       Facade pattern

    • [W10.2f] Design → Design Patterns → Facade Pattern → What

       Command pattern

    • [W10.2g] Design → Design Patterns → Command Pattern → What

       Abstraction Occurrence pattern

    • [W10.2h] Design → Design Patterns → Abstraction Occurrence Pattern → What

    • [W10.3] Test Case Design
    • [W10.3a] Quality Assurance → Test Case Design → Introduction → What

    • [W10.3b] Quality Assurance → Testing → Exploratory and Scripted Testing → What

    • [W10.3c] Quality Assurance → Testing → Exploratory and Scripted Testing → When

    • [W10.3d] Quality Assurance → Test Case Design → Introduction → Positive vs Negative Test Cases

    • [W10.3e] Quality Assurance → Test Case Design → Introduction → Black Box vs Glass Box

    • [W10.3f] Quality Assurance → Test Case Design → Testing Based on Use Cases

       Equivalence Partitioning

    • [W10.3g] Quality Assurance → Test Case Design → Equivalence Partitions → What

    • [W10.3h] Quality Assurance → Test Case Design → Equivalence Partitions → Basic

    • [W10.3i] Quality Assurance → Test Case Design → Equivalence Partitions → Intermediate

       Boundary Value Analysis

    • [W10.3j] Quality Assurance → Test Case Design → Boundary Value Analysis → What

    • [W10.3k] Quality Assurance → Test Case Design → Boundary Value Analysis → How

    • [W10.4] Quality Assurance

       Testability & Coverage

    • [W10.4a] Quality Assurance → Testing → Introduction → Testability

    • [W10.4b] Quality Assurance → Testing → Test Coverage → What

    • [W10.4c] Quality Assurance → Testing → Test Coverage → How

    • [W10.4d] Tools → JUnit → JUnit: Intermediate

    • [W10.4e] Quality Assurance → Testing → Test-Driven Development → What

       Other QA Techniques

    • [W10.4f] Quality Assurance → Quality Assurance → Introduction → What

    • [W10.4g] Quality Assurance → Quality Assurance → Introduction → Validation vs Verification

    • [W10.4h] Quality Assurance → Quality Assurance → Code Reviews → What

    • [W10.4i] Quality Assurance → Quality Assurance → Static Analysis → What


    [W10.1] Design Principles: Intermediate-Level


    How Polymorphism Works

    W10.1a Paradigms → OOP → Inheritance → Substitutability

    Can explain substitutability

    Every instance of a subclass is an instance of the superclass, but not vice versa. As a result, inheritance allows substitutability: the ability to substitute a child class object where a parent class object is expected.

    For example, an AcademicStaff object is an instance of Staff, but a Staff object is not necessarily an instance of AcademicStaff. In other words, wherever an object of the superclass is expected, it can be substituted by an object of any of its subclasses.

    The following code is valid because an AcademicStaff object is substitutable as a Staff object.

    Staff staff = new AcademicStaff (); // OK
    

    But the following code is not valid because staff is declared as a Staff type and therefore its value may or may not be of type AcademicStaff, which is the type expected by variable academicStaff.

    Staff staff;
    ...
    AcademicStaff academicStaff = staff; // Not OK
    

    W10.1b Paradigms → OOP → Inheritance → Dynamic and Static Binding

    Can explain dynamic and static binding

    Dynamic binding (aka late binding): a mechanism where method calls in code are resolved at runtime, rather than at compile time.

    Overridden methods are resolved using dynamic binding, and therefore resolve to the implementation in the actual type of the object.

    Paradigms → OOP → Inheritance →

    Overriding

    Method overriding is when a sub-class changes the behavior inherited from the parent class by re-implementing the method. Overridden methods have the same name, same type signature, and same return type.

    Consider the following case of EvaluationReport class inheriting the Report class:

    Report method     EvaluationReport method    Overrides?
    print()           print()                    Yes
    write(String)     write(String)              Yes
    read():String     read(int):String           No. Reason: the two methods have different signatures; this is a case of overloading (rather than overriding).

    Paradigms → OOP → Inheritance →

    Overloading

    Method overloading is when there are multiple methods with the same name but different type signatures. Overloading is used to indicate that multiple operations do similar things but take different parameters.

    Type Signature: The type signature of an operation is the type sequence of the parameters. The return type and parameter names are not part of the type signature. However, the parameter order is significant.

    Example:

    Method                     Type signature
    int add(int X, int Y)      (int, int)
    void add(int A, int B)     (int, int)
    void m(int X, double Y)    (int, double)
    void m(double X, int Y)    (double, int)

    In the case below, the calculate method is overloaded because the two methods have the same name but different type signatures, (String) and (int):

    • calculate(String): void
    • calculate(int): void

    Which of these methods override another method? A is the parent class. B inherits A.

    • a
    • b
    • c
    • d
    • e

    d

    Explanation: Method overriding requires a method in a child class to use the same method name and the same parameter sequence used by one of its ancestors.

    Consider the code below. The declared type of s is Staff and it appears as if the adjustSalary(int) operation of the Staff class is invoked.

    void adjustSalary(int byPercent) {
        // staff: a collection of Staff objects; the runtime type of each element may be a Staff subclass
        for (Staff s : staff) {
            s.adjustSalary(byPercent);
        }
    }
    

    However, at runtime s can receive an object of any subclass of Staff. That means the adjustSalary(int) operation of the actual subclass object will be called. If the subclass does not override that operation, the operation defined in the superclass (in this case, Staff class) will be called.

    Static binding (aka early binding): When a method call is resolved at compile time.

    In contrast, overloaded methods are resolved using static binding.


    Note how the constructor is overloaded in the class below. The method call new Account() is bound to the first constructor at compile time.

    class Account {
    
        Account () {
            // Signature: ()
            ...
        }
    
        Account (String name, String number, double balance) {
            // Signature: (String, String, double)
            ...
        }
    }
    

    Similarly, the calculateGrade method is overloaded in the code below, and a method call calculateGrade("A1213232") is bound to the second implementation at compile time.

    void calculateGrade (int[] averages) { ... }
    void calculateGrade (String matric) { ... }
    

    W10.1c Paradigms → OOP → Polymorphism → How

    Can explain how substitutability, operation overriding, and dynamic binding relate to polymorphism

    Three concepts combine to achieve polymorphism: substitutability, operation overriding, and dynamic binding.

    • Substitutability: Because of substitutability, you can write code that expects objects of a parent class and yet use that code with objects of child classes. That is how polymorphism is able to treat objects of different types as one type.
    • Overriding: To get polymorphic behavior from an operation, the operation in the superclass needs to be overridden in each of the subclasses. That is how overriding allows objects of different subclasses to display different behaviors in response to the same method call.
    • Dynamic binding: Calls to overridden methods are bound to the implementation of the actual object's class dynamically at runtime. That is how the polymorphic code can call the method of the parent class and yet execute the implementation of the child class. The three concepts working together are illustrated in the sketch below.
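    Here is a minimal Java sketch, reusing the Staff and AcademicStaff names from the earlier examples (the getRole() method and the AdminStaff class are hypothetical additions, used only for illustration):

    class Staff {
        String getRole() {
            return "staff member";
        }
    }

    class AcademicStaff extends Staff {
        @Override
        String getRole() { // overriding: same name, same type signature
            return "academic staff";
        }
    }

    class AdminStaff extends Staff {
        @Override
        String getRole() {
            return "admin staff";
        }
    }

    public class PolymorphismExample {
        // Written against the parent type Staff ...
        static void printRole(Staff s) {
            // ... but dynamic binding resolves getRole() to the actual object's class at runtime.
            System.out.println(s.getRole());
        }

        public static void main(String[] args) {
            printRole(new AcademicStaff()); // substitutability: child object where a parent is expected
            printRole(new AdminStaff());    // prints "admin staff"
        }
    }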

    Which one of these is least related to how OO programs achieve polymorphism?

    (c)

    Explanation: Operation overriding is the one that is related, not operation overloading.


    More Design Principles

    W10.1d Principles → Liskov Substitution Principle

    Can explain Liskov Substitution Principle

    Liskov Substitution Principle (LSP): Derived classes must be substitutable for their base classes. -- proposed by Barbara Liskov

    LSP sounds the same as substitutability, but it goes beyond substitutability; LSP implies that a subclass should not be more restrictive than the behavior specified by the superclass. As we know, Java has language support for substitutability. However, if LSP is not followed, substituting a subclass object for a superclass object can break the functionality of the code.

    Paradigms → OOP → Inheritance → Substitutability: covered in W10.1a above.

    Suppose the Payroll class depends on the adjustMySalary(int percent) method of the Staff class. Furthermore, the Staff class states that the adjustMySalary method will work for all positive percent values. Both Admin and Academic classes override the adjustMySalary method.

    Now consider the following:

    • Admin#adjustMySalary method works for both negative and positive percent values.
    • Academic#adjustMySalary method works for percent values 1..100 only.

    In the above scenario,

    • Admin class follows LSP because it fulfills Payroll’s expectation of Staff objects (i.e. it works for all positive values). Substituting Admin objects for Staff objects will not break the Payroll class functionality.
    • Academic class violates LSP because it will not work for percent values over 100 as expected by the Payroll class. Substituting Academic objects for Staff objects can potentially break the Payroll class functionality.

    Rectangle#resize() can take any integers for height and width. This contract is violated by the subclass Square#resize(), because it does not accept a height that is different from the width.

    class Rectangle {
        ...
        /** sets the size to given height and width*/
        void resize(int height, int width){
            ...
        }
    }
    
    
    class Square extends Rectangle {

        @Override
        void resize(int height, int width) {
            if (height != width) {
                // error
            }
        }
    }
    

    Now consider the following method that is written to work with the Rectangle class.

    void makeSameSize(Rectangle original, Rectangle toResize){
        toResize.resize(original.getHeight(), original.getWidth());
    }
    

    This code will fail if it is called as makeSameSize(new Rectangle(12, 8), new Square(4, 4)). That is, the Square class is not substitutable for the Rectangle class.

    If a subclass imposes more restrictive conditions than its parent class, it violates Liskov Substitution Principle.

    True.

    Explanation: If the subclass is more restrictive than the parent class, code that worked with the parent class may not work with the child class. Hence, the substitutability does not exist and LSP has been violated.

    W10.1e Principles → Law of Demeter

    Can explain the Law of Demeter

    Law of Demeter (LoD):

    • An object should have limited knowledge of another object.
    • An object should only interact with objects that are closely related to it.

    Also known as

    • Don’t talk to strangers.
    • Principle of least knowledge

    More concretely, a method m of an object O should invoke only the methods of the following kinds of objects:

    • The object O itself
    • Objects passed as parameters of m
    • Objects created/instantiated in m (directly or indirectly)
    • Objects from the direct association of O

    The following code fragment violates LoD: while b is a ‘friend’ of foo (because foo receives it as a parameter), g is a ‘friend of a friend’ (which should be considered a ‘stranger’), and calling g.doSomething() is analogous to ‘talking to a stranger’.

    void foo(Bar b) {
        Goo g = b.getGoo();
        g.doSomething();
    }
    

    LoD aims to prevent objects from navigating the internal structures of other objects.

    An analogy for LoD can be drawn from Facebook. If Facebook followed LoD, you would not be allowed to see posts of friends of friends, unless they are your friends as well. If Jake is your friend and Adam is Jake’s friend, you should not be allowed to see Adam’s posts unless Adam is a friend of yours as well.

    Explain the Law of Demeter using code examples. You are to make up your own code examples. Take Minesweeper as the basis for your code examples.

    Let us take the Logic class as an example. Assume that it has the following operation.

    setMinefield(Minefield mf): void

    Consider the following calls that can happen inside this operation (a sketch of the second case is given after this list).

    • mf.init();: this does not violate LoD, since LoD allows calling operations of parameters received.
    • mf.getCell(1,3).clear();: this violates LoD, because Logic is handling Cell objects deep inside Minefield. Instead, it should be mf.clearCellAt(1,3);
    • timer.start();: this does not violate LoD, because timer appears to be an internal component (i.e. a variable) of Logic itself.
    • Cell c = new Cell(); c.init();: this does not violate LoD, because c was created inside the operation.
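    A minimal sketch of that second case, using hypothetical Minefield and Cell members based on the description above (the actual Minesweeper code may differ):

    class Cell {
        void clear() {
            // clears this cell
        }
    }

    class Minefield {
        private Cell[][] cells;

        Minefield(int rows, int cols) {
            cells = new Cell[rows][cols];
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    cells[r][c] = new Cell();
                }
            }
        }

        Cell getCell(int row, int col) {
            return cells[row][col];
        }

        void clearCellAt(int row, int col) {
            cells[row][col].clear(); // Minefield manages its own cells
        }
    }

    class Logic {
        private Minefield mf = new Minefield(10, 10);

        void demo() {
            mf.getCell(1, 3).clear(); // violates LoD: Logic talks to a 'friend of a friend' (a Cell)
            mf.clearCellAt(1, 3);     // follows LoD: Logic talks only to its direct associate, Minefield
        }
    }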

    This violates Law of Demeter.

    void foo(Bar b) {
        Goo g =  new Goo();
        g.doSomething();
    }
    

    False

    Explanation: The line g.doSomething() does not violate LoD because it is OK to invoke methods of objects created within a method.

    Pick the odd one out.

    • a. Law of Demeter.
    • b. Don’t add people to a late project.
    • c. Don’t talk to strangers.
    • d. Principle of least knowledge.
    • e. Coupling.

    (b)

    Explanation: Law of Demeter, which aims to reduce coupling, is also known as ‘Don’t talk to strangers’ and ‘Principle of least knowledge’.

    W10.1f Principles → Interface Segregation Principle

    Can explain interface segregation principle

    Interface Segregation Principle (ISP): No client should be forced to depend on methods it does not use.

    The Payroll class should not depend on the AdminStaff class because it does not use the arrangeMeeting() method. Instead, it should depend on the SalariedStaff interface.

    public class Payroll {
        //...    
        private void adjustSalaries(AdminStaff adminStaff){ //violates ISP
            //...
        }
    
    }
    
    public class Payroll {
        //...    
        private void adjustSalaries(SalariedStaff staff){ //does not violate ISP
            //...
        }
    }
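    A hedged sketch of the rest of the arrangement described above (the members of SalariedStaff are not shown in the handout; adjustSalary(int) is assumed here purely for illustration):

    // The narrow interface containing only what Payroll actually needs.
    interface SalariedStaff {
        void adjustSalary(int byPercent); // assumed method, for illustration only
    }

    // AdminStaff offers salary-related methods plus others that Payroll never uses.
    class AdminStaff implements SalariedStaff {
        @Override
        public void adjustSalary(int byPercent) {
            // adjust this staff member's salary
        }

        public void arrangeMeeting() {
            // not needed by Payroll, hence excluded from SalariedStaff
        }
    }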
    

    W10.1g Principles → Dependency Inversion Principle

    Can explain dependency inversion principle (DIP)

    Dependency Inversion Principle:

    1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
    2. Abstractions should not depend on details. Details should depend on abstractions.

    Example:

    In design (a), the higher-level class Payroll depends on the lower-level class Employee, a violation of DIP. In design (b), both Payroll and Employee depend on the Payee interface (note that inheritance is a dependency).

    Design (b) is more flexible (and less coupled) because now the Payroll class need not change when the Employee class changes.
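    A minimal sketch of design (b), assuming Payee exposes an operation such as getPay() (the diagram's actual operations are not shown here, so the names below are illustrative):

    import java.util.List;

    // The abstraction both sides depend on.
    interface Payee {
        double getPay(); // assumed operation, for illustration only
    }

    // The low-level detail depends on the abstraction by implementing it.
    class Employee implements Payee {
        @Override
        public double getPay() {
            return 3000.0;
        }
    }

    // The high-level module depends only on the abstraction.
    class Payroll {
        double totalPay(List<Payee> payees) {
            double total = 0;
            for (Payee p : payees) {
                total += p.getPay();
            }
            return total;
        }
    }

    With this arrangement, a new Payee implementation (e.g. a Contractor class) can be handled by Payroll without any change to the Payroll class.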

    Which of these statements is true about the Dependency Inversion Principle?

    • a. It can complicate the design/implementation by introducing extra abstractions, but it has some benefits.
    • b. It is often used during testing, to replace dependencies with mocks.
    • c. It reduces dependencies in a design.
    • d. It advocates making higher level classes depend on lower level classes.

    (a)

    Explanation: Replacing dependencies with mocks is Dependency Injection, not DIP. DIP does not reduce dependencies, rather, it changes the direction of dependencies. Yes, it can introduce extra abstractions but often the benefit can outweigh the extra complications.

    W10.1h Principles → SOLID Principles

    Can explain SOLID Principles

    The five OOP principles given below are known as SOLID Principles (an acronym made up of the first letter of each principle):

    Principles →

    Single Responsibility Principle

    Single Responsibility Principle (SRP): A class should have one, and only one, reason to change. -- Robert C. Martin

    If a class has only one responsibility, it needs to change only when there is a change to that responsibility.

    Consider a TextUi class that does parsing of the user commands as well as interacting with the user. That class needs to change when the formatting of the UI changes as well as when the syntax of the user command changes. Hence, such a class does not follow the SRP.

    Gather together the things that change for the same reasons. Separate those things that change for different reasons. -- Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
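    As a sketch of how the TextUi example could be brought in line with SRP, the two responsibilities can be separated into two classes (the class and method names below are hypothetical, not from the handout):

    // Violates SRP: two reasons to change (UI formatting, command syntax).
    class TextUi {
        void showMessage(String message) { /* changes when formatting changes */ }
        String parseCommandWord(String userInput) { /* changes when syntax changes */ return userInput.split(" ")[0]; }
    }

    // Follows SRP: each class now has a single reason to change.
    class Ui {
        void showMessage(String message) { /* formatting concerns only */ }
    }

    class Parser {
        String parseCommandWord(String userInput) { /* syntax concerns only */ return userInput.split(" ")[0]; }
    }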

    Principles →

    Open-Closed Principle

    The Open-Closed Principle aims to make a code entity easy to adapt and reuse without needing to modify the code entity itself.

    Open-Closed Principle (OCP): A module should be open for extension but closed for modification. That is, modules should be written so that they can be extended, without requiring them to be modified. -- proposed by Bertrand Meyer

    In object-oriented programming, OCP can be achieved in various ways. This often requires separating the specification (i.e. interface) of a module from its implementation.

    In the design given below, the behavior of the CommandQueue class can be altered by adding more concrete Command subclasses. For example, by including a Delete class alongside List, Sort, and Reset, the CommandQueue can now perform delete commands without modifying its code at all. That is, its behavior was extended without having to modify its code. Hence, it was open to extensions, but closed to modification.

    The behavior of a Java generic class can be altered by passing it a different class as a parameter. In the code below, the ArrayList class behaves as a container of Students in one instance and as a container of Admin objects in the other instance, without having to change its code. That is, the behavior of the ArrayList class is extended without modifying its code.

    ArrayList<Student> students = new ArrayList<>();
    ArrayList<Admin> admins = new ArrayList<>();

    Which of these is closest to the meaning of the open-closed principle?

    (a)

    Explanation: Please refer to the handout for the definition of OCP.

    Principles →

    Liskov Substitution Principle

    Liskov Substitution Principle (LSP): Derived classes must be substitutable for their base classes. -- proposed by Barbara Liskov (covered in detail in W10.1d above)

    Principles →

    Interface Segregation Principle

    Interface Segregation Principle (ISP): No client should be forced to depend on methods it does not use. (covered in detail in W10.1f above)

    Principles →

    Dependency Inversion Principle

    Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions. (covered in detail in W10.1g above)

    W10.1i Principles → Brooks' Law

    Can explain Brooks' law

    Brooks' Law: Adding people to a late project will make it later. -- Fred Brooks (author of The Mythical Man-Month)

    Explanation: The additional communication overhead will outweigh the benefit of adding extra manpower, especially if done near to a deadline.

    Does Brooks' law apply to a school project? Justify.

    Yes. Adding a new student to a project team can result in a slow-down of the project for a short period. This is because the new member needs time to learn the project and existing members will have to spend time helping the new guy get up to speed. If the project is already behind schedule and near a deadline, this could delay the delivery even further.

    Which one of these (all attributed to Fred Brooks, the author of the famous SE book The Mythical Man-Month) is called Brooks' law?

    • a. All programmers are optimists.
    • b. Good judgement comes from experience, and experience comes from bad judgement.
    • c. The bearing of a child takes nine months, no matter how many women are assigned.
    • d. Adding more manpower to an already late project makes it even later.

    (d)

    [W10.2] Design Patterns: Basics


    Introduction

    W10.2a Design → Design Patterns → Introduction → What

    Can explain design patterns

    Design Pattern: An elegant reusable solution to a commonly recurring problem within a given context in software design.

    In software development, there are certain problems that recur in a certain context.

    Some examples of recurring design problems:

    Design context: Assembling a system that makes use of other existing systems implemented using different technologies.
    Recurring problem: What is the best architecture?

    Design context: The UI needs to be updated when the data in the application back end changes.
    Recurring problem: How to initiate an update to the UI when data changes, without coupling the back end to the UI?

    After repeated attempts at solving such problems, better solutions are discovered and refined over time. These solutions are known as design patterns, a term popularized by the seminal book Design Patterns: Elements of Reusable Object-Oriented Software by the so-called "Gang of Four" (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.

    Which one of these describes the ‘software design patterns’ concept best?

    (b)

    W10.2b Design → Design Patterns → Introduction → Format

    Can explain design patterns format

    The common format to describe a pattern consists of the following components:

    • Context: The situation or scenario where the design problem is encountered.
    • Problem: The main difficulty to be resolved.
    • Solution: The core of the solution. It is important to note that the solution presented only includes the most general details, which may need further refinement for a specific context.
    • Anti-patterns (optional): Commonly used solutions, which are usually incorrect and/or inferior to the Design Pattern.
    • Consequences (optional): Identifying the pros and cons of applying the pattern.
    • Other useful information (optional): Code examples, known uses, other related patterns, etc.

    When we describe a pattern, we must also specify anti-patterns.

    False.

    Explanation: Anti-patterns are related to patterns, but they are not a ‘must have’ component of a pattern description.


    Singleton pattern

    W10.2c Design → Design Patterns → Singleton → What

    Can explain the Singleton design pattern

    Context

    Certain classes should have no more than one instance (e.g. the main controller class of the system). These single instances are commonly known as singletons.

    Problem

    A normal class can be instantiated multiple times by invoking the constructor.

    Solution

    Make the constructor of the singleton class private, because a public constructor will allow others to instantiate the class at will. Provide a public class-level method to access the single instance.


    We use the Singleton pattern when

    (c)

    W10.2d Design → Design Patterns → Singleton → Implementation

    Can apply the Singleton design pattern

    Here is the typical implementation of how the Singleton pattern is applied to a class:

    class Logic {
        private static Logic theOne = null;
    
        private Logic() {
            ...
        }
    
        public static Logic getInstance() {
            if (theOne == null) {
                theOne = new Logic();
            }
            return theOne;
        }
    }
    

    Notes:

    • The constructor is private, which prevents instantiation from outside the class.
    • The single instance of the singleton class is maintained by a private class-level variable.
    • Access to this object is provided by a public class-level operation getInstance() which instantiates a single copy of the singleton class when it is executed for the first time. Subsequent calls to this operation return the single instance of the class.

    If Logic were not a Singleton class, an object would be created like this:

    Logic m = new Logic();
    

    But now, the Logic object needs to be accessed like this:

    Logic m = Logic.getInstance();
    

    W10.2e Design → Design Patterns → Singleton → Evaluation

    Can decide when to apply Singleton design pattern

    Pros:

    • easy to apply
    • effective in achieving its goal with minimal extra work
    • provides an easy way to access the singleton object from anywhere in the code base

    Cons:

    • The singleton object acts like a global variable that increases coupling across the code base.
    • In testing, it is difficult to replace Singleton objects with stubs (static methods cannot be overridden)
    • In testing, singleton objects carry data from one test to another even when we want each test to be independent of the others.

    Given there are some significant cons, it is recommended that you apply the Singleton pattern when, in addition to requiring only one instance of a class, there is a risk of creating multiple objects by mistake, and creating such multiple objects has real negative consequences.


    Facade pattern

    W10.2f Design → Design Patterns → Facade Pattern → What

    Can explain the Facade design pattern

    Context

    Components need to access functionality deep inside other components.

    The UI component of a Library system might want to access functionality of the Book class contained inside the Logic component.

    Problem

    Access to the component should be allowed without exposing its internal details. e.g. the UI component should access the functionality of the Logic component without knowing that it contained a Book class within it.

    Solution

    Include a Façade class that sits between the component internals and users of the component such that all access to the component happens through the Facade class.

    The following class diagram applies the Façade pattern to the Library System example. The LibraryLogic class is the Facade class.
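    A minimal sketch of the idea in code, assuming LibraryLogic exposes a simple operation backed by the internal Book class (the method names below are illustrative, not from the handout):

    import java.util.ArrayList;
    import java.util.List;

    // Internal to the Logic component; not exposed to the UI.
    class Book {
        private final String title;

        Book(String title) {
            this.title = title;
        }

        String getTitle() {
            return title;
        }
    }

    // Facade: the single entry point into the Logic component.
    class LibraryLogic {
        private final List<Book> books = new ArrayList<>();

        void addBook(String title) {    // the UI calls this ...
            books.add(new Book(title)); // ... without ever knowing about Book
        }

        List<String> getBookTitles() {
            List<String> titles = new ArrayList<>();
            for (Book b : books) {
                titles.add(b.getTitle());
            }
            return titles;
        }
    }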

    Is the design below likely to use the Facade pattern?

    True.

    Facade is clearly visible (Storage is the << Facade >> class).


    Command pattern

    W10.2g Design → Design Patterns → Command Pattern → What

    Can explain the Command design pattern

    Context

    A system is required to execute a number of commands, each doing a different task. For example, a system might have to support Sort, List, Reset commands.

    Problem

    It is preferable that some part of the code executes these commands without having to know each command type. e.g., there can be a CommandQueue object that is responsible for queuing commands and executing them without knowledge of what each command does.

    Solution

    The essential element of this pattern is to have a general <<Command>> object that can be passed around, stored, executed, etc without knowing the type of command (i.e. via polymorphism).

    Let us examine an example application of the pattern first:

    In the example solution below, the CommandCreator creates List, Sort, and Reset Command objects and adds them to the CommandQueue object. The CommandQueue object treats them all as Command objects and performs the execute/undo operation on each of them without knowledge of the specific Command type. When executed, each Command object will access the DataStore object to carry out its task. The Command class can also be an abstract class or an interface.

    The general form of the solution is as follows.

    The <<Client>> creates a <<ConcreteCommand>> object, and passes it to the <<Invoker>>. The <<Invoker>> object treats all commands as a general <<Command>> type. <<Invoker>> issues a request by calling execute() on the command. If a command is undoable, <<ConcreteCommand>> will store the state for undoing the command prior to invoking execute(). In addition, the <<ConcreteCommand>> object may have to be linked to any <<Receiver>> of the command before it is passed to the <<Invoker>>. Note that an application of the command pattern does not have to follow the structure given above.
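    A minimal sketch of the general form, showing execute() only (undo support and the <<Receiver>> link are omitted for brevity; the class names follow the roles described above):

    import java.util.ArrayDeque;
    import java.util.Queue;

    // The general Command type the invoker depends on.
    interface Command {
        void execute();
    }

    // Concrete commands; each knows how to carry out its own task.
    class ListCommand implements Command {
        @Override
        public void execute() {
            System.out.println("listing items...");
        }
    }

    class SortCommand implements Command {
        @Override
        public void execute() {
            System.out.println("sorting items...");
        }
    }

    // Invoker: queues and executes commands without knowing their concrete types.
    class CommandQueue {
        private final Queue<Command> queue = new ArrayDeque<>();

        void add(Command c) {
            queue.add(c);
        }

        void executeAll() {
            while (!queue.isEmpty()) {
                queue.poll().execute(); // resolved polymorphically at runtime
            }
        }
    }

    A client would create the concrete Command objects and hand them to the invoker, e.g. queue.add(new SortCommand()); followed by queue.executeAll();.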


    Abstraction Occurrence pattern

    W10.2h Design → Design Patterns → Abstraction Occurrence Pattern → What

    Can explain the Abstraction Occurrence design pattern

    Context

    There is a group of similar entities that appears to be ‘occurrences’ (or ‘copies’) of the same thing, sharing lots of common information, but also differing in significant ways.

    In a library, there can be multiple copies of the same book title. Each copy shares common information such as the book title, author, ISBN, etc. However, there are also significant differences, like the purchase date and the barcode number (assumed to be unique for each copy of the book).

    Other examples:

    • Episodes of the same TV series
    • Stock items of the same product model (e.g. TV sets of the same model).

    Problem

    Representing the objects mentioned previously as a single class would be problematic because it results in duplication of data which can lead to inconsistencies in data (if some of the duplicates are not updated consistently).

    Take for example the problem of representing books in a library. Assume that there could be multiple copies of the same title, bearing the same ISBN number, but different serial numbers.

    The above solution requires common information to be duplicated by all instances. This will not only waste storage space, but also creates a consistency problem. Suppose that after creating several copies of the same title, the librarian realized that the author name was wrongly spelt. To correct this mistake, the system needs to go through every copy of the same title to make the correction. Also, if a new copy of the title is added later on, the librarian (or the system) has to make sure that all information entered is the same as the existing copies to avoid inconsistency.

    Anti-pattern

    Refer to the same Library example given above.

    The design above segregates the common and unique information into a class hierarchy. Each book title is represented by a separate class with common data (i.e. Name, Author, ISBN) hard-coded in the class itself. This solution is problematic because each book title is represented as a class, resulting in thousands of classes (one for each title). Every time the library buys new books, the source code of the system will have to be updated with new classes.

    Solution

    Let a copy of an entity (e.g. a copy of a book) be represented by two objects instead of one, separating the common and unique information into two classes to avoid duplication.

    Given below is how the pattern is applied to the Library example:

    Here's a more generic example:

    The general solution:

    The << Abstraction >> class should hold all common information, and the unique information should be kept by the << Occurrence >> class. Note that ‘Abstraction’ and ‘Occurrence’ are not class names, but roles played by each class. Think of this diagram as a meta-model (i.e. a ‘model of a model’) of the BookTitle-BookCopy class diagram given above.
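    A sketch of the BookTitle-BookCopy split in code (the attribute names are taken from the description above; the constructors and field visibility are assumptions):

    // <<Abstraction>>: holds information common to all copies of a title.
    class BookTitle {
        private final String name;
        private final String author;
        private final String isbn;

        BookTitle(String name, String author, String isbn) {
            this.name = name;
            this.author = author;
            this.isbn = isbn;
        }
    }

    // <<Occurrence>>: holds information unique to each physical copy.
    class BookCopy {
        private final BookTitle title; // link to the shared information
        private final String serialNumber;

        BookCopy(BookTitle title, String serialNumber) {
            this.title = title;
            this.serialNumber = serialNumber;
        }
    }

    Correcting a misspelt author name now means changing a single BookTitle object; every BookCopy linked to it sees the corrected value.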

    Which pairs of classes are likely to be the << Abstraction >> and the << Occurrence >> of the abstraction occurrence pattern?

    1. CarModel, Car. (Here CarModel represents a particular model of a car produced by the car manufacturer. E.g. BMW R4300)
    2. Car, Wheel
    3. Club, Member
    4. TeamLeader, TeamMember
    5. Magazine (E.g. ReadersDigest, PCWorld), MagazineIssue

    One of the key things to keep in mind is that the << Abstraction >> does not represent a real entity. Rather, it represents some information common to a set of objects. A single real entity is represented by an object of << Abstraction >> type and << Occurrence >> type.

    Before applying the pattern, some attributes have the same values for multiple objects. For example, w.r.t. the BookTitle-BookCopy example given in this handout, values of attributes such as book_title, ISBN are exactly the same for copies of the same book.

    After applying the pattern, the Abstraction and the Occurrence classes together represent one entity. It is like one class has been split into two. For example, a BookTitle object and a BookCopy object combines to represent an actual Book.

    1. CarModel, Car: Yes
    2. Car, Wheel: No. Wheel is a ‘part of’ Car. A wheel is not an occurrence of Car.
    3. Club, Member: No. this is a ‘part of’ relationship.
    4. TeamLeader, TeamMember: No. A TeamMember is not an occurrence of a TeamLeader or vice versa.
    5. Magazine, MagazineIssue: Yes.

    Which one of these is most suited for an application of the Abstraction Occurrence pattern?

    (a)

    Explanation:

    (a) Stagings of a drama are ‘occurrences’ of the drama. They have many attributes common (e.g., Drama name, producer, cast, etc.) but some attributes are different (e.g., venue, time).

    (b) Students are not occurrences of a Teacher or vice versa

    (c) Module, Exam, Assignment are distinct entities with associations among them. But none of them can be considered an occurrence of another.

    [W10.3] Test Case Design

    W10.3a Quality Assurance → Test Case Design → Introduction → What

    Can explain the need for deliberate test case design

    Except for trivial SUTs (SUT: software under test), exhaustive testing is not practical because such testing often requires a massive/infinite number of test cases.

    Consider the test cases for adding a string object to a collection:

    • Add an item to an empty collection.
    • Add an item when there is one item in the collection.
    • Add an item when there are 2, 3, .... n items in the collection.
    • Add an item that has an English, a French, a Spanish, ... word.
    • Add an item that is the same as an existing item.
    • Add an item immediately after adding another item.
    • Add an item immediately after system startup.
    • ...

    Exhaustive testing of this operation can take many more test cases.

    Program testing can be used to show the presence of bugs, but never to show their absence!
    --Edsger Dijkstra

    Every test case adds to the cost of testing. In some systems, a single test case can cost thousands of dollars e.g. on-field testing of flight-control software. Therefore, test cases need to be designed to make the best use of testing resources. In particular:

    • Testing should be effective, i.e., it finds a high percentage of existing bugs. For example, a set of test cases that finds 60 defects is more effective than a set that finds only 30 defects in the same system.

    • Testing should be efficient, i.e., it has a high rate of success (bugs found per test case). A set of 20 test cases that finds 8 defects is more efficient than another set of 40 test cases that finds the same 8 defects.

    For testing to be E&E (effective and efficient), each new test case we add should target a potential fault that is not already targeted by existing test cases. There are test case design techniques that can help us improve the E&E of testing.

    Given below is the sample output from a text-based program TriangleDetector that determines whether the three input numbers make up the three sides of a valid triangle. List the test cases you would use to test this software. Two sample test cases are given below.

    C:\> java TriangleDetector
    Enter side 1: 34
    Enter side 2: 34
    Enter side 3: 32
    Can this be a triangle?:  Yes
    Enter side 1:
    

    Sample test cases,

    34,34,34: Yes
    0, any valid, any valid: No
    

    In addition to obvious test cases such as

    • sum of two sides == third,
    • sum of two sides < third ...

    We may also devise some interesting test cases such as the ones depicted below.

    Note that their applicability depends on the context in which the software is operating.

    • Non-integer number, negative numbers, 0, numbers formatted differently (e.g. 13F), very large numbers (e.g. MAX_INT), numbers with many decimal places, empty string, ...
    • Check many triangles one after the other (will the system run out of memory?)
    • Backspace, tab, CTRL+C , …
    • Introduce a long delay between entering data (will the program be affected by, say the screensaver?), minimize and restore window during the operation, hibernate the system in the middle of a calculation, start with invalid inputs (the system may perform error handling differently for the very first test case), …
    • Test on different locale.

    The main point to note is how difficult it is to test exhaustively, even on a trivial system.

    Explain why exhaustive testing is not practical, using the example of testing the newGame() operation in the Logic class of a Minesweeper game.

    Consider this sequence of test cases:

    • Test case 1. Start Minesweeper. Activate newGame() and see if it works.
    • Test case 2. Start Minesweeper. Activate newGame(). Activate newGame() again and see if it works.
    • Test case 3. Start Minesweeper. Activate newGame() three times consecutively and see if it works.
    • ...
    • Test case 267. Start Minesweeper. Activate newGame() 267 times consecutively and see if it works.

    Well, you get the idea. Exhaustive testing of newGame() is not practical.

    Improving the efficiency and effectiveness of test case design can:

    • a. improve the quality of the SUT.
    • b. save money.
    • c. save time spent on test execution.
    • d. save effort on writing and maintaining tests.
    • e. minimize redundant test cases.
    • f. force us to understand the SUT better.

    (a)(b)(c)(d)(e)(f)

    W10.3b Quality Assurance → Testing → Exploratory and Scripted Testing → What

    Can explain exploratory testing and scripted testing

    Here are two alternative approaches to testing software: scripted testing and exploratory testing.

    1. Scripted testing: First write a set of test cases based on the expected behavior of the SUT, and then perform testing based on that set of test cases.

    2. Exploratory testing: Devise test cases on-the-fly, creating new test cases based on the results of the past test cases.

    Exploratory testing is ‘the simultaneous learning, test design, and test execution’ [source: bach-et-explained], whereby the nature of the follow-up test case is decided based on the behavior of the previous test cases. In other words, it involves running the system and trying out various operations. It is called exploratory testing because testing is driven by observations made during testing. Exploratory testing usually starts with areas identified as error-prone, based on the tester’s past experience with similar systems. One tends to conduct more tests for those operations where more faults are found.

    Here is an example thought process behind a segment of an exploratory testing session:

    “Hmm... looks like feature x is broken. This usually means feature n and k could be broken too; we need to look at them soon. But before that, let us give a good test run to feature y because users can still use the product if feature y works, even if x doesn’t work. Now, if feature y doesn’t work 100%, we have a major problem and this has to be made known to the development team sooner rather than later...”

    Exploratory testing is also known as reactive testing, error guessing technique, attack-based testing, and bug hunting.

    Exploratory Testing Explained, an online article by James Bach (an industry thought leader in software testing).

    Scripted testing requires tests to be written in a scripting language; Manual testing is called exploratory testing.

    A) False

    Explanation: “Scripted” means test cases are predetermined. They need not be an executable script. However, exploratory testing is usually manual.

    Which testing technique is better?

    (e)

    Explain the concept of exploratory testing using Minesweeper as an example.

    When we test the Minesweeper by simply playing it in various ways, especially trying out those that are likely to be buggy, that would be exploratory testing.

    W10.3c Quality Assurance → Testing → Exploratory and Scripted Testing → When

    Can explain the choice between exploratory testing and scripted testing

    Which approach is better – scripted or exploratory? A mix is better.

    The success of exploratory testing depends on the tester’s prior experience and intuition. Exploratory testing should be done by experienced testers, using a clear strategy/plan/framework. Ad-hoc exploratory testing by unskilled or inexperienced testers without a clear strategy is not recommended for real-world non-trivial systems. While exploratory testing may allow us to detect some problems in a relatively short time, it is not prudent to use exploratory testing as the sole means of testing a critical system.

    Scripted testing is more systematic, and hence, likely to discover more bugs given sufficient time, while exploratory testing would aid in quick error discovery, especially if the tester has a lot of experience in testing similar systems.

    In some contexts, you will achieve your testing mission better through a more scripted approach; in other contexts, your mission will benefit more from the ability to create and improve tests as you execute them. I find that most situations benefit from a mix of scripted and exploratory approaches. --[source: bach-et-explained]

    Exploratory Testing Explained, an online article by James Bach (an industry thought leader in software testing).

    Scripted testing is better than exploratory testing.

    B) False

    Explanation: Each has pros and cons. Relying on only one is not recommended. A combination is better.

    W10.3d Quality Assurance → Test Case Design → Introduction → Positive vs Negative Test Cases

    Can explain positive and negative test cases

    A positive test case is when the test is designed to produce an expected/valid behavior. A negative test case is designed to produce a behavior that indicates an invalid/unexpected situation, such as an error message.

    Consider testing of the method print(Integer i), which prints the value of i; a small test sketch based on this example is given after the list below.

    • A positive test case: i == new Integer(50)
    • A negative test case: i == null;
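    A possible JUnit 5 sketch of the two test cases (the Printer host class and the expected behavior for a null input are assumptions, since neither is specified above):

    import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    public class PrinterTest {

        @Test
        public void print_validInteger_success() { // positive test case
            assertDoesNotThrow(() -> new Printer().print(Integer.valueOf(50)));
        }

        @Test
        public void print_null_exceptionThrown() { // negative test case
            // Assumption: the spec says print should reject null inputs.
            assertThrows(NullPointerException.class, () -> new Printer().print(null));
        }
    }

    // Hypothetical host class for the print(Integer) method under test.
    class Printer {
        void print(Integer i) {
            System.out.println(i.toString()); // throws NullPointerException for a null input
        }
    }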

    W10.3e Quality Assurance → Test Case Design → Introduction → Black Box vs Glass Box

    Can explain black box and glass box test case design

    Test case design can be of three types, based on how much of the SUT's internal details are considered when designing the test cases:

    • Black-box (aka specification-based or responsibility-based) approach: test cases are designed exclusively based on the SUT’s specified external behavior.

    • White-box (aka glass-box or structured or implementation-based) approach: test cases are designed based on what is known about the SUT’s implementation, i.e. the code.

    • Gray-box approach: test case design uses some important information about the implementation. For example, if the implementation of a sort operation uses different algorithms to sort lists shorter than 1000 items and lists longer than 1000 items, more meaningful test cases can then be added to verify the correctness of both algorithms.


    W10.3f Quality Assurance → Test Case Design → Testing Based on Use Cases

    Can explain test case design for use case based testing

    Use cases can be used for system testing and acceptance testing. For example, the main success scenario can be one test case while each variation (due to extensions) can form another test case. However, note that use cases do not specify the exact data entered into the system. Instead, it might say something like user enters his personal data into the system. Therefore, the tester has to choose data by considering equivalence partitions and boundary values. The combinations of these could result in one use case producing many test cases.

    To increase E&E of testing, high-priority use cases are given more attention. For example, a scripted approach can be used to test high priority test cases, while an exploratory approach is used to test other areas of concern that could emerge during testing.



    Equivalence Partitioning

    W10.3g Quality Assurance → Test Case Design → Equivalence Partitions → What

    Can explain equivalence partitions

    Consider the testing of the following operation.

    isValidMonth(m) : returns true if m (an int) is in the range [1..12]

    It is inefficient and impractical to test this method for all integer values [-MIN_INT to MAX_INT]. Fortunately, there is no need to test all possible input values. For example, if the input value 233 failed to produce the correct result, the input 234 is likely to fail too; there is no need to test both.

    In general, most SUTs do not treat each input in a unique way. Instead, they process all possible inputs in a small number of distinct ways. That means a range of inputs is treated the same way inside the SUT. Equivalence partitioning (EP) is a test case design technique that uses the above observation to improve the E&E of testing.

    Equivalence partition (aka equivalence class): A group of test inputs that are likely to be processed by the SUT in the same way.

    By dividing possible inputs into equivalence partitions we can,

    • avoid testing too many inputs from one partition. Testing too many inputs from the same partition is unlikely to find new bugs. This increases the efficiency of testing by reducing redundant test cases.
    • ensure all partitions are tested. Missing partitions can result in bugs going unnoticed. This increases the effectiveness of testing by increasing the chance of finding bugs.

    W10.3h Quality Assurance → Test Case Design → Equivalence Partitions → Basic

    Can apply EP for pure functions

    Equivalence partitions (EPs) are usually derived from the specifications of the SUT.

    These could be EPs for the isValidMonth example:

    • [MIN_INT ... 0] : below the range that produces true (produces false)
    • [1 … 12] : the range that produces true
    • [13 … MAX_INT] : above the range that produces true (produces false)
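
    For instance, here is a minimal JUnit sketch that picks one representative value from each of the three partitions above (DateUtil is a hypothetical class assumed to implement the isValidMonth specification):

        import static org.junit.jupiter.api.Assertions.assertFalse;
        import static org.junit.jupiter.api.Assertions.assertTrue;
        import org.junit.jupiter.api.Test;

        class IsValidMonthTest {

            @Test
            void isValidMonth_oneRepresentativeValuePerPartition() {
                assertFalse(DateUtil.isValidMonth(-5)); // from [MIN_INT ... 0] -> expect false
                assertTrue(DateUtil.isValidMonth(6));   // from [1 ... 12] -> expect true
                assertFalse(DateUtil.isValidMonth(20)); // from [13 ... MAX_INT] -> expect false
            }
        }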


    When the SUT has multiple inputs, you should identify EPs for each input.

    Consider the method duplicate(String s, int n): String which returns a String that contains s repeated n times.

    Example EPs for s:

    • zero-length strings
    • string containing whitespaces
    • ...

    Example EPs for n:

    • 0
    • negative values
    • ...
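
    A hedged JUnit sketch covering a few of these partitions (StringUtil is a hypothetical class assumed to implement duplicate(String s, int n); the expected outputs assume a straightforward reading of the specification):

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import org.junit.jupiter.api.Test;

        class DuplicateTest {

            @Test
            void duplicate_zeroLengthString_returnsEmptyString() {
                assertEquals("", StringUtil.duplicate("", 3)); // EP for s: zero-length string
            }

            @Test
            void duplicate_zeroRepetitions_returnsEmptyString() {
                assertEquals("", StringUtil.duplicate("ab", 0)); // EP for n: 0
            }

            @Test
            void duplicate_typicalInputs() {
                assertEquals("ababab", StringUtil.duplicate("ab", 3)); // typical values for s and n
            }
        }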

    Note that the values in an EP do not have to be adjacent to each other.

    Consider the method isPrime(int i): boolean that returns true if i is a prime number.

    EPs for i:

    • prime numbers
    • non-prime numbers

    Some inputs have only a small number of possible values and a potentially unique behavior for each value. In those cases we have to consider each value as a partition by itself.

    Consider the method showStatusMessage(GameStatus s): String that returns a unique String for each of the possible value of s (GameStatus is an enum). In this case, each possible value for s will have to be considered as a partition.

    Note that the EP technique is merely a heuristic and not an exact science, especially when applied manually (as opposed to using an automated program analysis tool to derive EPs). The partitions derived depend on how one ‘speculates’ the SUT to behave internally. Applying EP under a glass-box or gray-box approach can yield more precise partitions.

    Consider the EPs given above for the isValidMonth method. A different tester might use these EPs instead:

    • [1 … 12] : the range that produces true
    • [all other integers] : the range that produces false

    Some more examples:

    Specification: isValidFlag(String s): boolean
    Returns true if s is one of ["F", "T", "D"]. The comparison is case-sensitive.
    Equivalence partitions: ["F"] ["T"] ["D"] ["f", "t", "d"] [any other string] [null]

    Specification: squareRoot(String s): int
    Pre-conditions: s represents a positive integer
    Returns the square root of s if the square root is an integer; returns 0 otherwise.
    Equivalence partitions: [s is not a valid number] [s is a negative integer] [s has an integer square root] [s does not have an integer square root]

    Consider this SUT:

    isValidName (String s): boolean

    Description: returns true if s is not null and not longer than 50 characters.

    A. Which one of these is least likely to be an equivalence partition for the parameter s of the isValidName method given above?

    B. If you had to choose 3 test cases from the 4 given below, which one would you leave out based on the EP technique?

    A. (d)

    Explanation: The description does not mention anything about the content of the string. Therefore, the method is unlikely to behave differently for strings consisting of numbers.

    B. (a) or (c)

    Explanation: both belong to the same EP

    W10.3i Quality Assurance → Test Case Design → Equivalence Partitions → Intermediate

    Can apply EP for OOP methods

    When deciding EPs of OOP methods, we need to identify EPs of all data participants that can potentially influence the behaviour of the method, such as,

    • the target object of the method call
    • input parameters of the method call
    • other data/objects accessed by the method such as global variables. This category may not be applicable if using the black box approach (because the test case designer using the black box approach will not know how the method is implemented)

    Consider this method in the DataStack class: push(Object o): boolean

    • Adds o to the top of the stack if the stack is not full.
    • returns true if the push operation was a success.
    • throws
      • MutabilityException if the global flag FREEZE==true.
      • InvalidValueException if o is null.

    EPs:

    • DataStack object: [full] [not full]
    • o: [null] [not null]
    • FREEZE: [true][false]
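
    A minimal JUnit sketch that covers each of these partitions at least once (DataStack, FREEZE, MutabilityException and InvalidValueException are taken from the specification above; the constructor used and the way FREEZE is set are assumptions made for illustration):

        import static org.junit.jupiter.api.Assertions.assertFalse;
        import static org.junit.jupiter.api.Assertions.assertThrows;
        import static org.junit.jupiter.api.Assertions.assertTrue;
        import org.junit.jupiter.api.Test;

        class DataStackPushTest {

            @Test
            void push_stackNotFull_success() {
                DataStack stack = new DataStack(10);        // capacity 10 (assumed constructor)
                assertTrue(stack.push("x"));                // [not full], [not null], [FREEZE == false]
            }

            @Test
            void push_stackFull_returnsFalse() {
                DataStack stack = new DataStack(1);
                stack.push("x");
                assertFalse(stack.push("y"));               // [full] partition (assuming push on a full stack fails)
            }

            @Test
            void push_nullObject_exceptionThrown() {
                DataStack stack = new DataStack(10);
                assertThrows(InvalidValueException.class, () -> stack.push(null)); // [null] partition
            }

            @Test
            void push_frozen_exceptionThrown() {
                DataStack.FREEZE = true;                    // [FREEZE == true] partition (assumed to be a static flag)
                DataStack stack = new DataStack(10);
                assertThrows(MutabilityException.class, () -> stack.push("x"));
                DataStack.FREEZE = false;                   // reset the global flag for the other tests
            }
        }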

    Consider a simple Minesweeper app. What are the EPs for the newGame() method of the Logic component?

    As newGame() does not have any parameters, the only obvious participant is the Logic object itself.

    Note that if the glass-box or the grey-box approach is used, other associated objects that are involved in the method might also be included as participants. For example, Minefield object can be considered as another participant of the newGame() method. Here, the black-box approach is assumed.

    Next, let us identify equivalence partitions for each participant. Will the newGame() method behave differently for different Logic objects? If yes, how will it differ? In this case, yes, it might behave differently based on the game state. Therefore, the equivalence partitions are:

    • PRE_GAME : before the game starts, minefield does not exist yet
    • READY : a new minefield has been created and waiting for player’s first move
    • IN_PLAY : the current minefield is already in use
    • WON, LOST : let us assume the newGame behaves the same way for these two values

    Consider the Logic component of the Minesweeper application. What are the EPs for the markCellAt(int x, int y) method? Note that some of the partitions below represent valid inputs while the rest represent invalid inputs.

    • Logic: PRE_GAME, READY, IN_PLAY, WON, LOST
    • x: [MIN_INT..-1] [0..(W-1)] [W..MAX_INT] (we assume a minefield size of WxH)
    • y: [MIN_INT..-1] [0..(H-1)] [H..MAX_INT]
    • Cell at (x,y): HIDDEN, MARKED, CLEARED

    Boundary Value Analysis

    W10.3j Quality Assurance → Test Case Design → Boundary Value Analysis → What

    Can explain boundary value analysis

    Boundary Value Analysis (BVA) is a test case design heuristic that is based on the observation that bugs often result from incorrect handling of the boundaries of equivalence partitions. This is not surprising, as the end points of boundaries are often used in branching instructions etc., where the programmer can make mistakes.

    For example, the markCellAt(int x, int y) operation could contain code such as if (x > 0 && x <= (W-1)), which involves the boundaries of x’s equivalence partitions.

    BVA suggests that when picking test inputs from an equivalence partition, values near boundaries (i.e. boundary values) are more likely to find bugs.

    Boundary values are sometimes called corner cases.

    Boundary value analysis recommends testing only values that reside on the equivalence class boundary.

    False

    Explanation: It does not recommend testing only those values on the boundary. It merely suggests that values on and around a boundary are more likely to cause errors.

    W10.3k Quality Assurance → Test Case Design → Boundary Value Analysis → How

    Can apply boundary value analysis

    Typically, we choose three values around the boundary to test: one value from the boundary, one value just below the boundary, and one value just above the boundary. The number of values to pick depends on other factors, such as the cost of each test case.

    Some examples:

    Equivalence partition → Some possible boundary values

    • [1-12] → 0, 1, 2, 11, 12, 13
    • [MIN_INT, 0] (MIN_INT is the minimum possible integer value allowed by the environment) → MIN_INT, MIN_INT+1, -1, 0, 1
    • [any non-null String] → empty String, a String of maximum possible length
    • [prime numbers] → no specific boundary
    • [“F”] → no specific boundary
    • [“A”, “D”, “X”] → no specific boundary
    • [non-empty Stack] (assuming a fixed size stack) → Stack with: one element, two elements, no empty spaces, only one empty space
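
    Continuing the isValidMonth example from earlier (DateUtil remains a hypothetical implementation of that specification), a minimal sketch of BVA-based tests around the boundaries of the [1..12] partition:

        import static org.junit.jupiter.api.Assertions.assertFalse;
        import static org.junit.jupiter.api.Assertions.assertTrue;
        import org.junit.jupiter.api.Test;

        class IsValidMonthBoundaryTest {

            @Test
            void isValidMonth_valuesAroundLowerBoundary() {
                assertFalse(DateUtil.isValidMonth(0));  // just below the boundary
                assertTrue(DateUtil.isValidMonth(1));   // on the boundary
                assertTrue(DateUtil.isValidMonth(2));   // just above the boundary
            }

            @Test
            void isValidMonth_valuesAroundUpperBoundary() {
                assertTrue(DateUtil.isValidMonth(11));  // just below the boundary
                assertTrue(DateUtil.isValidMonth(12));  // on the boundary
                assertFalse(DateUtil.isValidMonth(13)); // just above the boundary
            }
        }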

    [W10.4] Quality Assurance


    Testability & Coverage

    W10.4a Quality Assurance → Testing → Introduction → Testability

    Can explain testability

    Testability is an indication of how easy it is to test an SUT. As testability depends a lot on the design and implementation, you should try to increase the testability when you design and implement software. The higher the testability, the easier it is to achieve better quality software.

    W10.4b Quality Assurance → Testing → Test Coverage → What

    Can explain test coverage

    Test coverage is a metric used to measure the extent to which testing exercises the code i.e., how much of the code is 'covered' by the tests.

    Here are some examples of different coverage criteria:

    • Function/method coverage : based on functions executed e.g., testing executed 90 out of 100 functions.
    • Statement coverage : based on the number of lines of code executed e.g., testing executed 23k out of 25k LOC.
    • Decision/branch coverage : based on the decision points exercised e.g., an if statement evaluated to both true and false with separate test cases during testing is considered 'covered'.
    • Condition coverage : based on the boolean sub-expressions, each evaluated to both true and false with different test cases. Condition coverage is not the same as the decision coverage.

    if(x > 2 && x < 44) is considered one decision point but two conditions.

    For 100% branch or decision coverage, two test cases are required:

    • (x > 2 && x < 44) == true : [e.g. x == 4]
    • (x > 2 && x < 44) == false : [e.g. x == 100]

    For 100% condition coverage, three test cases are required

    • (x > 2) == true , (x < 44) == true : [e.g. x == 4]
    • (x < 44) == false : [e.g. x == 100]
    • (x > 2) == false : [e.g. x == 0]
    • Path coverage measures coverage in terms of the possible execution paths through a given part of the code. 100% path coverage means all possible paths have been executed. A commonly used notation for path analysis is the Control Flow Graph (CFG).
    • Entry/exit coverage measures coverage in terms of possible calls to and exits from the operations in the SUT.
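
    To make the difference concrete, here is a small sketch (the class and method names are made up for illustration) of the decision point discussed above and the inputs needed for each criterion:

        class CoverageExample {
            // One decision point made up of two conditions: (x > 2) and (x < 44)
            static boolean inRange(int x) {
                if (x > 2 && x < 44) {
                    return true;
                }
                return false;
            }
        }

        // 100% branch/decision coverage: the if must evaluate to both true and false
        //   inRange(4)   -> decision is true
        //   inRange(100) -> decision is false
        //
        // 100% condition coverage: each condition must evaluate to both true and false
        //   inRange(4)   -> (x > 2) is true,  (x < 44) is true
        //   inRange(100) -> (x > 2) is true,  (x < 44) is false
        //   inRange(0)   -> (x > 2) is false  (x < 44 is not evaluated, due to short-circuiting of &&)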

    Which of these gives us the highest intensity of testing?

    (b)

    Explanation: 100% path coverage implies all possible execution paths through the SUT have been tested. This is essentially ‘exhaustive testing’. While this is very hard to achieve for a non-trivial SUT, it technically gives us the highest intensity of testing. If all tests pass at 100% path coverage, the SUT code can be considered ‘bug free’. However, note that path coverage does not include paths that are missing from the code altogether because the programmer left them out by mistake.

    W10.4c Quality Assurance → Testing → Test Coverage → How

    Can explain how test coverage works

    Measuring coverage is often done using coverage analysis tools. Most IDEs have inbuilt support for measuring test coverage, or at least have plugins that can measure test coverage.

    Coverage analysis can be useful in improving the quality of testing e.g., if a set of test cases does not achieve 100% branch coverage, more test cases can be added to cover missed branches.

    Measuring code coverage in Intellij IDEA

    W10.4d Tools → JUnit → JUnit: Intermediate

    Can use intermediate features of JUnit

    Skim through the JUnit 5 User Guide to see what advanced techniques are available. If applicable, feel free to adopt them.
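
    As one example of such a feature, JUnit 5 supports parameterized tests (via the junit-jupiter-params artifact), which reduce duplication when running the same check against several inputs. A minimal sketch, reusing the hypothetical DateUtil.isValidMonth from earlier:

        import static org.junit.jupiter.api.Assertions.assertTrue;
        import org.junit.jupiter.params.ParameterizedTest;
        import org.junit.jupiter.params.provider.ValueSource;

        class IsValidMonthParameterizedTest {

            @ParameterizedTest
            @ValueSource(ints = {1, 6, 12}) // several values from the valid [1..12] partition
            void isValidMonth_validValues_returnsTrue(int month) {
                assertTrue(DateUtil.isValidMonth(month));
            }
        }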

    W10.4e Quality Assurance → Testing → Test-Driven Development → What

    Can explain TDD

    Test-Driven Development (TDD) advocates writing the tests before writing the SUT, while evolving functionality and tests in small increments. In TDD you first define the precise behavior of the SUT using test cases, and then write the SUT to match the specified behavior. While TDD has its fair share of detractors, there are many who consider it a good way to reduce defects. One big advantage of TDD is that it guarantees the code is testable.
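
    As a rough illustration of one small TDD increment (the Wallet class below is made up for this example): first write a failing test for the next bit of behavior, then write just enough code to make the test pass, and repeat.

        // Step 1: write a test for behavior that does not exist yet; it fails (it may not even compile).
        import static org.junit.jupiter.api.Assertions.assertEquals;
        import org.junit.jupiter.api.Test;

        class WalletTest {

            @Test
            void add_positiveAmount_increasesBalance() {
                Wallet wallet = new Wallet();
                wallet.add(5);
                assertEquals(5, wallet.getBalance());
            }
        }

        // Step 2: write just enough functional code to make the test pass.
        class Wallet {
            private int balance = 0;

            void add(int amount) {
                balance += amount;
            }

            int getBalance() {
                return balance;
            }
        }

        // Step 3: pick the next small piece of behavior (e.g., rejecting negative amounts),
        // write its test first, and repeat the cycle.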

    A) In TDD, we write all the test cases before we start writing functional code.

    B) Testing tools such as JUnit require us to follow TDD.

    A) False

    Explanation: No, not all. We proceed in small steps, writing tests and functional code in tandem, but writing the test before we write the corresponding functional code.

    B) False

    Explanation: They can be used for TDD, but they can be used without TDD too.


    Other QA Techniques

    W10.4f Quality Assurance → Quality Assurance → Introduction → What

    Can explain software quality assurance

    Software Quality Assurance (QA) is the process of ensuring that the software being built has the required levels of quality.

    While testing is the most common activity used in QA, there are other complementary techniques such as static analysis, code reviews, and formal verification.

    W10.4g Quality Assurance → Quality Assurance → Introduction → Validation vs Verification

    Can explain validation and verification

    Quality Assurance = Validation + Verification

    QA involves checking two aspects:

    1. Validation: are we building the right system i.e., are the requirements correct?
    2. Verification: are we building the system right i.e., are the requirements implemented correctly?

    Whether something belongs under validation or verification is not that important. What is more important is both are done, instead of limiting to verification (i.e., remember that the requirements can be wrong too).

    Choose the correct statements about validation and verification.

    • a. Validation: Are we building the right product?, Verification: Are we building the product right?
    • b. It is very important to clearly distinguish between validation and verification.
    • c. The important thing about validation and verification is to remember to pay adequate attention to both.
    • d. Developer-testing is more about verification than validation.
    • e. QA covers both validation and verification.
    • f. A system crash is more likely to be a verification failure than a validation failure.

    (a)(c)(d)(e)(f)

    Explanation:

    Whether something belongs under validation or verification is not that important. What is more important is that we do both.

    Developer testing is more about bugs in code, rather than bugs in the requirements.

    In QA, system testing is more about verification (does the system follow the specification?) and acceptance testing is more about validation (does the system solve the user’s problem?).

    A system crash is more likely to be a bug in the code, not in the requirements.

    W10.4h Quality Assurance → Quality Assurance → Code Reviews → What

    Can explain code reviews

    Code review is the systematic examination of code with the intention of finding where the code can be improved.

    Reviews can be done in various forms. Some examples below:

    • In pair programming

      • As pair programming involves two programmers working on the same code at the same time, there is an implicit review of the code by the other member of the pair.

    Pair Programming:

    Pair programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently. [source: Wikipedia]



    • Pull Request reviews

      • Project management platforms such as GitHub and BitBucket allow the new code to be proposed as Pull Requests and provide the ability for others to review the code in the PR.
    • Formal inspections

      • Inspections involve a group of people systematically examining project artifacts to discover defects. Members of the inspection team play various roles during the process, such as:

        • the author - the creator of the artifact
        • the moderator - the planner and executor of the inspection meeting
        • the secretary - the recorder of the findings of the inspection
        • the inspector/reviewer - the one who inspects/reviews the artifact.

    Advantages of code reviews over testing:

    • They can detect functionality defects as well as other problems such as coding standard violations.
    • They can verify non-code artifacts and incomplete code.
    • They do not require test drivers or stubs.

    Disadvantages:

    • It is a manual process and therefore error-prone.

    W10.4i Quality Assurance → Quality Assurance → Static Analysis → What

    Can explain static analysis

    Static analysis: Static analysis is the analysis of code without actually executing the code.

    Static analysis of code can find useful information such as unused variables, unhandled exceptions, style errors, and statistics. Most modern IDEs come with some inbuilt static analysis capabilities. For example, an IDE can highlight unused variables as you type the code into the editor.
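
    For instance, code such as the following is typically flagged by a static analyzer without the code ever being run (the exact warnings and their names vary from tool to tool):

        class Example {
            int compute(int x) {
                int unused = x * 2;  // typically flagged: local variable is declared but never used
                if (x > 10);         // typically flagged: stray semicolon gives the 'if' an empty body
                    x = x + 1;       // runs unconditionally, despite the misleading indentation
                return x;
            }
        }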

    Higher-end static analysis tools can perform more complex analysis such as locating potential bugs, memory leaks, inefficient code structures, etc.

    Some example static analyzers for Java: CheckStyle, PMD, FindBugs

    Linters are a subset of static analyzers that specifically aim to locate areas where the code can be made 'cleaner'.