
Assignment 3: Test Plan

University of San Diego


CSOL560-05-FA18
October 21, 2018
Table of Contents
INTRODUCTION
LITERATURE REVIEW
UNIT TESTING
    A. Prevent Digital Counterfeiting Tests
    B. Manipulation Prevention Tests
    C. Process Security Test
    D. Low Barriers to Entry Tests
FUNCTIONAL TESTING
    Phase 1: System Overview
    Phase 2: Test Design
        Testing blockchain-related Non-Functional Requirements (NFR)
    Phase 3: Test Planning
    Phase 4: Test Execution and Results Verification
REGRESSION TESTING
VERIFICATION
    Validation Testing Perspectives
        White Box Testing
        Black Box Testing
        Gray Box Testing
    Phases of Validation Testing
        I. Component/Unit Testing
            Smart Contract Testing
            Node testing
        II. Integration Testing
        III. System Testing
            Performance
            Penetration / Security Testing
            Alpha Testing
        IV. User Acceptance Testing
            Beta Testing
MITIGATION
    Stack buffer overrun detection
    Data Execution Prevention (DEP)
    Address Space Layout Randomization (ASLR)
CONCLUSIONS
REFERENCES
INTRODUCTION
The Fusion Engine provides a safe and secure environment for managing high volume
transactions online. The project creates an environment of trust without the need for any
external middle parties. It is a shared, distributed ledger that is replicated and synchronized among the
members of the private peer-to-peer network. The ledger securely records the transactional
history of asset exchanges between nodes of the network in a linear and chronological order.
Every transaction written to the ledger has a timestamp and unique cryptographic signature
associated with it. Once information is stored in the Fusion Engine, it cannot be modified
or tampered with. All of the confirmed and verified transactions are combined into a block and
chained to the most current block to form a blockchain within the Fusion Engine.
The blockchain technology incorporated into the Fusion Engine is decentralized, and its
distributed nodes add complexity to the testing process. The design of the Fusion Engine
further implies that if a bug reaches the production system, it may require a complete code
revision. Using correct testing techniques and methodologies is therefore extremely critical in this
case.
The level of testing is influenced by the customized nature of the Fusion Engine platform,
which is designed for a consortium of organizations. In the case of a private blockchain, we can
simulate all of the testing scenarios internally. Because a private blockchain operates in a
controlled environment, traditional testing methods can be applied, and a detailed test strategy
can be custom designed to match the customized functionality.
A detailed four-phase testing lifecycle was designed around the blockchain-oriented
methodology of the Fusion Engine. Section 2 reviews prior work in blockchain testing. Section 3
contains the complete testing lifecycle we propose. In Section 4 we conclude and outline
possible future testing tasks that can be carried out on our proposed Fusion Engine
solution.

LITERATURE REVIEW
The blockchain technology implemented within the Fusion Engine is a form of data
structure in which relevant information is collected and stored along with additional
validation information. Applications developed on the Fusion Engine's blockchain must
guarantee data integrity and uniqueness to ensure that blockchain-based systems
are trustworthy. In the case of the Fusion Engine, security is a critical requirement for the
system.
The testing suite for the Fusion Engine must include the following:
1) Smart Contract Testing (SCT) ensures that the smart contracts satisfy the contractor's
specifications, comply with the laws of the legal systems involved, and do not include
unfair contract terms.
2) Transaction Testing for the Fusion Engine includes tests such as checking for double
expenditure and ensuring state integrity. Various tools provide automated testing and
verification of smart contracts, which is critical in the testing process. Software hooks
must exist that allow external automated scripts to operate the platform, observe the
outcome, and verify that the outcome matches expectations.

In government manufacturing systems, if these software hooks are absent, then testing
smart contract functionality will be complicated. We will need scripted interactions to
kick the process off, with the test logic developed as part of the contracts.
Because the Fusion Engine incorporates a distributed blockchain methodology, testing is done in
isolation and requires the development of objects capable of effectively simulating
the blockchain. The blockchain methodology satisfies the requirement of reliable transactions
without a centralized management mechanism and protects the environment from
unreliable participants in the network.
A complete software testing life cycle is proposed to test the Fusion Engine. This new
software testing life cycle will utilize a complete set of tests from all perspectives.

UNIT TESTING
Modern software development is not performed in one herculean serial effort but rather
in thousands of individual efforts or ‘units’. Today’s modern software applications are
coherent assemblies of tens of millions of lines of code, comparable in scale to one of the
Great Pyramids where each piece or unit is constructed, tested then joined with the
greater collection (Williams, 2018). Unit testing is a critical first step in developing
properly functioning software.
The key to successful unit testing is in its simplicity. Unit testing is conducted at the
lowest software assembly with each piece of code tested independently with self-
sufficiency (ESJ, 2012). Good unit testing is easy to write, readable, reliable, fast, fully
isolated and easily automatable (Kolodiy, 2018). Unit testing consists of three distinct
phases: code initialization, stimulus application and observation. These phases are also
known as Arrange, Act and Assert, or 'AAA' (Feregrino, 2016).
The Fusion Engine has the functional requirements to prevent digital counterfeiting,
prevent manipulation, be secure, and have low barriers to entry. Unit testing will be
conducted to ensure these functional requirements are properly implemented.

A. Prevent Digital Counterfeiting Tests


Ensuring items are not sold more than once or 'copied' ensures the items are genuine and
not digital counterfeits. Keeping supply chain members and their actions private, coupled
with strong encryption, will prevent unauthorized access to or manipulation of the supply
data.
1) Invalid credentials:
Attempt to log on with invalid encryption keys:
o Pass if not able to log on
o Fail if able to log on
Attempt to log on with no encryption key:
o Pass if not able to log on
o Fail if able to log on
2) Unauthorized database access:
Attempt to log into the database directly:
o Pass if not able to log in
o Fail if able to log in

B. Manipulation Prevention Tests


Data integrity is of prime importance. It is vital that the digital data is unaltered and in the
same state as during initial registration.
1) Data corruption
Attempt to alter digital records:
o Pass if unable to alter or change data files
o Fail if able to alter or change data files
2) Data alteration
Attempt to change electronic files or records:
o Pass if unable to alter or change electronic files or records
o Fail if able to alter or change electronic files or records

C. Process Security Test


Supply chains generate large amounts of data and it is essential that all involved entities
are digitally trustworthy and electronically verifiable.
1) Electronic confirmation security.
Attempt to spoof electronic confirmation.
o Pass if unable to spoof electronic confirmation
o Fail if able to spoof electronic confirmation
2) Attempt to log on with a false identity.
o Pass if not able to log on with false identity
o Fail if able to log on with false identity

D. Low Barriers to Entry Tests


A system that is not available due to technical complexity or cost is not desirable.
Usability by a broad spectrum of users is vital.
1) Affordability test.
Is system reasonably priced?
o Pass if an entry-level company can afford the product
o Fail if an entry-level company cannot afford the product
2) Ease of use test
Are security measures reasonably accessible?
o Pass if devices such as smart phones or tablets can be used
o Fail if devices such as smart phones or tablets cannot be used

FUNCTIONAL TESTING
The proposed solution includes creating a complete software testing life cycle for the
Fusion Engine, which focuses on testing all of the critical components from all critical
perspectives. There will be four phases, as shown in Figure 1.
Figure 1. Block chain-oriented Software (BOS) Testing Lifecycle
Phase 1: System Overview
The first phase of our Fusion Engine testing cycle is the system overview phase. Testers
are brought into the SDLC early so that they have an understanding of all components, and
teams are assigned to specific components. A component map is generated which
contains all the components and subcomponents of the completed system, including all
system interfaces. The component map gives a general understanding of the overall
operational environment of the system.
From this complete component map, a system component map is generated, which
shortlists all the components that pertain to blockchain technology. The shortlisted
components are also mapped into a component diagram, and this defines the
scope of testing. Once the scope of testing is defined, the whole team will have a clear
idea of the required testing and which team is responsible for which component. The
output of Phase 1 is the system component map determining the testing scope.

Phase 2: Test Design


A detailed test strategy will be designed specifically for the blockchain. We will identify
the key components that need verification in the system. A Hyperledger Composer model
will be used to test the Fusion Engine software components. The deliverable of the
second phase will be a detailed-level test strategy.
The Hyperledger Composer model includes an object-oriented modeling language that is
used to define the domain model for a business network definition. Hyperledger Composer CTO
files include:
1) A single namespace definition; all resource declarations within the file are implicitly tied
to this namespace.
2) A set of resource definitions, encompassing assets, transactions, participants, and
events.
3) Optional import declarations that bring in resources from other
namespaces.
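A minimal CTO file following this structure might look as follows; the namespace and resource names are illustrative, not the Fusion Engine's actual model:

```text
namespace org.example.fusion

asset TrackedItem identified by itemId {
  o String itemId
  o String description
  --> Supplier owner
}

participant Supplier identified by supplierId {
  o String supplierId
  o String name
}

transaction TransferItem {
  --> TrackedItem item
  --> Supplier newOwner
}
```

The `o` lines declare primitive-typed properties, while `-->` lines declare relationships to other resources in the registry.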
The model for a business network will be defined in a file with the following attributes:
namespace, resources, and imports from other namespaces, as required:
• Modeling a network: Hyperledger Composer modeling only allows working with a
single model at a time.
• Instantiating the model: The assets and participants are defined by the model, but the
registries start out empty. When the business network is initiated, both the
asset and participant registries are empty, so we need to create asset and
participant instances and register them in the registries.
We instantiate and test the model using the following steps:
• Business network testing: Models are great at acting as a blueprint for the Fusion
Engine, but a model of an asset is not considered valid unless it produces an actual
test result. For the business model, an asset that uses the business network also needs
to be instantiated.
• The asset and participant registries: Instantiated resources are assigned to their
designated registries: asset instances are registered in the asset registry and
participant instances in the participant registry.

The perishable-network model defines a transaction using a JavaScript function in a
library module. This script can be used to instantiate the model and create entries in the
asset and participant registries. The script can also be used to create the business
network from a template more efficiently, avoiding any manual steps. The script
performs three distinct tasks:
1) Initiates the instances of the assets and nodes defined in the model definition.
2) Defines the instances' property parameters.
3) Assigns the instances to their associated registries.
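The three tasks can be sketched with plain JavaScript standing in for the Composer runtime; the registry objects below are simulated stand-ins, not the real Hyperledger Composer API:

```javascript
// Simulated registries: in Hyperledger Composer these would be obtained
// from the runtime; here plain Maps stand in for them.
const assetRegistry = new Map();
const participantRegistry = new Map();

function setupDemo() {
  // 1) Instantiate the resources defined in the model definition.
  const supplier = { $class: 'Supplier', supplierId: 'S-1' };
  const item = { $class: 'TrackedItem', itemId: 'I-1' };

  // 2) Define the instances' property parameters.
  supplier.name = 'Acme Manufacturing';
  item.description = 'Serialized component';
  item.owner = supplier.supplierId;

  // 3) Assign each instance to its associated registry.
  participantRegistry.set(supplier.supplierId, supplier);
  assetRegistry.set(item.itemId, item);
}

setupDemo();
```

Scripting the setup this way removes the manual registration steps and makes the initial registry state reproducible for every test run.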
We create use cases and generate sequence diagrams corresponding to the use cases, then
check whether the requirements have been satisfied. We then identify missing requirement
steps using the sequence diagram flow, with each activity mapped to a specific step.
Identifying missing requirements assists in modifying the use cases.

Testing blockchain-related Non-Functional Requirements (NFR)


We will incorporate agile testing methodologies from the start of development. For instance,
if a requirement is defined for the application to be tolerant of high transactional loads,
then we use a load-testing technique such as multicast and send random data loads
across the defined networks to several nodes at the same time at various intervals.
Planning and prioritization are essential in deciding which NFR tests are to be given
priority in the event of time or resource constraints. Having the proper environment is
also essential. For instance, if a system is tested for a large transactions-per-second load
on two CPUs and four cores, but production runs on 4 CPUs and eight cores, the test can
produce misleading outcomes: the results may look good, or they may look bad for the
wrong reason, wasting time and workforce chasing a nonexistent application issue.
Recording NFR results is essential because we can prove our system
performance and revisit the data in the future when we learn something new.
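A hedged sketch of the load-testing idea: fire batches of random transactions at several nodes concurrently and record throughput. The nodes here are in-process stubs purely for illustration; a real NFR test would target the actual network:

```javascript
// Simulated node: accepts a transaction after a small async delay.
function makeNode() {
  let accepted = 0;
  return {
    submit: async (tx) => {
      await new Promise((resolve) => setTimeout(resolve, 1));
      accepted += 1;
    },
    count: () => accepted,
  };
}

// Send `perNode` random transactions to every node concurrently and
// report elapsed time, so results can be recorded for future comparison.
async function loadTest(nodes, perNode) {
  const start = Date.now();
  const jobs = [];
  for (const node of nodes) {
    for (let i = 0; i < perNode; i++) {
      jobs.push(node.submit({ payload: Math.random() }));
    }
  }
  await Promise.all(jobs);
  return { elapsedMs: Date.now() - start, total: nodes.length * perNode };
}
```

Persisting each run's report is what makes the NFR results comparable across hardware configurations and over time.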
Phase 3: Test Planning
A low-level view shows how each testing phase is to be executed and provides an
estimate of the number of tests at every level, as well as the specific coverage. We verify
the availability of the system and the testing environments. If the system is not
available, then alternative testing strategies will be planned; these involve setting up a
private blockchain for testing.
An estimate of the type of coverage and the number of tests is provided. Determining
the volume of tests and the type of testing tools and automation is mandatory. For each
required level of testing, we will consider the selection of methodology and tools.
Table 1 gives an example of the testing phases along with the associated recommended
methodology and tools. Table 2 gives an estimate of the number of tests performed at
every testing phase or testing level.

Testing Level           Methodology and Tools
Unit Testing            Test Driven Development
System Testing          Verifying contracts, blocks, updating, etc., through scripts (Black Box)
Integration Testing     Test-Net
Functional UI Testing   Automated tests for front end (Selenium scripts)

Table 1. Testing levels with methodology and tools

Testing Level           Volume of Tests
Unit Testing            2,500
System Testing          1,000
Integration Testing     275
Functional UI Testing   50

Table 2. Testing levels with volumes of tests


Use cases developed in the second phase are mapped to the tests mentioned above,
making sure that we have covered all the test scenarios and included all the user
requirements and user scenarios. The output of Phase 3 is a final test strategy and
a document of test cases.

Phase 4: Test Execution and Results Verification


The fourth phase is the last phase of the proposed blockchain-oriented software testing
life cycle. It involves executing all the tests at every testing level with the documented
methodology and tools from Phase 3. Automated testing scripts will follow a test-driven
development approach on a suitable framework.
Essential testing activities emphasized in this phase are low-level verification and
validation of blocks, smart contracts, and transactions. We also need to test all the third-
party interfaces implemented in the system, as well as the user interface and functional
flows.
The results then need to be consolidated, analyzed, and verified back to the business side.
There has to be a bug report which lists all the defects identified, as well as a detailed test
report stating passed and failed test executions.

Testing Phase           Methodology and Tools
Unit Testing            Test Driven Development
System Testing          Verifying contracts, blocks, updating, etc., through scripts (Black Box)
Integration Testing     Test-Net
Functional UI Testing   Automated tests for front end (Selenium scripts)

Table 3. Testing phases with methodology and tools


The deliverables of Phase 4, produced using the methodology and tools listed in Table 3,
include the testing results and defect reports for the team to use in further processing.
This cycle is repeated until the system works as expected and no critical errors are found
during testing.

REGRESSION TESTING
Regression testing is a process for verifying that changes made by a developer to one or
more portions of the code do not affect other portions of the code.
As an example of regression testing, consider a product with functionality that triggers
confirmation, acceptance, and dispatch emails when the Confirm, Accept, and Dispatch
buttons are clicked.
An issue occurs in the confirmation email, and in order to fix it, some code changes have
to be made. When this occurs, not only the confirmation emails need to be tested; the
acceptance and dispatch emails must also be tested after the code change to ensure the
change has not affected the related functionality.
For this software, we will implement regression testing by verifying that any
modification in the product does not affect the existing modules. A full functionality test
plan will be established, and after every code change or update to the software, regression
testers will follow the regression test plan to ensure all functionality is fully intact.
When a new build is available and has been verified by the functional testing process,
regression testing will be used to ensure the functionality of the previous modules still
works as intended and that the new build or functionality has not affected software
processes, triggers, or workflows. Regression testing will be done using the
administration and user interfaces to test the user experience and functionality of the
software as a whole.
In the event a regression test finds a bug in the system, the bug will be submitted back to
the developers to address. When the fix has been put in place, functionality testing will verify
the fix and regression testing will be performed again to ensure the bug is fixed and no
further flaws have been introduced to affect any other functionality of the software. This
process will be repeated until the software passes regression testing.

VERIFICATION
In order to verify the software application, we must be sure that we are building the
application right, that is, that it conforms to its specifications and design. The verification process includes:
• Testing
• Inspection
• Design analysis
• Specification analysis
• Consistency checking
Our process involves the use of static analysis where we are ensuring the software
application works effectively without actually executing the program. We do this to
provide a program understanding so we all understand the code and ensure it follows all
regulations (Rouse, 2006, para 1).
The following diagram shows the taxonomy of satisfactory software specifications. It
shows all criteria for verification, which include completeness, consistency, feasibility,
and testability.

Figure 2. Taxonomy of satisfactory software specifications


Please see the following flow chart for reference as the verification and validation tests
are completed:

Figure 3 Software Design Verification & Validation

The functional and regression tests must be completed to verify the software application.
Once the software has been verified, it goes to the validation process to ensure that what
we are building/have built is the right product for the customer's needs.
Records of all testing are maintained in the application's repository in compliance with our
records matrix (Hart, 2014, p. 5). This ensures that if anyone needs to see how we
verified the software application, they can find all tests and the results of those tests.

Validation Testing Perspectives


Validation testing will ensure that the final revision of our software will meet the actual
needs of our customer. Our validation process will commence following the conclusion
of the verification process. Validation testing is a subjective process that involves
dynamic testing of the actual program code in black box, white box, and gray box
contexts. These three perspectives assume different levels of understanding the source
code itself (Vasylyna, 2018):
• White Box testing assumes code known
• Black Box testing assumes code is unknown
• Grey Box Testing assumes code is partially known

White Box Testing


White box testing will begin with an understanding of the internal code and structure of
the software. All testers conducting white box testing will possess an understanding of
the programming language used for the API and blockchain implementation. White box
testing will focus primarily on strengthening security, the flow of inputs and outputs
through the application, and improving the design and usability of the application.
Consequently, we will utilize this perspective primarily in the Component/Unit and
Integration Testing phases.

Black Box Testing


Black box testing will test the functionality of the software without looking at the internal
code structure, implementation details, or internal paths of the software. The focus during
black box testing will be on the inputs and outputs of the software without any attention
to the inner workings of the software. Black box testing will be conducted by individuals
from an end-user perspective (users who do not possess an understanding of the internal
source code). This testing perspective is based entirely on the software requirements and
specifications. Although black box testing will be utilized in certain portions of each
phase of testing, it will see extensive employment during the User Acceptance phase,
particularly the Beta release of the software.

Gray Box Testing


Gray box testing combines elements from both white box and black box testing
perspectives. Although gray box testing does not require access to the source code, it
begins with a partial knowledge of the internal workings of the application. Testers will
have an understanding of the system component interactions but will not possess detailed
knowledge about the internal application source code. Gray box testing is considered to
be non-intrusive and unbiased because it analyzes the application from the outside. The
major advantage of gray box testing is that it grants the ability to test both the
presentation layer and the component/unit sides of an application. There will be a clear
distinction between the developers and the testers when conducting gray box testing in
order to minimize the risk of personnel conflicts and their ability to provide valuable
feedback. Gray box testing perspectives will factor primarily into the System Testing
phase.

Figure 4: Validation Testing Perspectives


Phases of Validation Testing
Validation testing will be conducted throughout the SDLC but will primarily be focused
in four distinct phases (Techspirited, 2018):
I. Component/Unit Testing
II. Integration Testing
III. System Testing
IV. User Acceptance Testing
Figure 5: Phases of Validation Testing

I. Component/Unit Testing
Component and unit testing will focus on searching for defects in the software
components and verifying the functionality of the different software components
(modules, objects, classes, etc.) individually. These processes will primarily focus on white
box testing perspectives, with particular attention paid to the following elements:
• Internal security holes
• Broken or poorly structured paths in the coding processes
• The flow of specific inputs through the code
• Expected output
• The functionality of conditional loops
• Testing of each statement, object and function on an individual basis
Smart Contract Testing
The APIs for smart contracts will be validated individually during
Component/Unit Testing and in conjunction with other processes during Integration
Testing. Particular attention will be paid to the method of applying the smart contract,
conditional statement verification, boundary value analysis, decision tables, and behavior-
driven development techniques.
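Boundary value analysis for a smart contract condition can be sketched as follows; the shipment-temperature penalty rule is a hypothetical example, not the actual contract logic:

```javascript
// Hypothetical contract rule: a shipment incurs a penalty
// when its recorded temperature exceeds 8 degrees Celsius.
function incursPenalty(temperatureC) {
  return temperatureC > 8;
}

// Boundary value analysis: exercise values just below, exactly at,
// and just above the contractual threshold.
const boundaryCases = [
  { input: 7.9, expected: false },
  { input: 8.0, expected: false }, // at the boundary: no penalty
  { input: 8.1, expected: true },
];
```

Testing at and around the threshold catches the classic off-by-one defect of writing `>=` where the contract specifies strictly greater than.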
Node testing
Due to the fact that blockchain applications function in a peer-to-peer distributed
network, we must evaluate the network nodes that the blockchain passes through and the
specific protocol for authentication that they are using. The validity of a transaction is
based on a consensus from a majority of these nodes. These tests will independently
validate the consensus of the heterogeneous nodes in order to verify that new information
added to the distributed network is authenticated and valid.
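The node-level consensus check can be sketched as a simple majority vote across heterogeneous nodes; this voting rule is a stand-in for illustration, not the platform's actual consensus protocol:

```javascript
// Each node independently validates the proposed block and votes.
function collectVotes(nodes, block) {
  return nodes.map((node) => node.validate(block));
}

// A block is accepted only when a strict majority of nodes approve it,
// so a minority of unreliable participants cannot commit invalid data.
function hasConsensus(votes) {
  const approvals = votes.filter(Boolean).length;
  return approvals > votes.length / 2;
}
```

A consensus test then mixes honest and faulty validators and checks that acceptance flips exactly when the honest majority is lost.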
II. Integration Testing
The interaction between the different interfaces of the components ensures that the
software will perform as a complete API. The areas addressed during Unit/Component
Testing must now be evaluated in concert with each other and with the computer
operating system, file system, hardware, and any other software system they interact
with. This process will cover some of the same territory covered during the previous
phase and, consequently, primary approaches testing from a white box perspective.

III. System Testing


After the conclusion of Integration Testing, the System Testing phase is ready to
commence. System testing is focused on checking the behavior of the whole system as
defined by the scope of the project, to include the specified requirements of the customer.
This phase is concerned with the functional expectations of the application as a whole
and will primarily approach testing from a gray box perspective.
Performance
Special attention must be paid to performance issues when utilizing blockchain
technology in software applications. Software application lag will be evaluated in context
with details such as network latency based on block size, network size, expected
transaction size, and how long a query takes to return the output with specialized
authentication protocols. It is critical that performance issues are properly addressed prior
to proceeding since they will have one of the largest impacts on the customer’s feedback
during User Acceptance Testing.
Penetration / Security Testing
Security testing is conducted throughout much of the SDLC, typically starting as early as
the analysis and design phases and continuing throughout development and testing. Security
testing becomes even more imperative when the system is conducting supply chain
management transactions with sensitive customer data. Since all blockchain transactions
are cryptographically secured, it is critical to validate the integrity of the distributed
network. If the authenticated identity layer is compromised, there will be no barrier to
unauthorized transactions within the system. Penetration testing will be conducted to
ensure secure blockchain transactions with specific attention on the following elements
(Singh, 2018):
• Access and authentication processes
• Secure hash and consensus algorithms
• Wallet signature methods
• Private keys
• OWASP guidelines for mobile and web applications
• Vulnerability assessments
• Validation of provided information

Alpha Testing
Alpha testing is an internal form of acceptance testing conducted by developers and
operational users in a dynamic testing environment. Each component of the system will
be tested for functionality in an operational state with the most current version of the API.
The Alpha test will be conducted in-house with buy-in from key stakeholders, developers
and representatives from potential end users. Problems encountered with the API will be
corrected on site and addressed during this iterative process. As a collaborative effort
between developers and end-users, Alpha Testing will be approached primarily from a
gray box perspective.

IV. User Acceptance Testing


The final phase of validation will determine whether the software meets the customer's
needs, requirements, and business processes, and whether the software can be handed
over to the client. Client representation is critical to this effort to ensure that client
confidence in the system is well established at delivery. User Acceptance Testing
will be accomplished just prior to the software production/launching stage and will
address the readiness of the product by testing for backups, recovery techniques,
shutdown and resumption, component failure, etc.
Beta Testing
After conclusion of the Alpha Testing phase, the API will be relocated to the customer’s
operating environment to begin Beta Testing. The Beta phase will continue through multiple
iterative versions until the end-state requirements of the client have been addressed. Each
beta version will be tested for functionality by the end users, with all discrepancies
recorded and submitted to the developers for corrective action. As an external acceptance
test, the Beta process will primarily be approached from a Black Box perspective.

MITIGATION
Anticipating and mitigating security threats is critical during software development. To
assist in the software development process, it is crucial to investigate vulnerabilities and
explore mitigation strategies to help build secure applications. Failing to conduct
adequate testing to identify outstanding vulnerabilities affords a potential attacker the
opportunity to compromise the confidentiality, integrity, or availability of sensitive data
(Microsoft, 2011, pg. 2). Vulnerabilities that go unaddressed enable attackers to
potentially run malicious software on a victim’s machine and, in some situations, elevate
permissions to unrestricted levels. Vulnerabilities that are identified post-release can be
mitigated through software updates; however, in order to mitigate such vulnerabilities,
the software designers must first have knowledge of the vulnerability. Modifying the
program code to address a vulnerability requires a whole new round of testing to ensure
no other vulnerabilities are introduced.
While updates are being developed for release, software designers can focus on breaking
the technique that the attackers would use to exploit the vulnerability. “Breaking or
destabilizing these techniques essentially removes a valuable tool from an attacker’s
toolbox and can make exploitation impossible or increase the time and cost of developing
an exploit” (Microsoft, 2011, pg. 3). Microsoft has employed several such exploit
mitigation tactics.

Stack buffer overrun detection


This mitigation is enabled by compiling code with the “/GS” switch. When enabled, the
compiler places a random security cookie on the stack between a function’s local buffers
and critical data such as the return address. When the function completes, the cookie is
checked; a mismatch indicates that a stack-based buffer overrun has corrupted the stack,
and the program terminates safely.
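The cookie check can be illustrated with a conceptual simulation. The real mechanism is emitted by the compiler into native code, so the Python model below is only an analogy: a frame holds a buffer, a random cookie, and a return address, and an unchecked copy that runs past the buffer clobbers the cookie before it can reach the return address:

```python
import secrets

COOKIE = secrets.token_bytes(8)   # per-run random value, like the /GS cookie

class StackFrame:
    """Toy model of a stack frame: buffer, then cookie, then return address."""
    def __init__(self):
        self.buffer = bytearray(16)
        self.cookie = bytes(COOKIE)
        self.return_address = 0xDEADBEEF

    def write(self, data: bytes):
        # An unchecked copy: bytes past the buffer spill into the cookie.
        for i, b in enumerate(data):
            if i < len(self.buffer):
                self.buffer[i] = b
            elif i < len(self.buffer) + len(self.cookie):
                cookie = bytearray(self.cookie)
                cookie[i - len(self.buffer)] = b
                self.cookie = bytes(cookie)

    def on_return(self):
        # The epilogue check: a mismatch means the stack was corrupted.
        if self.cookie != COOKIE:
            raise SystemExit("stack cookie mismatch: terminating safely")

frame = StackFrame()
frame.write(b"A" * 16)        # fits in the buffer: cookie intact
frame.on_return()             # check passes

frame.write(b"A" * 24)        # overflow: clobbers the cookie
try:
    frame.on_return()
except SystemExit as e:
    print(e)                  # stack cookie mismatch: terminating safely
```

Because the cookie sits between the buffer and the critical data, an overrun cannot reach the return address without first disturbing the cookie, which the check detects before the corrupted return address is ever used.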

Data Execution Prevention (DEP)


Prior to the release of Windows XP Service Pack 2, arbitrary code could be injected and
later executed within a program’s memory in regions that were meant to contain only data.
Post SP2, DEP blocks such execution on processors that support the hardware NX bit
(No eXecute) (Microsoft, 2011, pg. 8).
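Conceptually, DEP adds a permission check to every instruction fetch: a page is either executable or data-only, and fetching from a data-only page faults. The toy model below is purely illustrative and not the actual hardware mechanism:

```python
class Page:
    """Toy model of a memory page with an NX-style permission bit."""
    def __init__(self, executable: bool):
        self.executable = executable
        self.contents = b""

def execute(page: Page) -> bool:
    # With DEP/NX, the processor faults on any fetch from a non-executable page.
    if not page.executable:
        raise PermissionError("NX violation: attempted to execute a data page")
    return True

code_page = Page(executable=True)
data_page = Page(executable=False)
data_page.contents = b"\x90\x90\xcc"   # injected shellcode-like bytes

execute(code_page)          # allowed: the page is marked executable
try:
    execute(data_page)      # blocked by the NX check
except PermissionError as e:
    print(e)
```

The injected bytes can still be written into the data page; what DEP removes is the attacker's ability to have them run.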
Address Space Layout Randomization (ASLR)
Many exploitation techniques rely on objects residing at predictable memory addresses.
This insight motivates Address Space Layout Randomization (ASLR), which breaks numerous
exploitation techniques by introducing diversity into the address space layout of a
program. ASLR randomizes the location of objects in memory, making the layout different
across computers and across runs, which prevents an attacker from developing a reliable
exploit by assuming the location of objects in memory.
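The effect can be illustrated with a simple simulation. The address range, alignment, and module size below are illustrative values, not Windows' actual ASLR parameters: an attacker hardcodes an address observed on their own machine, and across many "victim" runs with a freshly randomized base, that guess rarely lands inside the module at all:

```python
import random

MODULE_SIZE = 0x100000      # 1 MiB module (illustrative)

def load_module() -> int:
    """Pick a page-aligned randomized base address, as ASLR does at load time."""
    return random.randrange(0x10000, 0x7FFF0000, 0x1000)

# The attacker hardcodes an address observed on their own machine.
attacker_guess = load_module() + 0x1234

# Count how often the hardcoded guess falls inside the module on other runs.
hits = sum(
    1
    for _ in range(10_000)
    if (base := load_module()) <= attacker_guess < base + MODULE_SIZE
)
print(f"guess landed inside the module in {hits} of 10000 runs")
```

With roughly 2^31 bytes of candidate address space and a 2^20-byte module, the guess lands inside the module only about once in two thousand runs, and even then almost never at the exact offset the exploit needs, so a reliable exploit cannot be built on a fixed address.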

CONCLUSIONS
The focus of this testing plan is on the most relevant issues in state-of-the-art blockchain
software development. We relied on the review of multiple books and articles
highlighting the issues present in Blockchain software testing. From the results of the
analysis, we have proposed a testing plan for blockchain software engineering,
suggesting a focus on collaboration of large teams, testing activities, and specialized tools
for the creation of smart contracts. We also identified the need for new professional
roles, such as a dedicated software tester engaged from the start of the testing
lifecycle, and recommended enhanced security and reliability through testing of critical
blockchain features as well as the overall system.
REFERENCES
bcs.org. (n.d.). Testing of Blockchain. Retrieved from http://www.bcs.org/content/conWebDoc/56020
Boehm, B. W. (1979). Guidelines for verifying and validating software requirements and design specifications. Retrieved from http://csse.usc.edu/TECHRPTS/1979/usccse79-501/usccse79-501.pdf
Enterprise Systems Journal. (2012, September 24). 8 Principles of Better Unit Testing. Retrieved from https://esj.com/Articles/2012/09/24/Better-Unit-Testing.aspx?Page=2
Feregrino, A. (2016, February 22). Unit testing. Retrieved from https://thatcsharpguy.com/post/unit-testing/
Hart. (2014). Software verification and validation process. Retrieved from https://www.sos.state.co.us/pubs/elections/VotingSystems/systemsDocumentation/HartIntercivic/FullEAC-TDP/2-12-QualityAssuranceProgram/QA-Processes/SoftwareVerificationAndValidationProcess-1000560-D01-Redacted.pdf
Infosys. (n.d.). Blockchain Implementations Quality and Validation. Retrieved from https://www.infosys.com/IT-services/validation-solutions/white-papers/Documents/blockchain-implementations-quality-validation.pdf
Kolodiy, S. (n.d.). Unit Tests, How to Write Testable Code and Why It Matters. Retrieved October 16, 2018, from https://www.toptal.com/qa/how-to-write-testable-code-and-why-it-matters
Microsoft. (2011). Mitigating Software Vulnerabilities: How exploit mitigation technologies can help reduce or eliminate risk, prevent attacks and minimize operational disruption due to software vulnerabilities. Retrieved from http://download.microsoft.com/download/5/0/5/505646ED-5EDF-4E23-8E84-6119E4BF82E0/Mitigating_Software_Vulnerabilities.pdf
Rouse, M. (2006). Static analysis. Retrieved from https://searchwindevelopment.techtarget.com/definition/static-analysis
Steve. (2010, November 29). The difference between verification and validation. Retrieved from https://www.easterbrook.ca/steve/2010/11/the-difference-between-verification-and-validation/
usblogs.pwc.com. (n.d.). The blockchain challenge nobody is talking about. Retrieved October 2018, from http://usblogs.pwc.com/emerging-technology/the-blockchain-challenge/
Williams, J. (2018, May 22). Hackers Are Targeting Your Software Supply Chain: A Guide To Securing It. Retrieved from https://www.forbes.com/sites/forbestechcouncil/2018/05/22/hackers-are-targeting-your-software-supply-chain-a-guide-to-securing-it/#6dd767de3510