Analysis of data

Data analysis is the process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting useful information, suggesting conclusions,
and supporting decision making. Data analysis has multiple facets and
approaches, encompassing diverse techniques under a variety of names,
in different business, science, and social science domains.

Data mining is a particular data analysis technique that focuses on modeling and knowledge discovery for predictive rather than purely
descriptive purposes. Business intelligence covers data analysis that relies
heavily on aggregation, focusing on business information. In statistical
applications, some people divide data analysis into descriptive statistics,
exploratory data analysis, and confirmatory data analysis. EDA focuses on
discovering new features in the data and CDA on confirming or falsifying
existing hypotheses. Predictive analytics focuses on application of
statistical or structural models for predictive forecasting or classification,
while text analytics applies statistical, linguistic, and structural techniques
to extract and classify information from textual sources, a species of
unstructured data. All are varieties of data analysis.

Data integration is a precursor to data analysis, and data analysis is closely linked to data visualization and data dissemination. The term data analysis is sometimes used as a synonym for data modeling, which is unrelated to the subject of this article.

Ethical Issues

There are a number of key phrases that describe the system of ethical protections that the contemporary social and medical research establishment has created to better protect the rights of its research participants. The principle of voluntary participation requires
that people not be coerced into participating in research. This is especially
relevant where researchers had previously relied on 'captive audiences'
for their subjects -- prisons, universities, and places like that. Closely
related to the notion of voluntary participation is the requirement of
informed consent. Essentially, this means that prospective research
participants must be fully informed about the procedures and risks
involved in research and must give their consent to participate. Ethical
standards also require that researchers not put participants in a situation
where they might be at risk of harm as a result of their participation.
Harm can be defined as both physical and psychological. There are two
standards that are applied in order to help protect the privacy of research
participants. Almost all research guarantees the participants
confidentiality -- they are assured that identifying information will not be
made available to anyone who is not directly involved in the study. The
stricter standard is the principle of anonymity which essentially means
that the participant will remain anonymous throughout the study -- even
to the researchers themselves. Clearly, the anonymity standard is a
stronger guarantee of privacy, but it is sometimes difficult to accomplish,
especially in situations where participants have to be measured at
multiple time points (e.g., a pre-post study). Increasingly, researchers
have had to deal with the ethical issue of a person's right to service.
Good research practice often requires the use of a no-treatment control
group -- a group of participants who do not get the treatment or program
that is being studied. But when that treatment or program may have
beneficial effects, persons assigned to the no-treatment control may feel
their rights to equal access to services are being curtailed.

Even when clear ethical standards and principles exist, there will be times
when the need to do accurate research runs up against the rights of
potential participants. No set of standards can possibly anticipate every
ethical circumstance. Furthermore, there needs to be a procedure that
assures that researchers will consider all relevant ethical issues in
formulating research plans. To address such needs, most institutions and organizations have established an Institutional Review Board (IRB), a panel of persons who review grant proposals with respect to ethical implications and decide whether additional actions need to be taken to
assure the safety and rights of participants. By reviewing proposals for
research, IRBs also help to protect both the organization and the
researcher against potential legal implications of neglecting to address
important ethical issues of participants.

Footnoting

The traditional footnoting system provides within the text superscripted numerals that direct the reader to references at the bottom
of the page. A bibliography is also provided with this system.

When to footnote

You should indicate the source of quotations, information, ideas or interpretation in a footnote. In the case of information, only substantial
information or possibly contentious statements of fact need to be
documented. For example, the following three sentences would not
normally require footnotes.
(a) Louis IX, also known as St Louis, succeeded to the French throne in
(b) The First Fleet arrived at Botany Bay in January 1788.
(c) Menander’s plays are the sole surviving exemplars of New Comedy.
Footnotes are normally needed for statements such as the following.
(a) Despite Louis IX’s victory at Mansourah, the death of Robert of Artois
and the losses suffered by Robert’s forces in the streets of the town
ensured that any further advance along the Nile would be impossible.
(b) The First Fleet carried 736 convicts, 188 of whom were female.
(c) In contrast to Aristophanes’ fantasy comedies, Menander’s are
grounded in realism – a feature for which ancient commentators admired
him greatly.
How to footnote

When you need to supply a footnote you should insert a superscripted numeral (i.e. above the line) at the end of the relevant clause or
sentence after the punctuation mark. Place the numbered footnote at the
foot of the page and number consecutively throughout the essay. Do not
use endnotes.
A footnote begins with a capital letter and ends with a full stop. Book titles
should be cited as they appear on the title page, not on the front cover or
dust jacket of the text. The first letters of major words are always
capitalized in book titles despite the possible use of lower case on the title
page. With articles, chapters in edited books and unpublished theses (see
footnotes 20 and 21 below), lower case is used except for proper nouns.


1. Title Page. First page of report. Try to find a title that clearly describes
the work you have done and be as precise as possible. Mention your
name, roll number, guide’s (and co-guide’s) name, name of the
department (i.e. Energy Systems Engineering), name of the institute,
place and month and year of the report.
2. Abstract. On a separate page, immediately following the title page,
summarize the main points of the report. Readers who become interested in the report after seeing the title should be able to judge from the abstract whether the report is really of interest to them. So, briefly formulate the
problem that has been investigated, the solutions derived, the results that
have been achieved, and your conclusions. The abstract should not
occupy more than one page (about 150 to 200 words). This page should
precede the content page.
3. Table of Contents (TOC). Should list only those items that follow it
appearing in the following order.
- List of tables (1.1, 1.2, 1.3.., 2.1, 2.2, .. etc.)
- List of figures (1.1, 1.2, 1.3.., 2.1, 2.2, .. etc.)
- Nomenclature : necessary whenever any symbols are used.
This is in order of English (i.e., Roman) letters (Uppercase followed by
lowercase), Symbols in Greek letters (see Appendix for the alphabetical
order of Greek letters), subscripts and superscripts used, Special Symbols,
followed by acronyms (i.e., Abbreviations) if any; everything in
alphabetical order. All entries in nomenclature should have appropriate
units in SI system.
- The chapters (1, 2, … N, followed by the name of the chapter),
- Sections within chapters (e.g. 1.1, 2.4, etc. + name)
- Subsections within sections (e.g. 1.1.1 + name)
- Appendices (I, II, III, IV, .. etc. + name), if any
- References
- Acknowledgements : if you feel like it. Remember that
acknowledgements are in order only in the final report. That is, not in
M.Tech. I and II stage reports, but in the final M.Tech. dissertation.
- The page numbers where they start. Do not include the abstract and the
table of contents itself in the table of contents. The acknowledgements, if
any, should follow the appendices and should be the last page of the
report. Every page of the report other than the title page and abstract
should be numbered. Pages of Table of Contents, Nomenclature, List of
Tables and List of Figures should be numbered with lower case Roman
numerals (i, ii, iii, iv, …etc.). From the first page of the first chapter
onwards, all the pages should be numbered using Hindu-Arabic numerals
(1, 2, 3, … etc.).
4. The Chapters. The number of chapters you need and their contents
strongly depend on the topic selected and the subject matter to be
presented. Roughly the following chapters may be included. However, it is
your own report and you have to structure it according to the flow of
overall logic and organization.

Introduction. In this chapter you formulate the problem that you want to address, the initial goals you had, etc. without going into details. Here you also describe the structure of the rest of your report, indicating which chapter will address which aspect.
Literature Survey. The discussion on the literature may be organized
under a separate chapter & titled suitably. Summarize the literature that
you have read. Rather than literally copying the texts that you have read,
you should present your own interpretation of the theory. This will help
you in developing your own thinking discipline and technical language.
Theory-Oriented Chapters. The basic theory necessary to formulate the
subject matter may be presented under a separate chapter & titled suitably.
Practice-Oriented Chapters. Depending on the work that you have done, it
might be important to write about the system specifications, practical
details, system behaviour and characteristics and cross links of the
selected topic.
Conclusions. This is one of the most important chapters and should be
carefully written. Here you evaluate your study, state which of the initial
goals were reached and which not, mention the strong and weak points of
your work, etc. You may point out the issues recommended for future work.
Each chapter, section, subsection, etc. should have a title. An identical
entry should exist in the TOC.
Each chapter is numbered using Hindu-Arabic numerals: 1, 2, 3, ...
Sections within a chapter are numbered using a two-level scheme,
(chapter no).(section no); for example, sections in chapter 3 are
numbered 3.1, 3.2, 3.3, ...
Subsections within a section are numbered using a three-level scheme,
(chapter no).(section
no).(subsection no); for example, subsections in chapter 3, section 2 are
numbered 3.2.1, 3.2.2, 3.2.3, ... Generally, there should not be a need for sub-subsections!
5. Equations. Each equation should be numbered using a two-level
scheme, (chapter no).(eq no). While typing, the equation numbers should
be flush right. (LaTeX does this by default.) This number (e.g. 2.4, with 2
as chapter number and 4 as equation number) should be used (as Eqn.
2.4) whenever the equation is referred in the text. The equations should
be clearly written. Symbols used in the equations should be explained
immediately after the equation when they are referred first as well as in
the nomenclature. SI units must be used throughout the report. Example:
a = b + c (3.14)
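As a minimal LaTeX sketch (the chapter title and label name here are illustrative, not prescribed by these instructions), chapter-wise equation numbers of the form (chapter no).(eq no), set flush right, can be obtained in the report class with amsmath:

```latex
\documentclass{report}
\usepackage{amsmath}
% Numbers equations as (chapter).(eq); the report class already does
% this by default, but \numberwithin makes the scheme explicit.
\numberwithin{equation}{chapter}

\begin{document}
\chapter{Modelling} % hypothetical chapter

\begin{equation}
  a = b + c
  \label{eq:sum} % refer to it as Eqn.~\ref{eq:sum} in the text
\end{equation}

As Eqn.~\ref{eq:sum} shows, the symbols $a$, $b$ and $c$ would be
explained immediately after the equation and again in the nomenclature.
\end{document}
```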
6. Acronyms. Avoid acronyms (short forms) in the report except the
following standard ones.
Equation(s) : Eq(s), Figure(s) : Fig(s). The words 'Table' and 'Chapter' are
not shortened. If any other acronyms have to be used, list them
separately at the beginning (after nomenclature). Mention the acronym in
the brackets following its full form, whenever it occurs first. The first
word in a sentence is never a short form.
7. Tables and figures. Tables and figures should be numbered and
captioned. Each table or figure should be numbered using a two-level
scheme, (chapter no).(table no) or (chapter no).(figure no). This number (e.g. Table 4.8, or Fig. 3.7) should be used whenever the table or figure is referred to in the text. Each table as well as figure should have a title. An
identical entry should exist in List of Tables or List of Figures respectively.
Title of a table is given at the top of the table following its number. Title of
a figure is given at the bottom of the figure following its number. Tables
and figures should be on separate pages immediately following the page
where they are referred first. Photocopied tables should not be included.
Photocopied figures should be avoided as far as possible and if included
they should be large enough and clear. If taken from any reference, the
reference should be cited within the text as well as at the caption of the
figure or table.
8. References. Each entry in the reference has a label. Any reference
from the main text to the entry should use this label. All references cited
in the text-body should be there in the Reference list and all entries in the
Reference list should be there in the text-body. Established acronyms may be used, e.g. AC, DC, ASME, ASTM, IIT, Jnl, etc., provided there is no likelihood of any confusion.
Labeling. One of the following systems can be used for labeling the cited references.
System 1 : A numeric label arranged in the order of citation in the main text.
This label is used in square brackets or as superscript at the point of
citation, e.g. [34] or 34. The references should be arranged together in
the order of this numeric label.
System 2: A label derived from the authors name and the year of
publication. For entries with 2 authors, include the surnames of both the
authors followed by the year of publication. For entries with multiple
authors, include the surnames of the first author followed by ‘et al.’ and
the year of publication. This label is used in round brackets at the point of
citation, e.g. (Taylor, 1982) or (Taylor et al., 1982) or (Taylor and Morgan,
1982). The references should be arranged together in the alphabetical order of the authors' surnames (1st priority) and the year of publication (2nd priority).
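A small sketch of System 2 labeling, using invented entries (not real publications): the label is derived from the surname list and year, and the reference list is sorted by first author's surname, then year.

```python
# Sketch of "System 2" (author-year) labeling. The entries below are
# invented examples, not real publications.

def citation_label(surnames, year):
    """Return an author-year label such as (Taylor, 1982)."""
    if len(surnames) == 1:
        names = surnames[0]
    elif len(surnames) == 2:
        names = f"{surnames[0]} and {surnames[1]}"
    else:  # three or more authors: first surname followed by 'et al.'
        names = f"{surnames[0]} et al."
    return f"({names}, {year})"

entries = [
    (["Taylor", "Morgan", "Lee"], 1982),
    (["Adams"], 1990),
    (["Taylor"], 1982),
]

# Reference-list order: surname first (1st priority), year second.
ordered = sorted(entries, key=lambda e: (e[0][0], e[1]))

for surnames, year in ordered:
    print(citation_label(surnames, year))
```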
Details of entries. The reference list thus compiled together should be
included after the main text but before the Appendices, if any. In the
reference list, you should provide the details of each entry in the following
manner. These details differ depending on the type of bibliographic entry.
Note the italics.

Bibliography (from Greek βιβλιογραφία, bibliographia, literally "book writing"), as a practice, is the academic study of books as physical, cultural objects; in this sense, it is also known as bibliology (from Greek -λογία, -logia). On the whole, bibliography is not concerned with the literary content of books, but rather the "bookness" of books.

A bibliography, the product of the practice of bibliography, is a systematic list of books and other works such as journal articles.
Bibliographies range from "works cited" lists at the end of books and
articles to complete, independent publications. As separate works, they may be in bound volumes or computerised bibliographic databases. A library catalog, while not referred
to as a bibliography, is bibliographic in nature. Bibliographical works are
almost always considered to be tertiary sources.

Bibliographic works differ in the amount of detail depending on the purpose, and can be generally divided into two categories: enumerative
bibliography (also called compilative, reference or systematic), which
results in an overview of publications in a particular category, and
analytical, or critical, bibliography, which studies the production of
books.[1][2] In earlier times, bibliography mostly focused on books. Now,
both categories of bibliography cover works in other formats including
recordings, motion pictures and videos, graphic objects, databases, CD-ROMs[3] and websites.

A bibliographic index is an "open-end finding guide to the literature of an academic field or discipline (example:
Philosopher's Index), to works of a specific literary form (Biography Index)
or published in a specific format (Newspaper Abstracts), or to the
analyzed contents of a serial publication (New York Times Index). Indexes
of this kind are usually issued in monthly or quarterly paperback
supplements, cumulated annually. Citations are usually listed by author
and subject in separate sections, or in a single alphabetical sequence
under a system of authorized headings collectively known as controlled
vocabulary, developed over time by the indexing service"[1]. "From many
points of view an index is synonymous with a catalogue, the principles of
analysis used being identical, but whereas an index entry merely locates a
subject, a catalogue entry includes descriptive specification of a document
concerned with the subject."

Sampling is that part of statistical practice concerned with the selection of an unbiased or random subset of individual observations within a population of individuals intended to yield some knowledge about the population of concern, especially for the purposes of making predictions based on statistical inference. Sampling is an important aspect of data collection.

Researchers rarely survey the entire population for two reasons (Adèr,
Mellenbergh, & Hand, 2008): the cost is too high, and the population is
dynamic in that the individuals making up the population may change
over time. The three main advantages of sampling are that the cost is
lower, data collection is faster, and since the data set is smaller it is
possible to ensure homogeneity and to improve the accuracy and quality
of the data.

Each observation measures one or more properties (such as weight, location, color) of observable bodies distinguished as independent objects
or individuals. In survey sampling, survey weights can be applied to the
data to adjust for the sample design. Results from probability theory and
statistical theory are employed to guide practice. In business and medical research, sampling is widely used for gathering information about a population.

A probability sampling scheme is one in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. The
sample, and this probability can be accurately determined. The
combination of these traits makes it possible to produce unbiased
estimates of population totals, by weighting sampled units according to
their probability of selection.

Example: We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living
there, and randomly select one adult from each household. (For example,
we can allocate each person a random number, generated from a uniform
distribution between 0 and 1, and select the person with the highest
number in each household). We then interview the selected person and
find their income. People living on their own are certain to be selected, so
we simply add their income to our estimate of the total. But a person
living in a household of two adults has only a one-in-two chance of
selection. To reflect this, when we come to such a household, we would
count the selected person's income twice towards the total. (In effect, the
person who is selected from that household is taken as representing the
person who isn't selected.)
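The household example above can be sketched as follows; the street and its incomes are invented, and the weighting simply multiplies each selected income by the household size (the inverse of the selection probability):

```python
import random

random.seed(0)

# Hypothetical street: each inner list holds the incomes of the adults
# in one household. One adult is selected at random per household, so
# an adult's chance of selection is 1 / (household size).
households = [
    [30000],                  # one adult: selected with certainty
    [25000, 40000],           # two adults: 1-in-2 chance each
    [20000, 20000, 50000],    # three adults: 1-in-3 chance each
]

true_total = sum(sum(h) for h in households)

estimate = 0.0
for incomes in households:
    selected = random.choice(incomes)
    # Weight by the inverse of the selection probability: the selected
    # person's income stands in for every adult in the household.
    estimate += selected * len(incomes)

print(true_total, estimate)
```

Averaged over many repetitions, such weighted estimates are unbiased for the true street total, which is the point of the probability-weighting argument in the text.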

In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each
person's probability is known. When every element in the population does
have the same probability of selection, this is known as an 'equal
probability of selection' (EPS) design. Such designs are also referred to as
'self-weighting' because all sampled units are given the same weight.

Probability sampling includes: Simple Random Sampling, Systematic Sampling, Stratified Sampling, Probability Proportional to Size Sampling,
and Cluster or Multistage Sampling. These various ways of probability
sampling have two things in common:

1. Every element has a known nonzero probability of being sampled.
2. The process involves random selection at some point.
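Three of these schemes can be sketched on a toy population with Python's standard random module (the population and strata here are invented for illustration):

```python
import random

random.seed(42)

population = list(range(100))  # toy population of unit labels 0..99

# Simple random sampling: every subset of size n is equally likely.
srs = random.sample(population, 10)

# Systematic sampling: a random start, then every k-th unit.
k = 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: partition into strata, then sample within each.
strata = {"low": population[:50], "high": population[50:]}
stratified = [unit
              for group in strata.values()
              for unit in random.sample(group, 5)]

print(sorted(systematic))
```

In each scheme every unit has a known nonzero selection probability, and randomness enters at some point — the two properties listed above.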

Nonprobability sampling is any sampling method where some elements of the population have no chance of selection (these are
sometimes referred to as 'out of coverage'/'undercovered'), or where the
probability of selection can't be accurately determined. It involves the
selection of elements based on assumptions regarding the population of
interest, which forms the criteria for selection. Hence, because the
selection of elements is nonrandom, nonprobability sampling does not
allow the estimation of sampling errors. These conditions give rise to
exclusion bias, placing limits on how much information a sample can
provide about the population. Information about the relationship between
sample and population is limited, making it difficult to extrapolate from
the sample to the population.

Example: We visit every household in a given street, and interview the first person to answer the door. In any household with more than one
occupant, this is a nonprobability sample, because some people are more
likely to answer the door (e.g. an unemployed person who spends most of
their time at home is more likely to answer than an employed housemate
who might be at work when the interviewer calls) and it's not practical to
calculate these probabilities.

Nonprobability sampling includes: Accidental Sampling, Quota Sampling and Purposive Sampling. In addition, nonresponse effects may turn any
probability design into a nonprobability design if the characteristics of
nonresponse are not well understood, since nonresponse effectively
modifies each element's probability of being sampled.

A hypothesis (from Greek ὑπόθεσις; plural hypotheses) is a proposed explanation for an observable phenomenon. The term derives from the
Greek, ὑποτιθέναι – hypotithenai meaning "to put under" or "to suppose."
For a hypothesis to be put forward as a scientific hypothesis, the
scientific method requires that one can test it. Scientists generally base
scientific hypotheses on previous observations that cannot satisfactorily
be explained with the available scientific theories. Even though the words
"hypothesis" and "theory" are often used synonymously in common and
informal usage, a scientific hypothesis is not the same as a scientific
theory. A working hypothesis is a provisionally accepted hypothesis.

In a related but distinguishable usage, the term hypothesis is used for the antecedent of a proposition; thus in the proposition "If P, then Q", P
denotes the hypothesis (or antecedent); Q can be called a consequent. P
is the assumption in a (possibly counterfactual) What If question.

The adjective hypothetical, meaning "having the nature of a hypothesis," or "being assumed to exist as an immediate consequence of a hypothesis," can refer to any of these meanings of the term.

In its ancient usage, hypothesis also refers to a summary of the plot of a classical drama.

Concepts and measurement of hypotheses:

Concepts, as abstract units of meaning, play a key role in the development and testing of hypotheses. Concepts are the basic
components of hypotheses. Most formal hypotheses connect concepts by
specifying the expected relationships between concepts. For example, a
simple relational hypothesis such as “education increases income”
specifies a positive relationship between the concepts “education” and
“income.” This abstract or conceptual hypothesis cannot be tested. First,
it must be operationalized or situated in the real world by rules of
interpretation. Consider again the simple hypothesis “Education increases
Income.” To test the hypothesis the abstract meaning of education and
income must be derived or operationalized. The concepts should be
measured. Education could be measured by “years of school completed”
or “highest degree completed” etc. Income could be measured by “hourly
rate of pay” or “yearly salary” etc.
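A sketch of the operationalized test, using invented toy data: once education is measured as "years of school completed" and income as "yearly salary", the abstract hypothesis becomes a claim about two measured variables, and a positive correlation is consistent with it.

```python
from statistics import mean

# Invented toy measurements (not real survey data):
years_of_school = [10, 12, 12, 14, 16, 16, 18, 20]
yearly_salary = [28, 31, 30, 35, 42, 40, 48, 55]  # in thousands

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(years_of_school, yearly_salary)
print(round(r, 3))  # a positive r is consistent with the hypothesis
```

The correlation alone does not prove "education increases income"; it only shows the operationalized variables move together in this sample.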

When a set of hypotheses is grouped together it becomes a type of conceptual framework. When a conceptual framework is complex and
incorporates causality or explanation it is generally referred to as a
theory. According to noted philosopher of science Carl Gustav Hempel “An
adequate empirical interpretation turns a theoretical system into a
testable theory: The hypotheses whose constituent terms have been
interpreted become capable of test by reference to observable
phenomena. Frequently the interpreted hypothesis will be derivative
hypotheses of the theory; but their confirmation or disconfirmation by
empirical data will then immediately strengthen or weaken also the
primitive hypotheses from which they were derived.”[7]

Hempel provides a useful metaphor that describes the relationship between a conceptual framework and the framework as it is observed and
perhaps tested (interpreted framework). “The whole system floats, as it
were, above the plane of observation and is anchored to it by rules of
interpretation. These might be viewed as strings which are not part of the
network but link certain points of the latter with specific places in the
plane of observation. By virtue of those interpretative connections, the
network can function as a scientific theory.”[8] Hypotheses with concepts
anchored in the plane of observation are ready to be tested. In “actual
scientific practice the process of framing a theoretical structure and of
interpreting it are not always sharply separated, since the intended
interpretation usually guides the construction of the theoretician.”[9] It is,
however, “possible and indeed desirable, for the purposes of logical clarification, to separate the two steps conceptually”.

Statistical hypothesis testing

When a possible correlation or similar relation between phenomena is investigated, such as, for example, whether a proposed remedy is
effective in treating a disease, that is, at least to some extent and for
some patients, the hypothesis that a relation exists cannot be examined
the same way one might examine a proposed new law of nature: in such
an investigation a few cases in which the tested remedy shows no effect
do not falsify the hypothesis. Instead, statistical tests are used to
determine how likely it is that the overall effect would be observed if no
real relation as hypothesized exists. If that likelihood is sufficiently small
(e.g., less than 1%), the existence of a relation may be assumed.
Otherwise, any observed effect may as well be due to pure chance.

In statistical hypothesis testing two hypotheses are compared, which are called the null hypothesis and the alternative hypothesis. The null
hypothesis is the hypothesis that states that there is no relation between
the phenomena whose relation is under investigation, or at least not of
the form given by the alternative hypothesis. The alternative hypothesis,
as the name suggests, is the alternative to the null hypothesis: it states
that there is some kind of relation. The alternative hypothesis may take
several forms, depending on the nature of the hypothesized relation; in
particular, it can be two-sided (for example: there is some effect, in a yet
unknown direction) or one-sided (the direction of the hypothesized
relation, positive or negative, is fixed in advance).
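One way to sketch this logic is a one-sided permutation test (the measurements below are invented): under the null hypothesis the group labels do not matter, so shuffling them many times shows how often a difference at least as large as the observed one arises by chance alone.

```python
import random

random.seed(7)

# Invented measurements: outcomes for a treated and an untreated group.
treated = [8.1, 7.9, 8.4, 8.0, 8.6, 8.2]
control = [7.2, 7.5, 7.1, 7.6, 7.3, 7.4]

observed = sum(treated) / len(treated) - sum(control) / len(control)

# Null hypothesis: group labels are irrelevant. Shuffle the pooled data
# repeatedly and record how often the mean difference is at least as
# large as the one actually observed.
pooled = treated + control
count = 0
n_iter = 10000
for _ in range(n_iter):
    random.shuffle(pooled)
    diff = sum(pooled[:6]) / 6 - sum(pooled[6:]) / 6
    if diff >= observed:
        count += 1

p_value = count / n_iter
print(p_value)  # small p: reject the null at a pre-chosen threshold
```

As the text stresses, the rejection threshold (e.g. 1% or 5%) must be fixed before inspecting the data, not chosen after seeing the p-value.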

Proper use of statistical testing requires that these hypotheses, and the
threshold (such as 1%) at which the null hypothesis is rejected and the
alternative hypothesis is accepted, all be determined in advance, before
the observations are collected or inspected. If these criteria are determined later, when the data to be tested is already known, the test is invalid.


Scientific hypothesis
People refer to a trial solution to a problem as a hypothesis — often called
an "educated guess"[5] — because it provides a suggested solution based
on the evidence. Experimenters may test and reject several hypotheses
before solving the problem.

According to Schick and Vaughn,[6] researchers weighing up alternative hypotheses may take into consideration:

• Testability (compare falsifiability as discussed above)
• Simplicity (as in the application of "Occam's razor", discouraging the
postulation of excessive numbers of entities)
• Scope – the apparent application of the hypothesis to multiple cases
of phenomena
• Fruitfulness – the prospect that a hypothesis may explain further
phenomena in the future
• Conservatism – the degree of "fit" with existing recognized knowledge-systems.

Evaluating hypotheses

Karl Popper's formulation of the hypothetico-deductive method, which he called the method of "conjectures and refutations", demands falsifiable
hypotheses, framed in such a manner that the scientific community can
prove them false (usually by observation). According to this view, a
hypothesis cannot be "confirmed", because there is always the possibility
that a future experiment will show that it is false. Hence, failing to falsify a
hypothesis does not prove that hypothesis: it remains provisional.
However, a hypothesis that has been rigorously tested and not falsified
can form a reasonable basis for action, i.e., we can act as if it were true,
until such time as it is falsified. Just because we've never observed rain falling upward doesn't mean that we never will; however improbable it seems, our theory of gravity may be falsified some day.

Popper's view is not the only view on evaluating hypotheses. For example,
some forms of empiricism hold that under a well-crafted, well-controlled
experiment, a lack of falsification does count as verification, since such an
experiment ranges over the full scope of possibilities in the problem
domain. Should we ever discover some place where gravity did not
function, and rain fell upward, this would not falsify our current theory of
gravity (which, on this view, has been verified by innumerable well-formed
experiments in the past) – it would rather suggest an expansion of our
theory to encompass some new force or previously undiscovered
interaction of forces. In other words, our initial theory as it stands is
verified but incomplete. This situation illustrates the importance of having
well-crafted, well-controlled experiments that range over the full scope of
possibilities for applying the theory.
In recent years philosophers of science have tried to integrate the various
approaches to evaluating hypotheses, and the scientific method in
general, to form a more complete system that integrates the individual
concerns of each approach. Notably, Imre Lakatos and Paul Feyerabend, both former students of Popper, have produced novel attempts at such a synthesis.

Research can be defined as the search for knowledge or as any systematic investigation to establish facts. The primary purpose of applied research (as opposed to basic research) is discovering, interpreting, and developing methods and systems for the
advancement of human knowledge on a wide variety of scientific matters
of our world and the universe. Research can use the scientific method, but
need not do so.

Scientific research relies on the application of the scientific method, a harnessing of curiosity. This research provides scientific information and
theories for the explanation of the nature and the properties of the world
around us. It makes practical applications possible. Scientific research is
funded by public authorities, by charitable organizations and by private
groups, including many companies. Scientific research can be subdivided into different classifications according to their academic and application disciplines.

Generally, research is understood to follow a certain structural process. Though step order may vary depending on the subject matter and
researcher, the following steps are usually part of most formal research,
both basic and applied:

• Formation of the topic
• Hypothesis
• Conceptual definitions
• Operational definition
• Gathering of data
• Analysis of data
• Test, revising of hypothesis
• Conclusion, iteration if necessary

A common misunderstanding is that by this method a hypothesis could be proven or tested. Generally a hypothesis is used to make predictions that
can be tested by observing the outcome of an experiment. If the outcome
is inconsistent with the hypothesis, then the hypothesis is rejected.
However, if the outcome is consistent with the hypothesis, the experiment
is said to support the hypothesis. This careful language is used because
researchers recognize that alternative hypotheses may also be consistent
with the observations. In this sense, a hypothesis can never be proven,
but rather only supported by surviving rounds of scientific testing and,
eventually, becoming widely thought of as true (or better, predictive), but
this is not the same as it having been proven. A useful hypothesis allows
prediction and within the accuracy of observation of the time, the
prediction will be verified. As the accuracy of observation improves with
time, the hypothesis may no longer provide an accurate prediction. In this
case a new hypothesis will arise to challenge the old, and to the extent
that the new hypothesis makes more accurate predictions than the old,
the new will supplant it.

Artistic research, also seen as 'practice-based research', can take form when creative works are considered both the research and the object of
research itself. It is the debatable body of thought which offers an
alternative to purely scientific methods in research in its search for
knowledge and truth.

Historical research is embodied in the scientific method.