
AMME1961

Prokaryotic cells (bacteria and blue-green algae) are unicellular organisms, usually 1-10 microns in diameter. They lack mitochondria and a nucleus but have a complex cell wall and carry their DNA as a single circular molecule.
Eukaryotic cells (animals and plants) are mainly multicellular organisms approximately 10-100 microns in diameter and contain mitochondria and a nucleus. Plant cells have chloroplasts and a simple cell wall, whilst animal cells do not. The DNA is packaged into multiple linear chromosomes rather than a single circular molecule.
Both cell types can be grown in controlled culture systems if provided nutrients
(for growth) and waste products removed (as they can inhibit growth over time).
Bacterial cells can act as factories for making drugs. Eukaryotic cells are larger and generally require more support for production. Some cells grow suspended in solution whilst others adhere to culture flask surfaces. Culture requirements can be complex for mammalian (eukaryotic) cells, which need growth factors (often derived from foetal calf serum) to help them grow, whilst for prokaryotic cells and some eukaryotic cells (such as yeast) the requirements are simple. Cells which grow on surfaces often stop dividing once in contact with surrounding cells (confluence), resulting in a sheet of cells one cell thick.
This can be used to make things (such as beer and wine with yeast, therapeutic
drugs and proteins using bacteria or mammalian cells), study cell function in
health and disease as well as to generate cells to manufacture human tissues for
implantation and to replace mutated cells in patients with genetic diseases.
Mammalian cells grown in culture include stem cell cultures (found in developing embryos and at low levels in adult tissues, e.g. fat-derived stem cells, which can differentiate into some but not all cell types). Generally, stem cells can differentiate into some or all cell types in the body. Primary cell cultures are derived from cells isolated from mammalian tissues; these have a finite ability to divide, after which they undergo cell death. Mammalian cells can also be altered to allow them to continue dividing indefinitely (immortalised), as seen in cell lines originally isolated from human cancers.
Microscopy
History
Middle Ages (11th Century): Arabs used polished beryl (gemstone) as plano-
convex lenses or reading stones in order to magnify their manuscripts.
In the Early 1500s, many scientists built magnifying instruments using two
glass lenses positioned in front of each other. Galileo made a microscope by
converting a telescope. This microscope had a diverging (concave) lens as the
eyepiece, and a converging (convex) lens as an objective lens.
Hans Janssen and his son Zacharias built the first compound microscope in
the 1590s, which had a magnification of 3-9x.
Christopher Cock improved on this as he used oil lamp illumination as well as
a glass flask filled with water, allowing light to focus on the specimen and
magnify 50x.
In 1665, Robert Hooke published Micrographia in which he detailed his
observations, coining the term cell to describe the basic unit of life. This came from his observations of cork, which he said looked like the box-like cells of monks in a monastery. Issues arose because image aberrations are more pronounced with two lenses.
Antonie van Leeuwenhoek developed the simple single-lens microscope, which
could magnify 200x. He would view erythrocytes (red blood cells) as well as a
poly-morphonuclear leukocyte in an unmounted, unstained blood smear using his
simple, single lens microscope during his research in the mid to late 1600s. He
became the first to observe living and moving cells such as bacteria and
sperm.
In the 1700s, achromatic (without colour) microscope objectives were made, which ensured that they were free of major chromatic aberrations. In the 1800s,
there were further improvements due to the quality of glass used. Also, water
and oil immersion objectives were developed. In 1873, Ernst Abbe provided the
scientific basis for the production of powerful light microscopes and in 1893,
August Koehler standardised microscope illumination. Then in the 1900s, the
fundamental principles for fluorescence and single/two photon microscopy were
established. In 1941, Zernike developed Phase contrast and in the 1950s,
Nomarski developed differential interference contrast.
Microscopes have led to great scientific discoveries. In the late 1600s, Marcello
Malpighi made crucial contributions in the fields of embryology, physiology and
practical medicine. It led to the exploration of the microcosmos,
discovery/development of cell theory, blood circulation and more. Robert Koch used a microscope to discover the bacilli which cause tuberculosis and cholera, work recognised with the Nobel Prize in 1905. Neurology also benefited: Sakmann and Neher studied the function of
nerve cells in 1991. In developmental biology, studies were done on the fruit fly
to add to the existing knowledge of how an egg cell grows and develops into an
organism.
Resolution: This is defined as the minimum distance (in microns) between two points such that they can be clearly distinguished from each other.
Abbe's equation states that d = lambda / (2 n sin(alpha)), where n sin(alpha) is the numerical aperture (NA) of the lens, lambda is the wavelength of illumination, and n is the refractive index of the medium between the lens and the object. Alpha itself is defined as half the angular intake of the lens. As the maximum value for sin(alpha) is 1, the resolution is limited to roughly half the wavelength of light. Light microscopes therefore have a limit of resolution of about 200 nm.
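As a quick check of the numbers above, a short sketch of Abbe's formula (values assumed for illustration: green light at 550 nm with a high-NA oil-immersion objective):

```python
# Abbe diffraction limit: d = wavelength / (2 * NA)
def abbe_limit(wavelength_nm: float, na: float) -> float:
    """Return the minimum resolvable distance in nanometres."""
    return wavelength_nm / (2 * na)

# Green light, oil-immersion objective with NA = 1.4
d = abbe_limit(550, 1.4)
print(round(d, 1))  # ~196 nm, consistent with the ~200 nm limit quoted above
```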
Parts of a light microscope
Objective Lens
These are the most important components of an optical microscope as they are
responsible for primary image formation. They are instrumental in determining
the magnification of a particular specimen and the resolution under which fine
specimen detail can be observed in the microscope. They also play a central role
in determining the quality of the images which the microscope produces. Despite
this, they are the most difficult component to design and assemble. They are the
first component which the light encounters as it proceeds from the specimen to the image plane. They are called objectives as they are closest to the specimen
(object).

Condenser Lens
This gathers light from the source and concentrates it into a cone of light which illuminates the specimen with uniform intensity over the entire field of view. Because it provides this cone of light to the objective, it must be properly adjusted to optimise the intensity and angle of light entering the objective front lens.

This requires the two NAs to match, with the condenser NA matching the objective NA. If the condenser NA is larger than the objective's, contrast will be lost due to flare in the objective. If it is smaller, the objective NA will not be filled and the objective's performance will be compromised, making the effective NA less than the true NA. Generally, a condenser of high NA is chosen so that its NA can be reduced by an iris diaphragm whenever a lower-NA objective is used.
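The matching rule above can be summarised in a small sketch (a simplification assumed for illustration; real flare and contrast-loss effects are not modelled):

```python
def effective_na(condenser_na: float, objective_na: float) -> float:
    """The working NA is limited by whichever lens is the bottleneck."""
    return min(condenser_na, objective_na)

# Underfilled objective: the condenser limits performance
print(effective_na(0.9, 1.4))  # 0.9 -- the objective NA is not filled

# Matched (or stopped-down) condenser uses the objective fully
print(effective_na(1.4, 1.4))  # 1.4
```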
Setting up for bright-field illumination
The first step is to ensure that the condenser aperture (iris) diaphragm is in the correct position. This is checked by removing the eyepiece and looking down the tube of the microscope at the back of the objective. There, an image of the aperture iris diaphragm is visible, which should be adjusted such that 7/8 of the diameter of the objective is filled with light.
This adjustment is made with each change of the objective, thereby controlling
the effective NA of the condenser. Closing this diaphragm reduces the effective
NA of the condenser, increases contrast, depth of field whilst reducing intensity.
When the condenser and objective NAs are nearly equal, much of the fine detail is visible and the image is brighter, but with glare and scattering. When the condenser aperture produces an NA which is about 70% of the objective's, glare is reduced, the image is very sharp and fine image detail is present without diffraction artefacts. However, when closed to the smallest setting (25% of the objective NA), the image is generally darker and colour hues are shifted.
The second step is the Kohler illumination (1893), which is the most important
variable in achieving high-quality images in microscopy and critical
photomicrography. It is defined as a method of illuminating objects in which an
image of the source is projected by a collector into the plane of the aperture in
the front focal plane of the condenser. This latter, in turn, projects an image of an
illuminating field diaphragm at the opening of the collector into the object plane.
Modern microscopes are designed so that the collector lens projects an enlarged
and focussed image of the lamp filament onto the plane of the aperture
diaphragm of the sub stage condenser.
Oil immersion lens
The NA can be increased by employing immersion fluids with a high refractive index. Air is not used as it has the lowest possible RI (1). Instead, immersion oils with an RI of 1.515 are used to match that of glass. Because the refractive indices are the same, the light rays are not refracted at the glass-oil interface; when there is a mismatch in RI, the result is spherical aberration.
To check resolution, set it up with a 40x/0.65 objective lens using Kohler
illumination. The outer wall of the diatom Pleurosigma angulatum is observed as
the diatom markings are just within the resolving limits of the objective.
Therefore, if you cannot see the pattern, either the condenser is too low, the aperture iris is closed, the auxiliary lenses in the illumination path are chosen incorrectly, or a red filter is being used.
Visible Light
This is made of sinusoidal waves (peaks and troughs). When light passes through materials of different refractive index, such as the parts of a cell, the waves are no longer in phase. A phase contrast microscope transforms these phase differences, seen when light passes through unstained live cells, into intensity differences, thus increasing contrast.
Phase Contrast Microscopy
This was first developed by Zernike in 1938, where the change in phase of light
waves is increased to half a wavelength by a transparent phase plate within the
microscope, causing a difference in brightness. This revolutionised light
microscopy which allowed live cell imaging and is currently a routine tool in
biological and medical research.
Advantages: It is high contrast microscopy without the need to fix or stain cells.
It also allows for quantitative phase imaging and is an affordable method.
Disadvantages: It is not ideal for thick specimens and the halo effect can
obscure details.
Fluorescent Microscope
Monardes in 1565 identified that certain plant extracts are naturally fluorescent
whilst Vincenzo Casciarolo (1603) discovered that barium sulfate, after baking,
emitted a purple-blue light in the dark. Then in 1612, Galileo described the
emission of light from this stone as phosphorescence, stating that the light is
conceived into the stone and then given back after time.
Stokes further investigated these discoveries. In 1852, he used a prism to
disperse light and illuminate a quinine solution. He reported that there was no effect until the solution was moved into the UV region of the spectrum. As such, he
believed that fluorescence is of a wavelength longer than the exciting light and
this displacement is called the Stokes Shift. In 1864, it was first proposed as an
analytical tool.
The wavelengths of light from highest to lowest are ROYGBIV. All molecules
absorb light with microwave absorbed due to molecular rotations and infrared
due to molecular vibrations. UV and visible light are absorbed when electrons in
the molecule are promoted into higher states. This only occurs if there is a long
conjugated system in the molecule. Highly delocalised electrons lead to lower
energy levels.
Fluorescence occurs when a photon is absorbed, promoting an electron in the fluorophore to an excited state; some of the energy is dissipated as heat. Finally, the electron returns to the ground state and, as a result, a photon of lower energy (and thus longer wavelength) is emitted.
Properties of Fluorophores:
The Extinction Coefficient is the light capturing ability of the molecule. The
quantum yield is the efficiency of the fluorescence and is found as the ratio of
emitted photons to absorbed photons. Stokes shift is the difference between
the energies of the absorbed photon and the emitted photon.
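These three properties are simple ratios and differences. A short sketch using the GFP excitation/emission values quoted later in these notes (475 nm and 508 nm); the photon counts for the quantum yield are illustrative:

```python
# Photon energy in eV from wavelength in nm: E = hc / lambda ~= 1239.84 / lambda
def photon_energy_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

def quantum_yield(emitted_photons: int, absorbed_photons: int) -> float:
    """Efficiency of fluorescence: emitted photons / absorbed photons."""
    return emitted_photons / absorbed_photons

ex_nm, em_nm = 475, 508          # GFP excitation and emission peaks
shift_nm = em_nm - ex_nm          # Stokes shift in wavelength terms
shift_ev = photon_energy_ev(ex_nm) - photon_energy_ev(em_nm)
print(shift_nm)                   # 33 nm
print(round(shift_ev, 3))         # ~0.17 eV, energy lost as heat etc.
print(quantum_yield(79, 100))     # 0.79 (illustrative photon counts)
```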
Principles of Fluorescence
A high Stokes shift means that more energy is dissipated as heat, whilst a low quantum yield means that other non-radiative processes are more likely to occur. Photobleaching results from the reaction of the fluorophore with oxygen while in an excited state. Low-intensity excitation, anti-fade agents and a sensitive detection system are used to mitigate these effects.
Secondary Fluorescence
This was found in 1959 by Haitinger and colleagues, who applied exogenous fluorescent chemicals to samples, coining the term fluorochrome. These are
essential for fluorescence microscopy, as was the development of the
epifluorescence microscope. This is commonly used today and is useful as the
light source lies on the same side of the sample as the objective (unlike light
microscopes). Fluorochromes are routinely applied to fixed and living cells.
In the 1960s, Shimomura isolated GFP (green fluorescent protein) from the jellyfish Aequorea victoria, with excitation peaks at 395 and 475 nm, emitting at 508 nm.
It is highly stable over a range of pH and temperature and is encoded by a single
gene. It can be transfected into the host genome and expressed permanently.
There are many types of fluorescent molecules, including semiconductor nanocrystals, synthetic organic dyes such as fluorescein, fluorescent proteins, fluorescent nanodiamonds, crystals and naturally occurring molecules.
To image fluorescence in a microscope, the emitted radiation must be visible. An
intense light source is needed, as well as light separation and light detection.

Light Sources
Advances in fluorescence microscopy can be seen with the use of LED light
sources. Broad spectrum lamps generate ample light at desired wavelengths but
only a small percentage of the projected light is useful. Other wavelengths are
suppressed to avoid background noise which reduces image contrast and
obscures the fluorescent light emissions. Advances in LED technology have
meant that high-intensity monochromatic LEDs are available in a variety of
colours which match the excitation bandwidth of many commonly-used
fluorescent dyes and proteins.
Light Separation
Filters are used to separate light by blocking, absorbing or reflecting certain
wavelengths. This is done with a filter cube which have an exciter filter, beam
splitter (dichroic) and an emission filter. Short Wave Pass (SWP) filters are specified by a cut-off wavelength (e.g. 560 nm): they pass shorter wavelengths and absorb/reflect longer wavelengths. Long Wave Pass (LWP) filters are likewise specified by a cut-off wavelength (e.g. 570 nm) but pass longer wavelengths and absorb/reflect shorter ones. Bandpass filters pass a band of wavelengths (such as 520 +/- 50 nm).
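The pass/block behaviour of the three filter types can be sketched as simple predicates (idealised filters assumed for illustration; real filters have sloped transition edges rather than sharp cut-offs):

```python
def swp_passes(wavelength_nm: float, cutoff_nm: float) -> bool:
    """Short-wave-pass: transmit wavelengths below the cut-off."""
    return wavelength_nm < cutoff_nm

def lwp_passes(wavelength_nm: float, cutoff_nm: float) -> bool:
    """Long-wave-pass: transmit wavelengths above the cut-off."""
    return wavelength_nm > cutoff_nm

def bandpass_passes(wavelength_nm: float, centre_nm: float, half_width_nm: float) -> bool:
    """Bandpass: transmit a band of wavelengths around the centre."""
    return abs(wavelength_nm - centre_nm) <= half_width_nm

# Using the example cut-offs from the text
print(swp_passes(520, 560))           # True  -- green passes an SWP-560
print(lwp_passes(600, 570))           # True  -- red passes an LWP-570
print(bandpass_passes(600, 520, 50))  # False -- red is rejected by 520 +/- 50 nm
```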
Light Detection
Epifluorescence is detected with a camera, providing spatial information and very efficient light capture, but no gain control; the CCD is cooled in order to reduce noise. The confocal microscope uses a photomultiplier, which captures photons one at a time and is less efficient at light capture, but due to its high gain it is more sensitive to low light.
Confocal microscopy allows for finer detail to be resolved in thick specimens
as the blur of being out of focus is removed. It allows for samples to be optically
sectioned, for sections to be reconstructed into 3D models and for specific cell
reactions to be followed.
This was initially patented by Minsky in 1957 but only came into routine use at the end of the 1980s, when suitable lasers became available. In 1978, the Cremers designed a laser
scanning process which scans the 3D surface of an object using a focussed laser
beam, producing an overall picture electronically. This was the first time laser
scanning and 3D detection with fluorescent markers were combined.
Confocal Microscope
This is a light microscope where the point source, the illuminated volume in the specimen and the pinhole in front of the detector are in conjugate planes. This prevents light from out-of-focus regions contributing to image formation, producing clear optical sections of thick, fluorescently labelled specimens.

Lasers are almost the ideal point source of intense light. They consist of an
optical cavity, pumping system and an appropriate medium. The wavelength
depends on the medium itself: argon lasers produce shorter wavelengths whilst helium-neon lasers produce much longer ones.
A confocal laser scanning microscope illuminates the specimen with a focussed
laser beam in a pattern of parallel lines. The signals emitted are then collected
by a photomultiplier detector, one pixel at a time in order to build up an image.
Therefore, optical sections are collected and 3D projections of cells and tissues
are examined.
For conventional fluorescence microscopy with emission at 525 nm, NA = 1.4 and n = 1.515, the lateral resolution is 191 nm and the depth (axial) resolution is 678 nm. When imaging a tiny fluorescent object, we can see the object but it appears much larger than it is: diffraction scatters light at the edge of the lens, so the image of a point is an Airy disk surrounded by a series of progressively fainter halos, and the axial resolution of optical microscopes is especially poor.
The Rayleigh criterion states that it is possible to distinguish two such disks when the centre of one lies on the first minimum of the other. The disk radius is given by r = 0.61 * lambda / NA, where NA = n sin(alpha).
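Plugging in the figures used above (525 nm emission, NA 1.4), a quick check of the Rayleigh radius (note this is a different criterion from the FWHM-style figure, so the numbers are not expected to coincide):

```python
def rayleigh_radius(wavelength_nm: float, na: float) -> float:
    """Airy disk radius r = 0.61 * lambda / NA, in nanometres."""
    return 0.61 * wavelength_nm / na

r = rayleigh_radius(525, 1.4)
print(round(r, 2))  # 228.75 nm
```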
The limit of resolution can be improved by using incident radiation of shorter wavelength (UV, or electrons in an electron microscope). Electron microscopy uses electrons instead of light and magnetic lenses instead of glass. However, the specimen must be held inside a vacuum chamber, as electrons do not travel far in air, so living samples cannot be imaged. It allows the cell surface, mitochondria, cytoplasm, Golgi complex and nucleus to be seen in far greater detail than with light microscopy. On the other hand, electron microscopes are expensive, the specimen must be coated with a metal to reflect the electrons, the sample must be completely dry, and no colour is observed.
Lecture 3: Biopolymers
Biopolymers are seen throughout daily life, in food, cosmetics and textiles. In
terms of the macromolecules found within cells, Nucleic Acids comprise 7% of
the cell by weight and have a key role in the storage and transmission of
information. Proteins comprise 15% and provide structural support, enzymes and antibodies, whilst carbohydrates aid in the storage of energy.
For all biopolymers, a series of subunits called monomers join to form the
polymer. In the case of nucleic acids, nucleotides are the monomers whilst amino
acids are the monomers for proteins. For carbohydrates, sugars are the
monomers whilst for lipids, fatty acids are the monomers.
Nucleic Acid polymers are responsible for storing genetic information and contain
phosphate, a sugar (either deoxyribose or ribose) and four different organic
bases. There are two types of nucleic acid polymer: DNA and RNA.
DNA: This is the information storage molecule for living things and is tightly packed into X-shaped chromosomes to keep the information safe. The chained nucleotide structure forms a twisted ladder called the double helix. In the 1950s, Maurice Wilkins and Rosalind Franklin studied the structure of DNA using X-ray diffraction, which suggested that the structure was helical. In 1953, Watson and Crick described this structure and they, along with Wilkins, were awarded the 1962 Nobel Prize (Franklin was not included because she had died in 1958, and the prize is not awarded posthumously).
The rails of the ladder are made up of alternating molecules of phosphate and sugar, whilst the steps are made of nitrogenous bases, with the base pairs connected by hydrogen bonds.
There are four types of nucleotide, which differ depending on the base present, whilst the phosphate group and sugar are the same. The four bases are Adenine, Thymine, Guanine and Cytosine. Deoxyribose (C5H10O4) is a monosaccharide which lacks a hydroxyl group at the 2' position, hence the name deoxyribose. The hydroxyl groups on the 5' and 3' carbons link to the phosphate groups, forming the DNA backbone.

Polynucleotides are formed via a beta-glycosidic linkage between the base and the sugar. Between the phosphate and the sugar there is a phosphodiester bond. When this bond forms, water is removed: a hydrogen from the hydroxyl group on the 3' carbon of the sugar, and an OH from the phosphate.

A-T and G-C pair together, with A-T joined by 2 hydrogen bonds and G-C by 3. The two sugar-phosphate backbones have opposite polarities and therefore run in opposite directions.
Each nitrogenous base pair is held together by either 2 or 3 hydrogen bonds. The two rails of the helix are antiparallel: one runs in the 5' to 3' direction and the other in the 3' to 5' direction. As such, there are two complementary polynucleotide chains.
DNA has two asymmetric grooves, the major groove and the minor groove, which alternate along the helix. The strands complete one turn every 3.4 nm, the distance between adjacent base pairs is 0.34 nm, and there are therefore 10 base pairs in each turn. The DNA double helix is right-handed.
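The 10 base pairs per turn follows directly from the helical pitch and the base-pair rise:

```python
# Helix geometry from the text: one full turn per 3.4 nm of rise,
# with adjacent base pairs spaced 0.34 nm apart.
pitch_nm = 3.4   # length of one complete turn
rise_nm = 0.34   # spacing between adjacent base pairs

bp_per_turn = pitch_nm / rise_nm
print(round(bp_per_turn))  # 10
```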
RNA: RNA serves as a genetic messenger, passing the information stored in the cell's DNA from the nucleus to other parts of the cell for protein synthesis. The nucleotide polymer has a similar structure to DNA, except with a ribose sugar instead of deoxyribose. It is generally a single-stranded molecule which can fold into various shapes, though this affects its stability. It is read from 5' to 3', and the nitrogenous base thymine found in DNA is replaced by uracil. Every cell needs protein, and DNA carries the instructions for making it: the DNA replicates, is transcribed (RNA synthesis), the RNA leaves the nucleus and attaches onto a ribosome (translation), and a protein is formed.
DNA replication occurs as the double helix unwinds and the two strands
separate. Next to each separated strand, an enzyme called DNA polymerase
lines up nucleotides to form new second strands according to the base-pairing
rules. Hydrogen bonds form between the base pairs whilst bonds form between
the sugar-phosphate components of the newly aligned nucleotides. The new
double stranded molecules then twist up into double helices.
RNA is transcribed from one strand of DNA by RNA polymerase. RNA synthesis always progresses from 5' to 3'; the RNA is synthesised in the nucleus and then transported to the cytoplasm. There are three types of RNA: mRNA, which copies the instructions in the DNA and takes them to the ribosomes; tRNA, which carries amino acids to the ribosomes; and rRNA, which together with proteins forms the ribosome that reads the mRNA codons.
DNA Analysis:
Gel electrophoresis is a laboratory technique to separate large molecules such as
DNA based on size, using electricity to carry them through a gel. DNA can be
separated using this technique in order to verify amplification by sequencing
reactions, check the quality and quantity of genomic DNA after DNA extraction
and to separate DNA fragments in order to clone a specific band.
Agarose is a linear polymer extracted from seaweed. To make a gel, agarose is dissolved in buffer at a chosen concentration and temperature, poured into a casting container and allowed to set. The gel is covered with buffer and the chamber is connected to the power supply. Sample volume is typically 10 microliters. A higher voltage gives a faster run but compromises quality.
DNA is negatively charged and thus, when current is applied, the negatively
charged fragments migrate towards the positive electrode. The gel is used to
slow the movement of DNA and separate by size. Small strands move further
into the gel than large DNA fragments and this also depends on the strength of
the electric field, buffer and the density of the agarose gel.
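Over the useful separation range, migration distance falls off roughly with the logarithm of fragment length. A toy sketch of this relationship (the constants k and c are arbitrary, gel- and run-dependent values chosen purely for illustration):

```python
import math

def migration_mm(fragment_bp: int, k: float = 12.0, c: float = 48.0) -> float:
    """Toy model: migration distance decreases with log10(fragment size).

    k and c stand in for run conditions (voltage, time, gel density);
    they are illustrative, not measured values.
    """
    return c - k * math.log10(fragment_bp)

# Smaller fragments travel further through the gel
for size in (100, 500, 1000, 5000):
    print(size, round(migration_mm(size), 1))
```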
The separated fragments can be visualised as bands by applying dyes, or by autoradiography, where radioactively labelled DNA exposes a photographic film.
Applications:
DNA is an attractive biopolymer for designing structures and devices due to its
molecular recognition properties. DNA is genetic and generic, is robust and can
be engineered and processed efficiently. It can also be manipulated with
precision unlike any other natural or synthetic polymer. Because DNA is small
(nano-sized), we can start at this size and build up.
Even with stable contact between implant and bone, scar tissue forms at the interface after around 15 years, making the implant wobbly and prone to failure. Scientists aim to find a way to stop this scar tissue forming. A single DNA strand is adsorbed onto the alloy so that it is entrapped in a surface layer (immobilisation). Hybridisation then occurs when a hybrid double helix is formed with a complementary strand conjugated to drugs. In short: nucleic acids are fixed into a titanium layer, and the surface is functionalised with bioactive molecules conjugated to the nucleic acid strands.
In Tissue Engineering
For injured tissue, the body can repair itself to a certain extent, but beyond that it needs intervention. To get from injured to repaired tissue, we need a scaffold, cells and biomolecules.
Tissue can be compared to a building: a scaffold is needed for structural support. It also provides a place for cell attachment and growth and is usually biocompatible. As the building (tissue) goes up, the scaffold comes down (it is biodegradable). For gene delivery, the scaffold is designed so that its cargo is released at a controlled rate.
The surface adsorption of DNA to tissue engineering scaffolds is done for efficient gene delivery. This has the potential to promote localised transgene expression
which can induce the formation of functional tissue. The scaffolds capable of
controlled DNA delivery can provide a fundamental tool for directing progenitor
cell function, which has applications with the engineering of numerous tissue
types.
DNA can be delivered from a scaffold in two ways: polymeric release (encapsulation and release of DNA into the local environment) and substrate-mediated delivery (which employs immobilisation of DNA on the scaffold surface).
A hydrogel looks like a powder, but in water it swells because it can absorb a large amount of water; in many cases it is 95% water, with extensive crosslinking in the polymer chains. Hydrogels are used for soft tissues. In DNA hydrogels, DNA monomers act as crosslinkers: branched DNA with sticky ends is joined by ligases, linking together into sheets of tiny squares which tangle in 3D. Such DNA hydrogels, built using a mix of biological and chemical crosslinking techniques, are used for drug delivery.
Nucleic acid dopants within an electrically conducting polymer network include
genoelectric devices (DNA diagnostics based on the interface of electronic and
nucleic acid recognition system) and the diagnosis of infectious disease, genetic
mutations, drug discovery, forensics and food technology.
Conducting polymers are popular because they can conduct charge (their discovery won a Nobel Prize). Integrated with DNA, they are being explored for biomedical applications: a single DNA strand is immobilised within the polymer, the target DNA hybridises to it, and the resulting signal reaches the electrode and is read out through a transducer. The conducting polymer helps carry the charge so the binding event can be detected.
Through chemical synthesis, fluorescent nanobarcodes have also been attached to polystyrene (through hybridisation), amplifying the signal to give a characteristic colour.
Lecture 4: Proteins
Proteins are multipurpose molecules which are the most abundant of all cellular
components. The human body contains at least 10000 different kinds of protein.
They can be classified as fibrous (polypeptides arranged in long strands,
insoluble in water and provide structural support) or globular (polypeptide
chains folded into spherical or globular form, soluble in water and provide
diverse functions such as enzymatic or regulatory).
Structural proteins provide support and are the proteins of ligaments, tendons
and bones. Collagen is an example of a structural protein.
Enzymatic proteins catalyse chemical reactions in the body, speeding them up. An example is catalase, which breaks down the toxic chemical H2O2 into water and oxygen. Other examples of enzymatic proteins include amylase and ligase.
Defensive proteins protect from pathogens (antibodies). Fibrinogen (blood
clotting proteins) acts in a defensive way.
Hormonal proteins coordinate an organism's activities. Other protein types include transport proteins (which move molecules from one place to another, e.g. haemoglobin), membrane proteins (cell growth and adhesion) such as integrin, storage proteins (which store amino acids), and contractile and motor proteins (which provide movement) such as actin.
Proteins are biopolymers as they are naturally composed of biological monomers
called amino acids. A chain of two or more amino acids joined by peptide bonds is called a peptide. Polypeptides contain fewer than 40 amino acids, whilst proteins have more than 40. Combinations of the 20 amino acids make up a diverse range of proteins.
The general structure of an amino acid is a central (alpha) carbon bonded to an amino group (NH2), a carboxyl group (COOH), a hydrogen atom and a variable side chain (R group).
Amino acids are known by their common name, three letter abbreviation or one
letter abbreviation to describe the amino acid sequence.
Amino acids are classified based on a number of their characteristics with the
most common being polarity. Glycine is the smallest amino acid and has no
hydrophilic or hydrophobic properties.
Non polar amino acids have an equal number of amino and carboxyl groups and
are thus neutral. They are hydrophobic and have no charge on their R group.
These amino acids can be aliphatic (Alanine, Proline, Valine, Leucine, Isoleucine
and Methionine) or Aromatic (Phenylalanine or Tryptophan).
Polar amino acids don't have a charge on the R group and participate in the hydrogen bonding of protein structure. These can be broken down into amide (Asparagine, Glutamine), -OH (Serine, Threonine, Tyrosine) or -SH (Cysteine) types. Positively charged amino acids have more amino groups than carboxyl groups, making them basic; these are the amino acids with a positively charged R group: Lysine, Arginine and Histidine. Contrastingly, those with a negative charge have more carboxyl groups than amino groups, which makes them acidic: Aspartate and Glutamate.

Out of the twenty amino acids, eight can't be synthesised by humans. These
include valine, leucine, isoleucine, threonine, lysine, tryptophan, methionine and
phenylalanine.
The growth of a polypeptide occurs by a condensation reaction: a hydrogen from the amine group of one amino acid combines with the hydroxyl group of another, producing water and leaving behind a peptide bond. The direction of the chain is defined as running from the N-terminus (the NH2 end) to the C-terminus (the COOH end). The repeated N-C-C sequence is the backbone of the polypeptide, which can only grow in one direction.
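Since each peptide bond releases one water molecule, the bookkeeping for a growing chain is simple (a small illustrative sketch; 18.02 g/mol is the standard molar mass of water):

```python
def peptide_bonds(n_residues: int) -> int:
    """A linear chain of n amino acids contains n - 1 peptide bonds."""
    return n_residues - 1

def water_released_g_per_mol(n_residues: int) -> float:
    """One water molecule (18.02 g/mol) is lost per condensation reaction."""
    return 18.02 * peptide_bonds(n_residues)

# A pentapeptide: 5 residues, 4 bonds, 4 waters released
print(peptide_bonds(5))             # 4
print(water_released_g_per_mol(5))  # 72.08
```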
Ribosomes are the sites of protein synthesis and are made up of a small and a large subunit; they are large complexes of RNA and protein.
In translation, incoming tRNAs carry amino acids to the ribosome, where they find the matching anticodon and form weak hydrogen bonds whilst their amino acids form a peptide bond with the adjacent amino acid. This process continues, increasing the length of the polypeptide, until a stop codon is reached at the 3' end, yielding a newly synthesised protein.
If even one base pair is incorrectly replicated and this leads to a change in the
amino acid formed, mutations can occur which have drastic effects on the
human body.

Protein Structure:
Primary structure is not very stable: it gives no real structure and no real functionality. It is the most basic form, being just the sequence of amino acids. Even at the secondary level, functionality still cannot be determined. Here, there are local spatial interactions between functional groups of the protein backbone: the sequence of amino acids folds into either an alpha helix or a beta pleated sheet, defined by the patterns of hydrogen bonds between the main-chain peptide groups.
The alpha helix is a spiral structure in which the C=O of one amino acid forms
a hydrogen bond with the NH of the fourth one along. The NH groups of all
peptide bonds point in the same direction whilst the C=O groups all point in
the opposite direction. On average, there are 3.6 amino acids per turn.
Beta sheets are formed by multiple side-by-side beta strands: two or more
polypeptide chains linked together by hydrogen bonds between the NH of one
chain and the C=O of an adjacent chain. They generally have around 20 amino
acids. Look at bond location (which elements).

Tertiary structure is the most important. It is the 3D folding of a
polypeptide, with many bond types/interactions between side groups; disulphide
bridges are the most significant. These structures often dictate biological
activity. It is much more stable and has three bond types.
Quaternary structure dictates function and results from the aggregation of two
or more polypeptide subunits held together by non-covalent interactions, as in
collagen, haemoglobin and insulin. It tells us orientation and functionality.
Protein Analysis
Gel electrophoresis is a technique for protein analysis which relies on
separation by molecular weight. For large molecules such as DNA and RNA,
agarose gel is used, but for proteins, a polyacrylamide gel (PAGE). As proteins
have no constant charge (their charge varies with pH), they are coated with a
detergent (sodium dodecyl sulphate), making them uniformly negatively charged
and allowing separation by molecular weight alone.
Blotting refers to the transfer of biological samples from a gel to a membrane
and their subsequent detection on the surface of the membrane. Western
blotting is a method used to detect a specific protein with antibodies which
have an affinity for that protein. An enzyme-linked immunosorbent assay
(ELISA) is used to indicate the presence of a particular protein detected by
the antibodies. With blotting, the blotted protein binds onto the membrane and
we then probe for whatever protein we want to assess by adding a detection
antibody.
Chromatography aims to separate the components of a mixture through a matrix:
those with a lesser affinity for the stationary phase move faster and thus are
eluted first. The chromatograph is often built in the lab (if not too complex)
so that it is custom-made for the experiment. Based on the types/properties of
the molecules involved, different types of chromatography are used;
maintenance is expensive and data collection is time consuming. Separation of
biomolecules is based on their physicochemical characteristics: polarity
(solubility, volatility, adsorption) requires hydrophobic interaction
chromatography; size/mass (diffusion, sedimentation) requires gel filtration
(size exclusion); ionic characteristics such as charge are separated with ion
exchange chromatography; and shape (ligand binding, affinity) uses affinity
chromatography.
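The elution-order rule (lower stationary-phase affinity elutes first) can be
sketched as a simple sort. The protein names and affinity numbers below are
made up purely for illustration.

```python
# Components with lower affinity for the stationary phase move faster
# through the matrix and are eluted first, so sorting by affinity
# (ascending) predicts elution order. Affinities are illustrative only.
mixture = {"protein_A": 0.9, "protein_B": 0.2, "protein_C": 0.5}

elution_order = sorted(mixture, key=mixture.get)
print(elution_order)  # least-retained component first
```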
With a mass spectrometer, a molecule is identified based on its mass. The
workflow is essentially: collect the cells, add a buffer, break the cells open
and extract the protein, quantify it, and then fractionate it (a digestion
enzyme acts like scissors, cutting it into peptides). The sample is fed into a
chromatograph (if looking at a particular molecular weight) and then into the
mass spectrometer.

The samples are ionised by bombardment with an inert gas (Ar or He); a sensor
detects when this is complete, and the charged molecules are accelerated into
the mass analyser.
With Nuclear Magnetic Resonance (NMR), the structure and dynamics of the
protein are found. If an external B field is applied, an energy transfer is
possible from the base energy level to a higher energy level. This takes place
at a wavelength corresponding to radio frequencies, and when the spin returns
to its base level, energy is emitted at the same frequency. The signal
matching this transfer is measured and processed to yield an NMR spectrum for
the nucleus.
In x-ray crystallography, we find the structural orientation of the protein (3D
position of atoms), allowing us to find the functionality. In this, a protein solution
is placed on a glass plate atop a precipitant. The crystal of protein is then
irradiated with an x-ray beam. Crystals are used because this amplifies the
diffraction signal. X-rays then interact with the electrons around the molecule,
scattering the beam which is then detected to measure the electron density. An
X-ray diffraction pattern is then observed which allows for an electron density
map to be formed which is used to build a model to get a crystal structure. For
NMR and x-ray crystallography, it takes 2-3 days.
Applications:
Collagen is a major component of the extracellular matrix and there are over
20 types, with the most common being types I and III. They give structural
support whilst the tertiary structure gives high tensile strength. Collagen is
not ideal for orthopaedics on its own as it is not strong enough; if used, it
is combined with something else. Its applications are mainly cardiovascular
(heart valves or
vessel replacements), dermatology (tissue creams), dressings (wound repair,
burn treatment and sutures), ophthalmology (corneal grafts, vitreous
replacement).
Gelatin is a denatured collagen obtained from partial hydrolysis. It is used in food
(gelling agent or thickener), coating pills, cosmetics and ointments.
Laminin is a major glycoprotein of the basement membrane and is in the
biologically active part of the basal lamina, influencing cell differentiation,
migration, adhesion as well as phenotype and survival. It is also used in neural
tissue engineering to increase bioactivity.
Biosynthetic hydrogels combine natural and synthetic hydrogels, having the
advantage of a biological signal and proteolytic degradation from the natural
side, whilst being easy to control with high mechanical strength from the
synthetic side. To increase the biological activity of synthetic polymers,
proteins are used. Synthetic hydrogels are stronger than natural ones; cells
don't grow on the synthetic polymer alone, but adding 1% of natural polymer
gives a huge increase in growth.
Elastin provides elasticity to tissues as a result of the crosslinking of lysine
residues. It is produced by fibroblasts and smooth muscle cells. They are used in
tissue engineering with cartilage, intervertebral discs, vascular grafts as well as
liver, ocular and cell sheet engineering.
Protein based nanotubes have a layer by layer assembly of proteins and amino
acids into polycarbonate membranes. This has a high efficiency of virus trapping.
Protein nanocages use an engineered form of ferritin (cage like iron storage
protein) to synthesise and deliver iron oxide nanoparticles to tumours. These are
important in targeting and delivering drugs to tumours.
Lecture 5:
Carbohydrates are energy molecules and thus have an important role in cell
energy. The building blocks of carbohydrates are sugars, and the size and
structure of these carbohydrates are fundamental to their biological activity.
The role of carbohydrates is to provide energy as part of our diet.
Glucose is stored as glycogen rather than as free glucose, because the osmotic
pressure of free glucose would be too high, so it is kept in a form where the
pressure is not excessive (glycogen). It is stored in the muscles (energy) and
liver (to maintain blood sugar balance). Carbohydrates also provide structural
support (the cell wall in plants) and form the exoskeleton of crustaceans.
Carbohydrates also appear on the cell surface, where they can be covalently
bonded with lipids. They play an important role in transportation and
cell-cell communication, as well as modulation of the immune system. Sugar
molecules on the cell membrane are important for communication between the
cell and the external environment.
Carbohydrates can be simple (fruits, milk or vegetables) or complex (starch and
fibres). All such compounds contain C, H and O, as well as having C=O and OH
functional groups. They can be classified based on the number of sugar units,
location of carbonyl groups (C=O), size of the base carbon chain as well as
stereochemistry.
In terms of the number of sugar units, they can be monosaccharides (simple
sugar units), disaccharides (two sugar units/complex sugars), oligosaccharides
(2-10 sugar units) and polysaccharides (more than 10 units).
They can also be classified based on the location of the carbonyl group (C=O):
if the carbonyl group is at the end of the chain, they are classified as an
aldose, whilst if it is in the middle of the chain, they are called a ketose.
They can also be classified based on the number of carbon atoms in the chain
(such as triose, tetrose, pentose and hexose). Stereochemistry is the study of
the spatial arrangement of molecules. Stereoisomers have the same bonds but
different spatial arrangements, and therefore different properties.
Pairs of stereoisomers are designated by D- (right) or L- (left) at the start
of the name. They are mirror images that cannot be superimposed. LOOK FOR THE
OH. These are designated based on the chiral carbon furthest from the carbonyl
carbon. A chiral centre is an asymmetric carbon with 4 different groups
attached to it; there must be at least one chiral carbon for stereoisomers to
exist. Changing the orientation of molecules changes their biochemical and
physical properties.
Chiral carbon atoms account for the large number of different
monosaccharides. For n chiral carbon atoms, there are 2^n stereoisomers, which
can be divided into 2^(n-1) pairs of enantiomers (D- and L-). For example,
glucose has 4 chiral carbons and hence 16 stereoisomers.
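The stereoisomer count follows directly from the formula; a one-line sketch:

```python
# n chiral carbons give 2**n stereoisomers, i.e. 2**(n-1) D/L enantiomer pairs.
def stereoisomer_count(n_chiral: int) -> tuple[int, int]:
    return 2 ** n_chiral, 2 ** (n_chiral - 1)

print(stereoisomer_count(4))  # glucose has 4 chiral carbons -> (16, 8)
```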
Cyclisation refers to a change in spatial arrangement, allowing chains to bend
and rotate. The cyclisation of glucose produces a new asymmetric centre at C1
(the rightmost carbon). The two stereoisomers are called anomers (alpha and
beta). These cyclic sugars have planar rings, with alpha having the OH below
the ring whilst beta has the OH above the ring.
Monosaccharides are either aldoses or ketoses and can cyclise to form alpha or
beta isomers. Their derivatives (things you get from monosaccharides) include
aldonic acids, uronic acids, deoxysugars and amino sugars. Monosaccharides can
be linked to each other, or other molecules by glycosidic bonds (C-O-C bonds).
Examples of monosaccharides include glyceraldehyde, glucose, fructose and
mannose.
Sugar acids are formed from the oxidation of the aldose and conversion of
aldehyde into carboxylic acid (for example D glucuronic acids). Sugar alcohols
are formed from the reduction of the carbonyl group to the hydroxyl group
(sorbitol and xylitol). Amino sugars are formed by replacing a hydroxyl group
with an amine group (D-Glucosamine and N-acetylneuraminic acid), whilst
deoxysugars are formed by replacing an OH with an H (Deoxyribose).
Examples of disaccharides include sucrose (glucose and fructose), lactose
(glucose and galactose) and maltose (glucose and glucose). These are formed
through a condensation reaction, R-OH + HO-R' → R-O-R' + H2O, forming a
glycosidic bond (e.g. alpha(1→2) in sucrose).
A homopolysaccharide is composed of a single type of monosaccharide unit.
They are storage forms of food and energy whilst being a structural component
of cells (cell wall has cellulose). Most polysaccharides are insoluble in water
(cotton).
Starch is an example of energy storage used by plants. Starch is a long repeating
chain of alpha D glucose with a chain length of up to 4000 units. It is composed
of a mixture of two major substances.
Amylose and amylopectin have the same composition but a different shape as
amylose is straight chained whilst amylopectin has a branched structure.
Amylose starch forms coils with alpha(1→4) linkages. It is the most common
type of starch and has 200-20,000 glucose units forming a helix as a result of
the bond angles between glucose units.
Amylopectin starch is highly branched, with the glucose residues linked by
alpha(1→4) and alpha(1→6) linkages. Branch points occur about every 12-25
residues along an alpha(1→4) chain.
Glycogen is the energy store of animals; it is stored as granules in the liver
and muscles. It is similar to amylopectin, being highly branched with branch
points occurring about every 8-10 residues along the alpha(1→4) chain.
Glycogen branches like a tree and has a globular shape.
Dextrans are branched-chain polysaccharides of D-glucose found in yeast and
some bacteria. The glucose residues are linked by alpha(1→6) and alpha(1→3)
linkages, and the chains vary in length and extent of branching. Dextran has
applications in biomedical engineering as a biomaterial for tissue-engineered
constructs.
Cellulose is the most abundant polysaccharide, with glucose chains having
beta(1→4) glycosidic linkages resulting in long fibres for plant structure. It
is non-digestible by humans. Chitin has monosaccharide units linked by
beta(1→4) glycosidic bonds, forming a linear polymer with a repeating
disaccharide containing N-acetyl-D-glucosamine. It is a component of
crustacean shells.
Cellulose and chitin have many applications in biomedical engineering. Chitin
has an amine sugar. Chitosan is a natural biopolymer and is commercially
available for cardiac research (first clinical trial). There is batch-to-batch
variation, and this lack of uniformity makes it hard to use; chemical
modifications are performed to make it usable.
Heteropolysaccharides are composed of different types of monosaccharide units
and thus have a complex structure and a variety of functions. They occur
frequently in combination with non-carbohydrate material.
Glycosaminoglycans (GAGs) have repeating disaccharide units which contain a
hexosamine and generally a residue of uronic acid and a sulfate group (sugar
and acid). The major function of GAGs is to form a matrix holding together the
protein components of skin and connective tissue; tendons, cartilage and bone
use GAGs. Hyaluronic acid gives volume to skin, and without it, skin shrinks.
It is used a lot in tissue engineering; it gives volume to the eye and
lubricates, acts as a shock absorber in cartilage and bone, and is used in
fillers. Chondroitin-6-sulfate is necessary in the formation of the
extracellular matrix; it interacts with various growth-active molecules and
plays a role in the CNS and cartilage. Heparan sulfate is an anticoagulant,
acidic complex polysaccharide found on the cell surface and in the
extracellular matrix. Dermatan sulfate is found in a wide variety of tissues
such as the dermis, vascular wall and cornea. It is involved in cell-cell and
cell-matrix interactions as well as anticoagulant activity.
Glycoproteins are conjugated proteins having a covalently linked carbohydrate
component; the carbohydrate may be N- or O-linked glycans. They have an
extracellular location and function, and many proteins are glycoproteins.
Glycoproteins bond covalently with sugar molecules, whereas GAGs are not
covalently attached on the cell in this way.
Proteins and GAGs in the extracellular matrix aggregate to form proteoglycans.
The aggregates have a polyanionic character. They are localised to membranes
in the ER and Golgi and serve as lubricants and supporting elements. In a
collagen matrix, we have proteoglycans. When cartilage degenerates, the
proteoglycan structures (GAGs) break down. In healthy cartilage, the collagen
matrix is not torn and there are many proteoglycans, but this is not seen in
unhealthy cartilage.
In order to determine the structure of carbohydrates, we must identify the
sugars, the stereochemistry of each sugar, linkage type, ring structure type,
anomeric configuration of each sugar and the sequence of different sugar
residues.
There are many methods for estimating the MW of GAGs, including NMR, gel
permeation chromatography, electrophoresis (agarose and polyacrylamide gels)
and mass spectrometry (tough with small fragments and isomers). Conventional
glycoconjugate characterisation involves isolating the individual
glycoconjugate, detaching and purifying it, and finally characterising it
structurally.
High-throughput technologies include oligosaccharide microarrays for
carbohydrate-binding proteins. In this approach, oligosaccharides are
robotically micro-printed onto a microarray; whole-cell binding and
carbohydrate-binding proteins then lead to ligand profiling, identification of
novel protein-carbohydrate interactions and functional glycomics. Another
method is arrays of known carbohydrate-binding proteins, where the proteins
are printed onto the array followed by whole-cell binding, leading to the same
outcomes.
Adjuvant = something added to enhance treatment. Carbohydrates are also used
for targeted drug delivery, glycan arrays, metabolic labelling of
glyco-structures, carbohydrate-derived drugs and carbohydrate-based vaccines.
Cells recognise implants better if coated with glycoproteins, so the carbon
surface of a biomedical device can be functionalised with synthetic
carbohydrates.
Glyconanoparticles are also used in biomedicine, bio-amplification, bio-labels
etc., as they stop the transmigration of tumours through endothelial cells.
Carbon nanotubes aren't very good biologically but are very good chemically,
so the surface is coated with glycosylated polymers. Hydrogels mimic the
extracellular matrix and are thus used to help understand the biology of the
stem cell micro-environment.
In osteoarthritis, bone is exposed, causing the bones to rub against each
other. Cartilage can't regenerate itself as it is avascular, nor does it have
neurons. Gellan gum is a polysaccharide from microbial fermentation of
Sphingomonas paucimobilis and is non-toxic. It is used as an injectable system
in a minimally invasive manner; it has ophthalmological applications and is
structurally similar to native cartilage.

Lecture 6: Gene Therapy


General structure of a gene
Gene therapy is used to fix problems with genetic structure (predominantly
genes stored in chromosomes in the nucleus of all cells). We also have some
DNA in the mitochondria (37 genes, from the mother only); this is because
mitochondria were once prokaryotes which were engulfed by a eukaryote and then
began to live in a symbiotic relationship. In roughly one in every 10 billion
cell divisions, a mutation can occur; depending on their location and nature,
mutations can cause cancer or other diseases.
In chromosomes, there are regions called promoters, followed by exons and
introns and then finally a stop codon. Proteins are made from this
information: splicing removes the introns, leaving an mRNA sequence of exons.
Defects can occur in any part of the gene. A mutation in the promoter affects
where and how the gene is turned on; if a gene regulated by a hormone (like
testosterone) has a damaged promoter, there can be resistance to that hormone,
and in stem cells it can lead to an inability to regulate that cell function.
An exon mutation gives a protein of a different sequence, while mutations in
introns can cause errors in splicing of the gene.
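Splicing (introns removed, exons joined into mRNA) can be sketched as below.
The sequence and the exon coordinates are hypothetical, chosen purely for
illustration.

```python
# Splicing sketch: keep the exon regions of the pre-mRNA and drop the introns.
def splice(pre_mrna: str, exons: list[tuple[int, int]]) -> str:
    """exons: (start, end) positions of each exon, in order, 5' to 3'."""
    return "".join(pre_mrna[start:end] for start, end in exons)

# hypothetical pre-mRNA: exon 1 at [0, 5), an intron at [5, 11), exon 2 at [11, 16)
pre_mrna = "AUGGCGUAAGUUUUAA"
print(splice(pre_mrna, [(0, 5), (11, 16)]))  # mature mRNA: exons only
```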
Different forms of DNA mutation that can cause disease, (dominant, recessive,
mitochondrial)
With autosomal recessive inheritance (such as cystic fibrosis), one mutated
copy of the gene is found in each parent, making both of them carriers.
However, with autosomal dominant inheritance (such as Huntington disease), one
mutation is enough to affect the individual.
With X-linked recessive inheritance (like haemophilia), males only require one
mutation to have the illness, but females need it in both copies. With
X-linked dominant inheritance, one copy of the gene in either sex can cause
the disorder.
Mitochondrial DNA is solely dependent on the mother and thus if the mother is
affected, the children will receive it and if the father is affected, then the children
will not be affected.
Different strategies for repairing or replacing mutated genes
Gene therapy is the use of genetic material to change gene expression to target
cells, tissues and organisms to treat disease. It is based on the growing
understanding of how to manipulate the genetic information in humans, animals,
bacteria and viruses. Also, it is reliant upon understanding the role of genetic
variation and mutation in human disease, the ability of viruses to hijack the cell
machinery for viral protein manufacture, the ability for a virus to incorporate its
own DNA into the host cell. The first clinical trial was undertaken in 1989 but no
gene therapy products have been approved by the FDA.
Strategies include embryonic selection with IVF techniques, embryo repair,
nuclear or mitochondrial DNA, stem cell and mature cell DNA repair. Applications
include treating diseases due to inherited mutations (e.g. haemophilia, muscular
dystrophy, cystic fibrosis, SCID syndromes etc.), treating cancer (P53 repair and
targeted induction of cell death), treating acquired diseases using local gene
delivery to provide release of therapeutic proteins (e.g. revascularisation in heart
disease with VEGF and the suppression of inflammatory arthritis with TNF
antagonists) as well as vaccines, which enable enhanced expression and
presentation of antigens to the human immune system to elucidate effective
immune responses and memory.

Difference between germline (inherited mutations) and somatic mutations


Somatic mutations (the most common) occur after conception and throughout
life. These occur in cells throughout the body and can lead to diseases such
as cancer: a typical cancer is the result of a series of somatic mutations
that progressively lead to loss of regulation of cell division and escape from
the body's surveillance mechanisms. Germline mutations occur in a patient's
forebears, are passed on in sperm or ovum, and are present in all cells. For
example, a mutation making a receptor active all the time means the mutated
cell divides more and more, leading to cancer.
Gene Mutations
Missense
Missense mutations occur when a point mutation changes a single nucleotide,
leading to a different amino acid code and, as such, a different protein.
Nonsense
These are a specific example of point mutations where the change creates a
stop codon, so the polypeptide is prematurely terminated.
Recombination
These mutations occur when there are problems with the recombination process
between chromosomes. Examples include crossover events, where similar genes
recombine so that part of one gene attaches to part of another, as in the
Philadelphia translocation between chromosomes 9 and 22 (Chronic Myelogenous
Leukaemia). This creates a new gene (a mixture of two genes/chiasma).
Duplication
This, as the name suggests, occurs when a section of DNA is duplicated.
Deletion
These mutations cause the loss of genes due to the recombination effect or
misfolding. Frameshift mutations refer to the insertion or deletion of one or
more nucleotide pairs: the reading frame shifts, so the downstream codons code
for a random amino acid sequence which often runs on for longer than it
should. This can be highly dysfunctional and have bad effects on the
properties of the protein.
Insertion
This is also an example of a frameshift mutation.
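The difference between a single-base substitution and a frameshift insertion
can be seen by reading codons in threes. The sequences below are illustrative:
UUU → UUA changes Phe to Leu (missense), while the inserted G shifts every
downstream codon, here creating a premature UAA stop.

```python
# Read a sequence in codons of three bases (any trailing partial codon is dropped).
def codons(seq: str) -> list[str]:
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

normal = "AUGUUUAAAGGC"
missense = "AUGUUAAAAGGC"       # one base substituted: only one codon differs
frameshift = "AUGGUUUAAAGGC"    # one base inserted: every later codon shifts

print(codons(normal))      # ['AUG', 'UUU', 'AAA', 'GGC']
print(codons(missense))    # ['AUG', 'UUA', 'AAA', 'GGC']
print(codons(frameshift))  # ['AUG', 'GUU', 'UAA', 'AGG'] -- UAA is a stop codon
```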
Genetic diseases
Congenital
Defective Factor VIII (in haemophiliacs) can be replaced by genetically
modifying haematopoietic cells (which make blood cells).
Cancer related
Can we change the genetic structure of cancers so they become susceptible to
the body's control and then die? P53 is commonly mutated in cancer, stopping
the cells from dying normally; removing/repairing this mutation means they die
normally.
Mitochondrial DNA
We know the human gene sequence and can completely decode it within 24 hours,
so we can see what gene mutation may cause a given disease, and we know how
mutations cause disease. For mitochondrial DNA repair, a donor mother gives an
egg; a needle is inserted to remove the nuclear DNA from that egg, and the
nucleus from the egg with mutated mitochondrial DNA is inserted into it. That
way the embryo has the donor mother's mitochondrial DNA and the mother and
father's nuclear DNA.
Dominant, recessive, autosomal and X-linked inheritance
Haemophilia is a blood-clotting problem (Factor VIII). It is sex-linked, as
this gene is on the X chromosome: women have two X chromosomes and men have
one. Women can use the other X chromosome to produce enough Factor VIII, but
men only have one, so they get haemophilia. In the royal family, a spontaneous
germline mutation led to offspring inheriting it.
A dominant mutation (one copy → disease) occurs, for example, in collagen: in
bones, a mutation in one copy that makes a collagen which doesn't fold
normally leads to brittle bone disease. More common is recessive inheritance
(one good copy is fine, but two bad copies give disease); one copy = carrier.
If both husband and wife are carriers, on average 2 of 4 children are
carriers, 1 has the disease and 1 is unaffected. If dominant, 50% have it and
50% don't.
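The carrier-cross ratios above can be checked with a tiny Punnett-square
sketch ('A' is the normal allele and 'a' the mutated one; the allele symbols
are illustrative):

```python
# Punnett square for one gene: pair every allele of parent 1 with every
# allele of parent 2 and count the offspring genotypes.
from collections import Counter

def cross(parent1: str, parent2: str) -> Counter:
    return Counter("".join(sorted(a + b)) for a in parent1 for b in parent2)

print(cross("Aa", "Aa"))  # both carriers: 1 AA : 2 Aa : 1 aa (1/4 affected if recessive)
print(cross("Aa", "aa"))  # one heterozygous parent: half the children inherit 'A'
```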
Gene therapy strategies
Embryonic selection
Embryonic repair, nuclear or mitochondrial DNA (donor mother providing DNA
for mitochondria)
Stem cell repair (haemophiliacs)
In vivo repair (most examples)
Gene delivery vectors
Viruses
These offer high-efficiency delivery to cells in the body. The
gamma-retrovirus has high-efficiency integration and stable gene transfer, but
it requires cell division and can cause cancer. It can be used as a vaccine
for tumours. Other examples include the adenovirus and non-viral means, but
these run the risk of succumbing to innate immunity. Despite this, the main
vector used in gene therapy clinical trials is the adenovirus, followed
closely by the retrovirus and then naked DNA.
Producing viral vectors is complex and the process includes selecting a virus
depending on its application and gene size. The viral genome is then modified to
make them incapable of reproduction and then the therapeutic gene is
incorporated with a strong promoter. A packaging cell line is prepared (and
helper virus if required) to provide the missing factors for the viral encapsulation
of the therapeutic genes and then viral vector particles are manufactured in the
packaging cell line. These particles are then harvested, purified and tested for
activity, non-infectivity, sterility, purity and safety. In summary, we choose
the vector, load the DNA correctly into the virus and then make enough virus
particles to treat human cells. We want to modify the virus so it cannot grow
itself and make loads of new viruses; it needs help to reproduce, so we make a
special packaging cell line.

A strong specific immune response takes 2 weeks to develop and this is too slow
to deal with acute viral infections. As such, the body has innate mechanisms to
fight viruses and other invaders. When viruses infect cells, they release cytokines
(distress signals). Also, cell surface receptors on cells can detect bacterial and
viral proteins, sugars and lipids leading to cytokine release. The activation of
these pathways leads to immune cell migration to sites of infection and
activation, and to the death of infected cells. These effects are associated with
inflammation which can even cause death or other diseases.
Adaptive immunity can be humoral (antibodies), which bind to previously seen
viruses and bacteria, targeting them for inactivation and clearance; this is a
problem for the repeated delivery of viral vectors such as adenovirus.
Cell-mediated immunity (T cells) detects cells carrying a foreign antigen and
then kills them. If this is done to the cells receiving the gene therapy, it
can destroy them, opposing sustained production of therapeutic proteins.
naked DNA
This has a low efficiency but avoids many virally related immune responses.
Genetically modified adult stem cells
These manipulations are conducted outside the body, with the potential to
select cells to reinject into the patient.
Inducible pluripotent stem cells (iPSC)
These have a high potential for repair, but the issue is that a complex series
of gene manipulations is needed to convert adult, mature cells into stem cells
by gene activation, and then additional changes are required to repair
mutations.
Gene therapy translation to commercial products
Success to date
In terms of the numbers of approved gene therapy trials, there has been a
relatively steady rise, and currently there are over 100 approved trials
worldwide. By disease, cancer is the main disease addressed, followed by
cardiovascular diseases. Look up vaccine gene therapy. It is easier to deal
with cancer-based gene therapy as it is DNA-based anyway.
Challenges
It is hard to treat enough cells to cause therapeutic benefit: how do we get
enough DNA into enough cells? We try to use viruses, as their aim is to get
into cells, take over the DNA machinery, manufacture and reproduce, and then
escape. If we modify them so they cannot replicate themselves, they will still
put DNA into heaps of cells. Injecting straight DNA works a bit, but it gives
only a short-term burst of protein; it needs a lot of DNA and not many cells
change. Another approach is taking cells out, genetically modifying them and
putting them back. This is done with stem cells in the hope they'll regenerate
organs with normal function (not damaged due to mutation). We can take fat or
skin cells and modify them to make pluripotent stem cells; the aim is to
repair the DNA, culture the cells and put them back as muscle etc., e.g. into
a child with muscular dystrophy, so they then create muscle cells resistant to
the disease.
These approaches tend to have a low efficiency, as they do not repair many
cells, and they can also lead to the induction of cancer. Generally, we need
safer, more effective delivery systems and better design of therapeutic
constructs to provide prolonged activity.
A case study is Jesse Gelsinger, who suffered from OTCD (which causes problems
with lipid breakdown and ammonia) and signed up for a therapy consisting of an
infusion of corrective genes encased in a dose of weakened cold virus
(adenovirus). The large dose of infused adenovirus activated the innate immune
system, elevating IL-6 (an inflammatory cytokine) and causing a systemic
inflammatory response which damaged his vital organs, leading to lung damage
and then death.
Gene therapy is somewhat easier to use in children with X-SCID, as they don't
have an immune response. The challenge is to achieve lifelong change (not just
a few months).
There is one approved gene therapy in Europe (Glybera) which is used to treat
patients with LPLD which causes severe pancreatic attacks. The aim is to restore
the enzyme activity which processes fat particles. The product contains an
engineered copy of the human gene packaged with a tissue-specific promoter in
a non-replication AAV1 vector with an affinity for muscle cells.
X-SCID children have immune deficiency and lack T cells. This can be
successfully treated with a bone marrow transplant if a suitable compatible
donor is found; a transplant from a non-compatible donor carries a high risk
of graft-versus-host disease. Children with no compatible donor have been
successfully treated by removing bone marrow, genetically adding a functional
copy of the mutated gene to the marrow cells and returning them to the
patient. This has an 85% rate of curing the deficiency. Some children who
received the RSV viral vector developed leukaemia, and one died, but 4 were
cured; in two of these cases cancer was induced, and the RSV vector was thus
found to increase the risk of cancer by disrupting the function of normal
genes.
Lecture 7:
The Human Genome Project (HGP) aims to determine the sequence of base pairs
which comprise human DNA, as well as to map the human genome. This allows
researchers to observe the role genes play in health and disease and thus
better treat, diagnose and possibly prevent disease.
Initially it was hard to determine a DNA sequence (it took 10 years to
sequence the human genome). Around 27,000 genes have currently been identified
(99.99% of the genome). The big project now is to find the function of each
gene, not just its sequence: how gene → mRNA and then mRNA → protein, which
makes up the physical components and leads to a given function. Researchers
also aim to find out why different cells have different functions even though
they contain the same genes.
To identify the function of a gene, scientists examine how up-regulation of the gene affects function. Upregulation and downregulation refer to an increase and
decrease respectively, of a given cellular component (often receptors). As such,
these change the cell function and so we can find out what the initial function
was. Conversely, blocking it (downregulation) and then examining the role it
plays tells us the function.
Changing the gene on DNA changes everything after that. Changing mRNA, we
affect translation into protein and then function. We can also directly change
protein (3 basic ways of gene manipulation).
At genomic level: The transcription factor is a protein which binds to specific DNA
sequence and as such, is able to control the rate of transcription of genetic
information from the DNA to mRNA. It gives the signal (sequence specific
protein) to start or repress transcription. The promoter is the region where transcription is initiated and is located near the transcription start sites of genes, on the same strand and upstream on the DNA. A transcription factor can act as either an activator or a repressor. For
gene manipulation, we can transfer the known gene promoter and transcription
factor into unknown gene. As such, we are able to control the rate at which the
transcription occurs and thus manipulate the gene.
As we age, we have fewer stem cells and so we take longer to repair ourselves. As we develop, four key genes switch off, so fibroblasts can't revert into stem cells. By turning these genes back on, we can force skin cells to become like stem cells, which can then be used when we get injured. These are called iPS cells (induced pluripotent stem cells), as somatic cells are reprogrammed to enter an embryonic cell-like state.
RNAi isn't on the genomic level, but on the mRNA level. RNA interference technology (RNAi) is also called PTGS, for Post-Transcriptional Gene Silencing. It is the process by which dsRNA silences gene expression: the dsRNA triggers breakdown of the mRNA for a specific gene, degrading the mRNA and inhibiting translation, and thus stops the production of protein. dsRNA refers to double-stranded RNA, which tends to be longer than 30 nucleotides. miRNA is microRNA (21-25 nucleotides) and is encoded by endogenous/internal genes. siRNA is small-interfering RNA of the same size but of exogenous origin. Sense RNA is a single strand of RNA with essentially the same code as the mRNA (except that T is replaced by U), whilst antisense RNA is complementary to the mRNA. When sense RNA and antisense RNA interact, they form dsRNA, and this was used to silence gene expression.
No mRNA (it is chopped up) means you can't make protein as you don't have the code, so production is stopped. miRNA is natural as it is endogenous (a sensed signal starts the miRNA mechanism), but siRNA is artificial and used for manipulation. The mRNA used for protein is the sense RNA; the antisense is its complement. mRNA is single-stranded, so if we make antisense RNA and combine it with the sense RNA, we form dsRNA. Once this is formed, the mRNA can no longer act as a template (due to the double strand), so it cannot make protein (it is silenced).
For cancer research, the par-1 gene was used, as blocking it can have therapeutic impact since it is responsible for proliferation. In experiments both sense and antisense RNA appeared to work, although in theory only the antisense should.
Mello reported that sense RNAs mimic the antisense phenotype (physical
characteristics) whilst Fire targeted genes using antisense constructs from
transgenes (genes transferred from one organism to another) and noted that
sense constructs could have a silencing effect.
They believed that the issue was contamination: that sense RNA was contaminated with antisense and vice versa. Fire and Mello worked together to purify each strand and then make dsRNA. They found that blocking comes only from dsRNA, not from sense or antisense RNA on their own. This was an unexpected result.
Penicillin is a classic example of such unexpected results: bacteria decreased in one culture, suggesting something there was killing them, and this was how penicillin was found. Viagra was developed for the heart but its real effects were found later. The same happened with the pacemaker and pap smears.
There are two phases in RNAi: initiation and execution. Initiation involves the generation of mature siRNA, and execution involves the silencing of the target gene through degradation of its mRNA and inhibition of translation.
Steps in RNAi technology: First, make dsRNA and then add it into the cells, usually using vectors such as viruses, artificial means or short hairpin RNA (shRNA). Once the dsRNA enters, it is recognised by the enzyme Dicer, which binds to it and chops it into siRNA. The siRNA duplex is then incorporated into the RNA-induced silencing complex (RISC), which unwinds the duplex and removes the passenger strand (RISC activation). The remaining antisense (guide) strand targets the mRNA, which is then degraded at sites not bound by the siRNA.
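The base-pairing logic behind these steps can be sketched in a few lines of code (a toy model with made-up sequences, not a bioinformatics tool): the guide strand left in RISC finds its target because the target site reads as the reverse complement of the guide.

```python
# Toy sketch of RNAi target recognition (illustrative only, with made-up
# sequences): the siRNA guide (antisense) strand base-pairs with a
# complementary site on the target mRNA, which we locate by searching for
# the reverse complement of the guide.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Reverse complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def find_target_site(mrna, guide):
    """Index where the guide strand would base-pair on the mRNA, or -1."""
    # The guide binds antiparallel, so the site it pairs with reads as the
    # reverse complement of the guide sequence.
    return mrna.find(reverse_complement(guide))

mrna = "AUGGCUACGGAUCCGUUAGCUAAGCUUGAA"  # hypothetical 30-nt mRNA
guide = reverse_complement(mrna[8:29])   # a 21-nt guide against one site
print(find_target_site(mrna, guide))     # 8: RISC would cleave near here
```

This also makes the off-target problem discussed below concrete: any other transcript containing a close match to the same 21-nt site would be found and silenced too.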
This won the Nobel prize because it can be used to identify gene function. We
can transfer a genetic sequence into the cell and observe how dsRNA affects cell
function (as it silences a certain gene) and thus allows us to find gene function.
Another application is therapeutic use, for example in cancer, where genes are out of control and cells proliferate too much.
If a gene is expressed at a higher level in cancer cells than in normal cells, it may be what causes them to proliferate. Researchers silenced the gene and noted the impact on cell function: cancer cells grow much more slowly when it is silenced. They then induced tumour growth in mice and treated it with RNAi, finding that tumour growth was much smaller, so RNAi has the potential to be a cancer killer.
It has a lot of potential but there are two main limitations. One is delivery: using a virus as a vector to introduce it into the cell risks the virus activating something, while just adding the naked sequence gives only a transient impact (about a week), whereas it must be stable in the cell for prolonged activity without being degraded. The second is off-target inhibition: the siRNA sequence is much shorter than our genome, and some genes are very similar, so we can inhibit genes other than the ones we want due to off-target binding (non-specific interaction).
The CRISPR-Cas9 system is very new and, as research into it increases, so does funding. It is the mechanism of adaptive immunity in bacteria used to defend against foreign genetic material. Just as our B cells produce antibodies to kill viruses and work with T cells, bacteria also have an immune system: when infected by a phage/virus, they can trigger it to chop up the invading viral DNA.
A cell is transfected with a DNA plasmid which encodes both the Cas9 protein and a guide RNA sequence matching the gene of interest (crRNA). This allows the protein to identify the corresponding DNA sequence in the host cell and then cut both strands. This is CRISPR-Cas9 in action: it breaks the double strand of the DNA.
When the cell tries to repair the break, it can do so in two major ways. The first is homologous recombination (HR), which uses the sister chromosome as a source of information/template. The second is non-homologous end-joining (NHEJ), which doesn't use a template.
HR creates knock-ins, where there is a direct substitution from the donor template, fused from the template into the break. It is inefficient but more accurate. On the other hand, NHEJ creates knock-outs, as it deletes or inserts bases (indels), which makes it more prone to error. If we supply the right sequence as a template, we can repair the gene mutation.
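The target-finding step and the two repair outcomes can be sketched as code (a deliberately simplified toy: DNA as one strand of plain text, exact guide matching, an SpCas9-style "NGG" PAM and a blunt cut 3 bp upstream of the PAM are all illustrative assumptions, and the sequences are made up):

```python
# Toy sketch of CRISPR-Cas9 target finding and the two repair outcomes.

def find_cut_site(dna, guide):
    """Index of the double-strand break, or -1 if no protospacer+PAM found."""
    glen = len(guide)
    for i in range(len(dna) - glen - 2):
        protospacer = dna[i:i + glen]
        pam = dna[i + glen:i + glen + 3]
        if protospacer == guide and pam[1:] == "GG":  # "NGG" PAM
            return i + glen - 3  # cut 3 bp on the 5' side of the PAM
    return -1

def nhej_delete(dna, cut, indel=2):
    """Error-prone NHEJ modelled as a small deletion at the cut (knock-out)."""
    return dna[:cut] + dna[cut + indel:]

def hr_repair(dna, cut, guide_len, template):
    """HR modelled as substituting the target region from a donor template (knock-in)."""
    start = cut - (guide_len - 3)
    return dna[:start] + template + dna[start + guide_len:]

dna = "TTACGGATCATGCCGTTAAGGAATC"
guide = "GATCATGCCGTTA"           # hypothetical short guide for brevity
cut = find_cut_site(dna, guide)   # the "AGG" PAM follows the protospacer
print(cut, nhej_delete(dna, cut))
```

The contrast in the notes shows up directly: `nhej_delete` changes the sequence length (an indel, hence a knock-out), while `hr_repair` swaps in the donor template at the same position (a knock-in).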
Applications of this can be seen on the farm, where there are always problems: GM crops can resist viruses etc. They can be used in ecosystems to wipe out disease-carrying mosquitoes. Medically, they can be used in gene therapy to edit out disease by deleting or modifying a gene sequence.
Duchenne muscular dystrophy is due to a mutation in the dystrophin gene at exon 23, preventing expression of a functional dystrophin protein. As the cells cannot form the right dystrophin protein, the strands at the muscle are missing. Once the mutated exon 23 is deleted using CRISPR-Cas9, the cells can produce a (partially) functional protein.
Comparing the two,
Bioreactor Design and Function
Briefly discuss the applications for bioreactors in tissue engineering
Bioreactors are necessary in tissue engineering to mimic the tissue microenvironment, improving the efficiency with which specific cell or tissue types can be produced from undifferentiated cells. They allow for the regulation of metabolic homeostasis whilst providing mechanical stimuli and mimicking the 3D cellular organisation. They are also able to provide the appropriate chemicals at the correct concentrations when needed (such as growth factors, cytokines and extracellular matrix components).
With regards to the medical applications: we want to grow cells outside the body and make them think that they're inside the body. By doing this, we can find out how cells work outside the body. For fine detail, we use microscopes.
It can also be used to produce specific cell types for transfusion, or to seed
cell/biomaterial composites which can then be implanted into the body.
Cancer patients rely on white cells to fight off infection; if these are in low numbers, they will not be able to fight it off.
In terms of ex vivo cellular therapies, cells (such as tumour cells, T cells or healthy cells) are harvested from patients and then modified in vitro (out of the body), allowing useful cells to be enriched and harmful cells removed. They can also be culture-expanded in order to get more. Cells are then transplanted back into the patient intravenously or through a biomaterial/cellular composite implant. A key aim is to decrease cost so the therapy can be used by all patients.
This can be seen with blood stem cell transplants, which are life-saving for blood cancers such as leukaemia. Here, chemotherapy is used to kill the marrow and then a blood stem cell transplant is given. Haematopoiesis refers to the production of blood from blood stem cells; in adults, red cells and neutrophils are present in high numbers and concentration and are produced at immense rates. Blood transfusion was the first cell therapy because of its long shelf life: red and white cells are removed and the plasma taken off. Donnall Thomas was responsible for the bone marrow transplant; tissue types are matched to the graft, and it is used to help deal with leukaemia.
Elements of blood (derived from pluripotent stem cells) are all important in the body. Bone marrow is very efficient compared to what it takes to make red cells outside the body. Without neutrophils, we can die from sepsis; they are produced at astronomical rates.
There are many clinical problems which require haematopoietic cells.
Anaemia: This is caused by either blood loss or reduced production of blood, giving a low red blood cell count. Treatment is by blood transfusion, supplied from donors of the same blood group. The blood has a shelf life of 2-3 months but can be contaminated by viruses.
Thrombocytopenia: This is an autoimmune disease (the body's immune system acts against its own healthy cells) which causes disseminated intravascular coagulation (blood clotting in small vessels), leading to spots on the surface of the skin. After a blood stem cell transplant, there is slow engraftment (formation of new blood cells). Treatment includes platelet transfusion, supplied by matched donors. However, platelets have a shelf life of only 1-2 weeks, can be contaminated by viruses, and are expensive as they must be filtered from blood.
Neutropenia: This is caused by slow engraftment after a blood stem cell transplant and results in mucositis, fever, bacteraemia and septicaemia. Treatment is with neutrophils, as well as antibiotics and treatment for shock. Stem cells take up to 4 weeks to differentiate, during which white cells can drop below the safe limit. Mucous membranes break down, so patients can't eat and need a tube. NEED NEUTROPHILS and ANTIBIOTICS, but antibiotics on their own are not good enough. Cord blood transplantation is another possible treatment and is better in that the risk of graft versus host disease is lower (see below).
Immune Deficiency: This is caused by the slow engraftment of lymphocytes
after a blood stem cell transplant, giving rise to opportunistic viral infections
such as CMV pneumonia and chicken pox (zoster). It is treated by antivirals as
well as dealing with the issue of immune reconstitution where the immune
system recovers but then responds to a previously acquired opportunistic
infection with an overwhelming response, making the symptoms worse.
Lymphocytes take months to develop. We always carry latent viruses like chicken pox; when the white cell count drops, they get reactivated and start (like shingles) in one area. The big problem is death from CMV pneumonia.
Graft vs Host Disease: This is caused by blood stem cell transplants from a donor, where the donor lymphocytes reject the patient's tissues. It is prevented by tissue matching (testing for compatibility), and can be treated by immune suppression, as well as mesenchymal stem cell transplant, supplied by MSC expansion technologies.
Such technologies are responsible for the differentiation of blood stem cells into platelets, red blood cells etc. With regards to transplantation, there are autologous transplants (cells from yourself) or allogeneic (from others). With allogeneic transplants, the donor cells can attack the recipient's tissues as foreign (graft versus host disease), and thus the tissues must be matched. Many strategies involve decreasing T cells (as they're responsible), but a recent approach is mesenchymal stem cells (responsible for tissue repair, found in bone marrow and all organs) as they have an immune-suppressing effect.
We can use growth factors with blood stem cells to make all blood cell types in a dish, and can grow colonies of cells in agar culture. Haematopoiesis runs in one direction only (cells can only differentiate, not go back). In vitro, we try to maximise renewal, avoid death and thereby produce mature cells. It is a very complicated system.
Blood stem cells can be sourced from bone marrow, cord blood or mobilised peripheral blood stem cells. We can isolate mesenchymal stem cells from cord blood (they can also be derived from bone and fat). They suppress immune cells and so are used as a treatment, and for tissue regeneration of bone, fat and cartilage as they differentiate into them. In vivo, they're found near vessels.
Mobilised peripheral blood stem cells superseded bone marrow as they engraft much faster: with neutropenia, patients don't have to be kept in for weeks but can be discharged quickly. Blood from people going through chemo was examined and found to contain stem cells that form colonies; after chemo, the bone marrow mobilises stem cells, which then circulate in the blood.
Mesenchymal stem cells can be sourced from bone marrow, the human placenta (cord blood) and other connective tissues. Cord blood is also a rich source of blood stem cells; it is banked from the placenta (babies have loads of stem cells) and then used for transplants. Its biggest problem is that it takes a very long time to engraft.
Pluripotent stem cell lines are sourced from IVF embryos or induced using the patient's own cells, and have the potential to become any cell type in the body. They are used as cell line models to study developmental biology and inherited disease. Embryonic stem cells can allow for the growth of pancreatic islet cells for diabetes, blood stem cells etc. The problem is differentiating them so they are readily accepted in the body; they can become cancerous. With induced pluripotent cells, the genes essentially dial the cell back and make it think it is back at time zero.
Chimeric Antigen Receptor T-cells (CAR-T): The CD19 antigen is present on the malignant B cells, and targeting it can cure the disease. We genetically engineer the part of an antibody which recognises the antigen and fuse it with a T-cell receptor, fooling the T cell into thinking it has bound its natural target; this makes it kill the tumour cell (RESEARCH MORE). It was the first cell therapy licensed (by Novartis) and there is loads of investment in it. Some people with tumour lysis syndrome ended up in ICU; it is like having thousands of cells killed at once. This beats chemo as it is specific, rather than killing healthy cells as well. When used to treat B Cell Acute Lymphoblastic Leukaemia, it was found that 80% of patients responded to chemotherapy whilst the remaining 20% had no prospect of a cure. Of these, 90% have responded to CAR-T cell therapy and currently 60% are in complete remission.
The primary technical challenges for bioreactor design engineers are automation, scalability and the cost of manufacture. If a system is not automated, it becomes expensive to run. The most difficult challenge is scalability: how to scale up by several orders of magnitude to approach the efficiency of bone marrow.
Understand the fundamental requirements for in vitro growth of mammalian cells
The constituents of the culture media include inorganic salts: sodium, potassium and calcium maintain osmotic balance, whilst divalent cations are involved in protein/protein interactions, and EDTA is used to detach cells from culture dishes. Trace elements such as Zn, Cu and Se are necessary, as are sugars, which provide the main cellular energy source (glucose). Amino acids must also be added, along with vitamins (B12, thiamine, riboflavin, biotin and vitamin A are needed for cell proliferation and enzyme function), fatty acids, lipids (often supplied with serum), and proteins and peptides (albumin is needed for detoxification, transferrin for iron transport, plus other growth factors). We can't use bovine serum in humans due to the risk of mad cow disease. The culture medium must be fully defined so that, if there is an issue, we can find out what is causing it.
pH must also be regulated (between 7.2-7.4) using a bicarbonate buffer: H+ + HCO3- ⇌ H2CO3 ⇌ H2O + CO2. Phenol red is added as a pH indicator, providing the culture with its red tint and allowing for visual feedback of pH.
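A rough illustration of how this buffer sets the culture pH is the Henderson-Hasselbalch relation for the CO2/bicarbonate pair (the constants here are standard textbook values, pKa ≈ 6.1 and a CO2 solubility of 0.03 mmol/L per mmHg, assumed rather than taken from the lecture):

```python
# Sketch: culture pH from bicarbonate concentration and incubator CO2,
# via Henderson-Hasselbalch. Constants are assumed textbook values.
import math

def culture_ph(hco3_mmol_per_l, pco2_mmhg):
    """pH = pKa + log10([HCO3-] / (0.03 * pCO2))."""
    return 6.1 + math.log10(hco3_mmol_per_l / (0.03 * pco2_mmhg))

# Raising incubator CO2 lowers the pH; bicarbonate in the medium pulls it back.
print(round(culture_ph(24, 40), 2))  # physiological-like values -> 7.4
```

This is why the incubator CO2 level (next paragraph) and the medium's bicarbonate content have to be chosen together.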
Incubators are needed to maintain temperature at 37 °C, whilst providing CO2 at 5-10% to aid with pH regulation (bicarbonate buffers). Special incubators can control oxygen concentration as well.
The commonly cultured cell types include primary (direct from normal animal
tissue) and transformed cell lines (often tumours). Primary cells tend to have a
limited number of divisions before senescence whilst transformed cell lines can
be maintained indefinitely in culture. These include fibroblasts, embryonic cell
lines and induced pluripotent stem cells. Genetically modified primary cell lines
include CAR-T and anti-HIV modified blood stem cells or T-cells. Primary cells are harder to culture because they aren't immortal. Cancer cells lose normal regulation and grow autonomously, making them easy to grow in culture.
Cryopreservation refers to the ability to store cells (but not entire tissues) for long periods of time by freezing them. Storage is in liquid nitrogen, under conditions that avoid ice crystals and contamination. The freeze process is slow but the thaw is quick.
Cells have pathways which remove free radicals (oxidative stress). When growing close together, cells remove this stress for each other; in culture they don't always survive, due to free-radical stress. Adherent cultures need a charged surface, as cells don't grow on hydrophobic materials: cells grow on a protein coating on the flask/dish (integrins bind onto the protein).
The types of mammalian cell culture are:
Adherent: These anchor to the surface of the culture in order to survive (such
as prostate cancer cells).
Suspension: These mainly grow in suspension (haemopoietic cell lines) such as
many tumours.
Co-Culture: In this, one cell type is needed to support another (such as
fibroblasts and embryonic stem cells). These are usually adherent.
KG1a = acute myeloid leukaemia; very active. Breast cancer: genetically modified with green fluorescent protein incorporated into a histone of the genome (the protein core DNA wraps around), making it easier to track.
Describe the various bioreactor geometries
The main considerations are the regulation of environmental factors, scalability
and ease of implementation.
Static culture is a single-compartment system in which cells are passaged before reaching confluence. They are exposed to oxidative damage and their growth is limited by metabolite supply or waste products. We did static culture in our lab. There are about 1 x 10^9 cells per mL in our body (depending on the organ): the brain has loads, while scars have few because they're basically just collagen.
In terms of the equipment used for cell culture, the hydrophobic polystyrene polymer is modified with -COOH or -NH2 groups. These ionise at neutral pH and aid the adsorption of adhesion proteins from the serum (-nectins).
Gas permeable bags are only suitable for suspension cell cultures and are used for scale-up and clinical applications. They are permeable to O2 and CO2; being gas permeable, there is no need to loosen the lid, which increases efficiency. Cells didn't grow beyond 2 million per mL.
Apply the principles of mass transfer to the design of bioreactors
In essence, the biomass (sink) receives mass from the source via either convection or diffusion. To aid this process, many different bioreactors have been designed. Biomass = biological material which produces or consumes substrates. We need to provide nutrients, remove waste products, and supply or remove other elements/compounds; the problem is how to do this (by diffusion or convection).
Spinner Flasks: These contain an impeller which mixes the culture media and cells. If the cells are bound to macroporous particles, then the media can be filtered from the cells. We agitate cells with a propeller (forced convection increases transfer). Rotating well = rotating barrel (taken from NASA). Wave reactor = gas-permeable bag on a rocker. Column: worked for molecular separation but not for cells. Parallel plates and hollow fibre technology (based on surface area). Macroporous particles are great for recombinant proteins, but for cell therapy we don't want cells bound to a particle, so it's a dead end: macroporous particles can't be infused into the bloodstream (dangerous), although a filter can stop the big ones from going in.
Parallel Plate: Media is continuously passed over the cells, which grow at the base of a flow channel. There is a silicone membrane for gas exchange, as well as automated cell inoculation and harvesting. The Aastrom Replicell used this method but was very expensive and not very effective. This was the first attempt at convection. Silicone is permeable to O2, solving the problem of delivering it to cells. Aastrom = $80 million; the system was automated and computer controlled, but cost $10K and didn't really work. Great engineering but pretty useless clinically: a failed product.
There are also the wave, column and rotating-well bioreactor designs, which are not effective enough.
Hollow Fibre Bioreactor: In this process, the culture media are dialysed against fresh media, with proteins restricted to the intra-capillary space, and the cells are grown inside the fibres. This was tested with cord blood, noting that the cells bind to the surface. Hollow fibre is used in renal dialysis: like the kidney, it sieves molecules below a certain size limit, so small molecules (such as glucose) under about 10 kDa cross the membrane while larger ones do not. The expensive components of the media (serum, albumin, lipids etc.) are of large molecular weight, so only around 500 mL of complete media is needed, with the rest as media base, and cells start to grow at high density. Fibronectin increases adherence for cell lines. A kidney dialyser was used (cheap, at about $50) with 2 m² of membrane, giving a large surface area at low cost and using the dialysis membrane already in it. Cells were grown in the hollow fibres, the media recirculated, and voila. Gambro make these modules, as renal dialysis and blood component therapy (purifying platelets from blood) all use the same equipment. Cells attach and proliferate on the inside of the fibres.
Microfluidics: This deals with the design of bioreactors at the microscale through the use of soft lithography, a way of replicating structures using elastomeric materials (the idea comes from integrated circuits). Here, mass transport is by diffusion and viscous flow. Such bioreactors contain microwells which confine the cells in a given space. This was tested with the growth of KG1a cells over 6 days, demonstrating its efficacy.
With bioreactor design, we note the performance specification (cell type, pH,
oxygen etc.) and in terms of iterative design, we define the geometry of the
bioreactor, apply equations of continuity and diffusion to model performance and
then experiment to verify the model.
Be able to model cell growth kinetics and the consumption or production of a
metabolite
As mammalian cell growth is done through binary division, the growth is
geometric with the cell cycle regulated by environmental factors (apoptosis) at
an average time of greater than 8 hours per cycle.
Each cell type has different requirements, and each application needs different cell numbers. We need to know how to make cells divide and why they die.
The simple model is dN/dt = μ(S)·N, where N is the cell number and μ(S) is the growth rate, which depends on the substrate concentration S. If S (like oxygen) gets below a critical value, the cells don't grow well. For constant μ, the solution is N(t) = N0·e^(μt).
The specific uptake or production rate of a metabolite is the rate of uptake per cell. Mathematically, the specific uptake q equals the uptake rate U (change in mass over time) divided by the number of viable cells, so U(t) = q·N(t). Cumulative consumption is found by integrating U(t) with respect to time.
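These relations can be sketched numerically (a minimal sketch assuming constant μ and constant specific uptake q; the parameter values N0, μ and q below are illustrative assumptions, not lecture data):

```python
# Minimal sketch of the growth and cumulative-uptake relations above,
# assuming a constant growth rate mu and constant specific uptake q.
import math

def cells(n0, mu, t):
    """Exponential growth: N(t) = N0 * exp(mu * t)."""
    return n0 * math.exp(mu * t)

def cumulative_uptake(n0, mu, q, t):
    """Integral of U(t) = q * N(t) from 0 to t: (q * N0 / mu) * (exp(mu * t) - 1)."""
    return q * n0 / mu * (math.exp(mu * t) - 1)

n0 = 1e5               # starting cell number (assumed)
mu = math.log(2) / 24  # a 24 h doubling time expressed as mu in 1/h
q = 2e-10              # mmol of glucose per cell per hour (assumed)

print(round(cells(n0, mu, 48)))          # 48 h = two doublings -> 400000
print(cumulative_uptake(n0, mu, q, 48))  # total glucose consumed (mmol)
```

Because N grows exponentially, most of the cumulative consumption happens late in the culture, which is why feeding schedules matter more near the end.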
High substrate concentrations can be toxic to cells, but low concentrations may slow metabolism, leading to apoptosis. Lactate is a product of glycolysis and accumulates under anaerobic metabolism, which cells tend to favour in culture: the anaerobic oxidation of glucose gives lactate, whereas aerobic metabolism uses the TCA cycle. High concentrations of lactate make the media acidic and thus toxic to cells.
Monod growth kinetics states that the growth rate is given by μ(S) = μm·S/(S + Ks), where μm is the maximum growth rate and Ks is the half-saturation constant.
When calculating the total amount of substrate consumed, one must first measure the rate of consumption/production per cell, then measure the growth rate of the cells, and then integrate the total consumption with respect to time, accounting for both the growth rate and the per-cell consumption rate.
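The procedure above can be mimicked numerically by coupling Monod growth to substrate consumption and stepping forward in time (a simple Euler sketch; all parameter values are illustrative assumptions, not lecture data):

```python
# Numerical sketch: couple Monod growth, mu(S) = mu_max * S / (S + Ks),
# to substrate consumption dS/dt = -q * N, integrated with Euler steps.

def simulate(n0, s0, mu_max, ks, q, hours, dt=0.01):
    """Integrate dN/dt = mu(S) * N and dS/dt = -q * N; return final (N, S)."""
    n, s = n0, s0
    for _ in range(int(hours / dt)):
        mu = mu_max * s / (s + ks)    # Monod growth rate at current S
        n += mu * n * dt
        s = max(0.0, s - q * n * dt)  # substrate cannot go negative
    return n, s

n, s = simulate(n0=1e5, s0=5.0, mu_max=0.03, ks=0.5, q=2e-10, hours=100)
print(n, s)  # as S falls toward Ks, mu drops and growth self-limits
```

This makes the earlier point about critical substrate levels concrete: once S approaches Ks, μ collapses and the culture stops growing even though cells are still alive.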
The main problem with oxygen is its very low solubility; hence the need for circulation and red blood cells in the body. In the cell, oxygen is metabolised by mitochondria, producing ATP. If the concentration is low, the flux is limited. We need to bring the oxygen supply closer to the cells (gas-permeable bags and microfluidics are good for this).
Oxygen in silicone rubber and in water has basically the same diffusion coefficient, but its solubility in silicone rubber is 6-7x higher, as the rubber is hydrophobic and porous (a general rule).
The boundary layer is a resistance to diffusion, and the membrane then adds a resistance in series. The biomass has a consumption rate equal to the product of the specific substrate uptake and the cell concentration, which must be matched by the diffusive flux across the system.
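The series-resistance picture can be written down directly: at steady state the flux through each layer is the same, so the overall flux is the concentration difference divided by the summed resistances L/D. This sketch ignores solubility partitioning between water and silicone (which, as noted above, actually matters for silicone), and all values are illustrative assumptions:

```python
# Sketch of steady-state flux through diffusion resistances in series
# (boundary layer + membrane). Illustrative values; partitioning ignored.

def flux(delta_c, layers):
    """Flux J = delta_C / sum(L_i / D_i) per unit area.

    delta_c: concentration difference across all layers (mol/m^3)
    layers:  (thickness in m, diffusivity in m^2/s) tuples in series
    """
    resistance = sum(thickness / diffusivity for thickness, diffusivity in layers)
    return delta_c / resistance  # mol / (m^2 * s)

boundary_layer = (100e-6, 2e-9)  # ~100 um unstirred water layer (assumed)
membrane = (50e-6, 2e-9)         # thin membrane, D taken similar to water

print(flux(0.2, [boundary_layer, membrane]))
```

Adding a layer in series always lowers the flux, while thinning the boundary layer (stirring, or the short distances in microfluidics) raises it, which is the design logic behind the reactors above.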
Microfluidics makes it smaller and more efficient (less clunky). All about
miniaturisation. Aim to use it on a large scale.
Bioethics
In vivo research refers to studies done in animals; in vitro means "in glass" (like cell cultures). In vivo studies raise the ethical issues of pain, stress, fear, environmental deprivation and isolation felt by the animals. The university has its own animal ethics committee and all studies must comply with the approved protocol.
When addressing ethical issues in research, there are stringent guidelines to be
met. The use of animals is regulated by the Animal Welfare Committees and all
animal studies must be conducted strictly according to an approved protocol.
Animal care and usage is monitored with outcomes reported.
The questions to ask regarding whether an animal study should be done include whether it can be done another way (we always aim to use computer or in vitro methods before animal research), whether the results of the study will be scientifically valuable or have significant value, whether they justify the potential pain and suffering of the animal, and whether the investigators have the skill, knowledge and facilities to ethically conduct the study.
The investigator must ensure that all facets of animal care and use meet the
requirements of the Australian Code of Practice for the Care and Use of Animals
for Scientific Purposes, including a responsibility to protect and promote the
welfare of the animals. Animals can't defend themselves, so we (the government) take responsibility through legislation on animal ethics. Researchers and local animal staff need to take responsibility for monitoring what's going on.
All of these groups need to follow the principles of the 3 Rs (will be in exam), as the code of practice embodies the principles of Reduction, Replacement and Refinement of animal use.
When planning a study, ethical approval must be obtained from the Institutional
Animal Care and Use Committee. This includes all planned procedures and
amendments for any change in protocols. The more invasive procedures receive
higher scrutiny and need more rigorous justification. Appropriate and minimal
numbers of animals should be used.
We should also aim to prevent pain and suffering pre-emptively by planning to
use appropriate anaesthesia, analgesia. We should kill animals humanely, as well
as plan euthanasia for animals showing distress. Endpoints should be created
such that they precede the induction of distress. Environmental enrichment
should also be provided for the animals.
The animals should be monitored frequently for signs of pain and distress, as
well as when anaesthetised for temperature, respiration and recovery. They
should be checked for changes in appearance, behaviour (eating, drinking and
grooming), as well as for weight loss. They should be checked to ensure they do not respond adversely to anaesthesia or analgesia, or else the experiment should be suspended.
Plan study: Give background on why we need to do it and what we need to do it. Describe the power calculation and give a flowchart of the exact steps to be taken throughout, along with approved standard operating procedures for everything done to an animal (from weighing to drawing blood/surgery). Tactics to minimise pain must be included in the submission to the ethics committee. Monitoring sheets help us decide whether or not to euthanise an animal, based on criteria such as excessive weight loss (15%), the quality of their coat and eyes, consistency of fluids, behaviour etc. We also check whether animals can be used for other experiments or whether they've already been used in another experiment. To cause death as part of an experiment (giving an animal a brain tumour etc.), you have to make sure that you don't cause pain; this is called a lifespan experiment, as the animals can be killed as they reach the end of their lifespan.
USyd has IRMA (Integrated Research Management Application). It is the system used for managing grants and contracts, collecting and managing information about research at the university, as well as managing human and animal ethics. The data in IRMA is maintained by the research portfolio, and all research is managed in terms of ethics.
The advantage of in vivo work is integrated whole-body physiology; for example, optimal dosing times (pharmacokinetics) can't be found in vitro. The problem is that a drug's effect may act via, say, a hormone pathway, and thus may not reflect humans.
In vivo tools include genetic manipulation (knockout of a gene to produce no expression, or transgenic overexpression). Disease induction includes dietary, metabolic or hormonal perturbations.
Vitamin D receptor knockout makes mice turn white and, despite growing and surviving, they develop severe hypocalcaemia (low calcium levels in the blood). The mice develop rickets, which can be cured by a high-Ca diet but not by vitamin D.
Pharmacokinetics: done to make sure the drug doesn't affect something else in humans. Animals are given a drug (orally, intravenously or subcutaneously) and time-dependent changes in drug concentration are determined by frequent timed blood sampling. The kinetics of absorption can be calculated from the concentration curves. Drugs are usually tested on monkeys before humans.
Toxicology studies are used less now as ethics review is very rigorous. All organs are examined, usually in rats plus a larger species such as rabbits or dogs. Animals receive increasing doses of the drug, and dosing continues until the maximum planned dose is reached (often causing death). Post euthanasia, animals are examined to identify tissue pathologies; these findings then guide dosing in humans, who are monitored for adverse effects during clinical trials.
The ultimate in vivo methods involve human studies. Phase 1 studies test for
safety in human subjects whilst Phase 2 studies test for safety and biological
activity in human subjects. Finally, Phase 3 studies test for safety and efficacy in
a large human population.
With human clinical trials, young healthy volunteers tend to be used (except with
chemo where subjects have the disease to be treated) where they sequentially
receive increasing doses of the drug and are continuously monitored for adverse
responses. At the maximum planned dose, the dosing is stopped (or earlier if
toxicity is observed).
Checklist: Students should understand the ethical issues in using animals. They
should understand the questions researchers should ask themselves before
undertaking animal research. The principles of the three Rs; Replacement,
Reduction and Refinement, in animal experimentation should be understood. The
students should understand the concept of using animals to model human
diseases and the advantages and disadvantages of these animal models.
Stem Cell Technology
Stem cell research has expanded hugely as funding has increased. Stem cells are unspecialised and have the capacity to self-renew (copy) to make additional stem cells by mitosis, or to differentiate into more mature cell types (specialisation). They are activated upon injury or disease.
Potency refers to the range of cell types a stem cell can give rise to. Totipotent cells can form all cell types, including extra-embryonic (placental) cells. Pluripotent cells can give rise to all cells that make up the body, but not the extra-embryonic tissues. Multipotent stem cells can give rise to multiple cell types, but all within a particular tissue, organ or physiological system; these can be very specific as they usually sit within one organ system.
Embryonic stem cells: after fertilisation, the cell divides (days 1-3). Over the next few days the cells come together and form a blastocyst (a hollow ball of 50-100 cells). Cells at this stage are pluripotent (can differentiate into ~200 cell types), which is why they are important.
In 1998, human ESC were first derived. They can be obtained from IVF embryos, but consent is required because extracting the cells from the blastocyst destroys the embryo. Some companies isolate these cells and supply them to researchers. This is ethically contested because the embryos could become babies, but are destroyed when used. Embryonic stem cells are pluripotent (and thus can differentiate into many cell types), proliferate fast and are easy to harvest.
Tissue-specific stem cells (adult stem cells) differentiate into whatever is needed in that area; for example, neural stem cells produce neurons when needed. Ethics requirements are less stringent than for embryonic stem cells. Examples include neural stem cells, dental pulp stem cells, haematopoietic stem cells and MSC.
Haematopoietic stem cells are found in bone marrow and blood. These are
capable of producing all cells which make up the blood and immune system but
have to be used straight away (max is 21 days).
MSC (mesenchymal stem cells): adult stem cells found in bone marrow, skin and fat tissue. They are generally adherent, elongated cells and are used to treat myocardial infarction (heart attack), peripheral arterial disease (narrowing of the arteries that restricts blood flow), spinal cord injury and skin wounds.
Neural Stem Cells are responsible for repairing myelin in the brain. They ensure a
life-long contribution of new neurons to the olfactory bulb.
Stem Cells are important as they allow us to understand tissue growth and
development, stem cell differentiation and the role of genes and proteins. They
also allow us to understand cell signalling, the role of its microenvironment as
well as the role of stem cell secretory molecules. They can be used to treat
various diseases and test the efficacy of drugs. Finally, they can be used in tissue
repair and regeneration. We can use media from cultured adipose cells and put it
into other cells to see if we actually need stem cells or just the molecules which
are secreted.
Stem cell niche refers to the microenvironment around the stem cell. Stem cells respond differently to different cues, and support cells help the stem cell perform its role. Work is being done to reproduce these cues in vitro: a material is used as a platform and conditions are varied to see how the cells perform. There are structural, physical and metabolic cues, as well as soluble protein cues. With an artificial stem cell niche, we aim to understand which cues drive stem cell fate decisions, to what extent these cues influence stem cell function, and how they activate or direct stem cells towards self-renewal or differentiation.
Stem-cell-free regenerative medicine: culture stem cells, collect the molecules they secrete, and apply these to other cells to see how well they work. Stem cell secretory molecules can thus be used to treat injuries and diseases as an alternative to cell-based therapies.
Clinical trials are being done for cardiac disease (acute myocardial infarction) using bone marrow and adipose-derived stem cells; benefit has persisted for 18 months, so a larger randomised trial is being done in Europe. For cerebral palsy, cord blood has been used to treat 96 people, who showed improvements in motor skills and cognitive function. For spinal cord injuries, oligodendrocyte progenitor cells derived from human embryonic stem cells have been used without adverse effects.
For example, in the field of tissue engineering, MSC combined with sodium hyaluronate should speed up osteoarthritis recovery. Cranial reconstructions are done using MSCs and biomaterials: the cells make extracellular matrix and, when the cells are washed away, we are left with the matrix, which is then used as a scaffold. Stem cells are also used for problems with the upper respiratory system, where partial laryngeal implants are made with stem cells and decellularised scaffolds.
iPS (induced pluripotent stem) cells: viruses are used to reprogram adult cells by altering 4 genes. As such, stem cells can be created directly from a patient; these can be kept for a long time and are similar to embryonic stem cells. (Potential uses were shown in an accompanying figure.)
In terms of its future, $2.4 million of funding has been given by the California Institute for Regenerative Medicine to support testing of iPSC-derived cell therapy for Parkinson's.
However, ethics is also needed here. In 2005, Hwang Woo-suk published a paper stating that he had created stem cell lines using therapeutic cloning, but this was proved to be fake.
In 2014, the reported discovery of STAP (stimulus-triggered acquisition of pluripotency) was significant. Haruko Obokata claimed that merely putting differentiated cells under stress could reprogram them, yielding pluripotent stem cells within 30 minutes without any gene manipulation. Nobody could replicate it, and the papers were retracted after it emerged that data had been falsified and images altered with Photoshop. As a result, her supervisor committed suicide and the RIKEN Centre's reputation was badly damaged.
Bioinformatics
Bioinformatics is an interdisciplinary science linking the life sciences to the
computational, mathematical and statistical sciences. The term was coined around 1970 by Paulien Hogeweg and Ben Hesper, who defined it as the study of informatic processes in biotic systems. Computer scientists build databases for bioinformatics, and statistics is used to analyse the data.
Sequencing the first human genome took about 13 years, and we are now in the post-genomic era (1990s onwards), trying to find out the function of the genome rather than merely its sequence. Technological advances in molecular biology have produced huge amounts of data whose meaning we don't yet know, while improved computers let us process it. Bioinformatics aims to enable personalised, translational medicine; we are now at next-generation sequencing.
Bioinformatics brings together increased storage and computing power with advances in biology that generate ever more data. Bioinformaticians develop new algorithms and statistics to represent, analyse and interpret data (such as genetic sequences). Tools are also developed and implemented that enable computational analysis of large problems based on biological data, leading to new knowledge.
Bioinformatics is all about matching the need with the current technology. The human genome sequence has over 3 billion DNA bases, so there is a lot of data to be analysed, and over 100 billion DNA bases are held in databases. However, only about 2% of the genome is protein-coding. To identify genes, hidden Markov models are used. The field of functional genomics aims to determine when genes are switched on or off.
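To make the hidden-Markov-model idea concrete, here is a toy two-state gene finder in Python (a sketch, not a real gene finder): it assumes 'coding' regions are GC-rich and 'noncoding' regions AT-rich, with made-up probabilities, and uses the Viterbi algorithm to recover the most likely state path.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observed sequence (log-space Viterbi)."""
    # initialise with start probability x first emission
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor state for landing in s at position t
            prob, prev = max(
                (V[t - 1][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][obs[t]]), p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# illustrative (invented) parameters: coding = GC-rich, noncoding = AT-rich
states = ("coding", "noncoding")
start = {"coding": 0.5, "noncoding": 0.5}
trans = {"coding": {"coding": 0.9, "noncoding": 0.1},
         "noncoding": {"coding": 0.1, "noncoding": 0.9}}
emit = {"coding": {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15},
        "noncoding": {"A": 0.35, "C": 0.15, "G": 0.15, "T": 0.35}}
print(viterbi("ATATGCGCGCATAT", states, start, trans, emit))
```

Real gene finders use far richer state structures (codons, splice sites), but the principle of labelling each base with its most likely hidden state is the same.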
In Australia, we have an ageing population. Individuals go through a phase called insulin resistance prior to Type 2 diabetes. If you know someone is in this phase, it is reversible, but once they have diabetes you can't do anything. People can be at risk for 5-10 years before actual evidence appears. We try to find a biomarker (a collection of genes) that indicates insulin resistance by comparing with people at risk of it (e.g. through obesity). To do this, we need a molecular profile, which is obtained using bioinformatics.
Medically, bioinformatics is used to help understand life processes in healthy and
disease states, as well as understand the genetic basis of disease which thus
allows for better diagnosis and treatment. In pharmaceuticals, it allows for gene
based drug design which allows for new and better drugs to be developed.
Agriculturally, drought and disease resistant plants can be grown, as well as
higher-yield crops. With cancer, we look for predictive markers. Bioinformatics is also used in the bovine industry.
Microarray technology is a broad field that began with SAGE; in all cases the aim is to measure gene activity, with the output being gene expression (at the transcript level, for a tissue or cell). Nylon membranes could look at hundreds of genes in one go. In 1996, Affymetrix introduced arrays with probe-specific samples (a 1.5 cm slide). Around the same time, printed microarrays appeared (like inkjet printing, except genes are printed instead of ink); protein arrays that print peptides are currently being developed. Early printing used long oligos (~200 base pairs), but in the early 2000s long-oligo inkjet printing restricted probes to 60 base pairs, and with the size restricted, the morphology (shape) of the spots became more uniform. The Illumina bead array is one of the best platforms. Next-generation sequencing differs because instead of measuring the fluorescence of each gene, the RNA is simply sequenced.
As expected, the cDNA microarray works at the transcript level and profiles expression, whilst DNA and oligonucleotide arrays work at the genome level. The probe is whatever is immobilised on the array, usually a strand of nucleic acid complementary to the target; the target is the sample of interest, i.e. what we are querying.
In the early 2000s, spotted arrays were fabricated from an arrayed library of bacterial glycerol stocks, which were amplified by PCR and then spotted as a microarray onto glass slides. Expression profiling with DNA microarrays involves taking cDNA from both samples, hybridising to the array, and scanning with a laser, which yields two images that can be analysed. If a spot appears yellow, the gene's expression is the same in both samples. This can be used with cancer patients to compare tumour and non-tumour cells from the same patient, to find which genes change in the cancer.
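The red/green comparison can be sketched numerically: for each spot we take the log2 ratio of the two channel intensities, so a value near 0 corresponds to a yellow (unchanged) spot. The intensities below are invented for illustration.

```python
import math

def log_ratio(red, green):
    """log2 ratio of the two dye-channel intensities for one spot;
    a value near 0 means equal expression (the spot appears yellow)."""
    return math.log2(red / green)

# hypothetical spot intensities (red = tumour channel, green = normal channel)
spots = {"geneA": (5000, 5100), "geneB": (8000, 1000), "geneC": (500, 4000)}
for gene, (r, g) in spots.items():
    m = log_ratio(r, g)
    call = "unchanged" if abs(m) < 1 else ("up in tumour" if m > 0 else "down in tumour")
    print(gene, round(m, 2), call)
```

Note that the log transform makes 8-fold up (+3) and 8-fold down (-3) symmetric around zero, which is why log ratios rather than raw ratios are used.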
High-density oligonucleotide arrays hybridise a single-stranded RNA target to oligonucleotide probes arranged in probe cells on a GeneChip Probe Array. Each position on the microarray carries a given strand of nucleic acid which will only bind its complementary strand; the target carries a fluorescent tag, allowing it to be easily detected. The probe-cell intensity shows the amount of the specific gene present.
The Illumina Genome Analyzer runs for 3 days for 7 samples and can produce 12 million reads per sample of 36 bp length. It takes layers of images, with different colours representing different bases. The first human genome, which took ~15 years to produce, is used as the template: the new reads are mapped back to it and compared. Image acquisition tells us which sequence appears in which layer, allowing the total sequence to be determined. Image analysis is done with Firecrest, then base calling with Bustard, which produces a FASTQ file; Bowtie maps this to a reference, after which the data are annotated, normalised (to make samples comparable) and analysed. There is usually about 20 GB of data, which is then summarised down to ~30,000 genes. The approach became popular because its cost decreased enormously.
Experimental design means thinking ahead about what will happen at the image analysis/pre-processing stage. Consider data in a 50,000 x 30 matrix. The first thing to do is draw 30 boxplots (one per column); from these you can spot outliers and then investigate why they occur. Look for spatial patterns. Histograms are useful for understanding the spread of the data, whilst scatter plots reveal trends. Log transformations help with skewed data, especially ratio data: always look at log-ratio data rather than ratio data alone. Spatial plots can also be used, as visualisation tools are standard here.
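As a minimal sketch of this kind of quality check (standard library only; the matrix and the deliberate shift are invented), the "one boxplot per column" step can be collapsed to per-array medians, flagging any array whose median sits far from the rest:

```python
import random
import statistics

random.seed(0)
# hypothetical 1000-gene x 6-array matrix of log-expression values;
# array 5 is deliberately shifted upwards to mimic a labelling/scanning problem
matrix = [[random.gauss(8, 1) + (2 if j == 5 else 0) for j in range(6)]
          for _ in range(1000)]

# per-array medians: a quick numeric stand-in for drawing one boxplot per column
medians = [statistics.median(row[j] for row in matrix) for j in range(6)]
grand = statistics.median(medians)
suspects = [j for j, m in enumerate(medians) if abs(m - grand) > 1]
print("array medians:", [round(m, 2) for m in medians])
print("suspect arrays:", suspects)
```

In practice one would draw the actual boxplots (and spatial plots of the slide), but the point is the same: outlying arrays jump out before any downstream statistics are run.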
When pre-processing Affymetrix data, the three stages
include subtracting the background, normalising and then
summarising.
There are different levels of information: gene annotations, sample annotations and gene expression levels. For the expression data, the expression level of a gene in a sample reflects its RNA abundance and is often reported as RPKM (reads per kilobase per million mapped reads).
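The RPKM normalisation mentioned above is easy to state as code; the gene length, read count and library size below are invented for illustration:

```python
def rpkm(read_count, gene_length_bp, total_mapped_reads):
    """RPKM: reads mapped to the gene, scaled by gene length in kilobases
    and by library size in millions of mapped reads."""
    return read_count / ((gene_length_bp / 1_000) * (total_mapped_reads / 1_000_000))

# hypothetical gene: 1000 reads on a 2 kb gene, in a library of 10 million mapped reads
print(rpkm(1000, 2000, 10_000_000))  # -> 50.0
```

The two scalings make genes of different lengths, and samples of different sequencing depths, comparable on one axis.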
Data is multifaceted, as it has a gene annotation component (biological information); this doesn't come from the technology, but from the internet. To get samples, we need collaboration (the field is interdisciplinary). Gene expression levels can be found in public repositories on the internet. This can be used in tissue engineering to look at molecular differences.
Nobody tells us what test to use; we figure it out by translating the question (the hardest part). The most common techniques include differential expression, which looks at how gene expression changes across different cell types and conditions. Class prediction asks whether there are genes predictive of a certain disease. Clustering groups genes or samples (optionally incorporating sequence and other information) without predefined labels, whilst classification assigns samples to known disease classes.
Linear models include t-tests, F-tests, empirical Bayes and SAM. These are used when trying to identify differentially expressed genes among two or more tumour subtypes or cell treatments, when looking for genes with different time profiles between mutants, and when looking for genes associated with survival.
When we combine data across arrays, we get linear models with added layers. When looking at fold change, we need test statistics such as the mean difference and the t-statistic. A robust variant uses the t-statistic (e.g. Welch's t-test) but replaces the mean with the median and the SD with the median absolute deviation. However, we cannot trust the t-statistic alone, nor the average fold change alone, because averages can be driven by outliers while t-statistics can be driven by tiny variances.
If interested in the ratio, take the two groups and find the mean difference, plotting it against the combined mean on the x-axis; this is equivalent to rotating the scatter plot by 45 degrees. With every gene, rank all the t-statistics: the highest t-statistic is the most interesting, i.e. we look for a big difference between treatment (T) and control (C). The issue with the mean is outliers; the t-statistic accounts for the standard error and is therefore better, and it was quite good until microarrays arrived. We use test statistics to rank the genes and then choose a critical value for significance (this is the hard part). The bioinformatics/statistics world uses moderated t-statistics, which look like a t-statistic but add a 'fudge factor' to the denominator. Identifying DE genes between T2D and lean patients is done by selecting a statistic that ranks genes in order of strength of evidence for differential expression, then choosing a critical value of significance.
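A minimal sketch of the fudge-factor idea (the moderated statistics used in practice, e.g. in limma, estimate the shrinkage from the data; here s0 and the expression values are invented):

```python
import math
import statistics

def moderated_t(treat, ctrl, s0=0.5):
    """Two-sample t-statistic with a small constant s0 (the 'fudge factor') added
    to the standard error, so genes with tiny variances cannot dominate the ranking.
    s0 = 0.5 is an illustrative choice, not a recommended default."""
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = math.sqrt(statistics.variance(treat) / len(treat)
                   + statistics.variance(ctrl) / len(ctrl))
    return diff / (se + s0)

# hypothetical genes: a big but noisy change vs a minuscule, ultra-consistent change
noisy = ([9.0, 11.0, 10.0], [6.0, 8.0, 7.0])           # mean difference 3, real variance
tiny = ([5.011, 5.010, 5.012], [5.000, 5.001, 5.000])  # difference ~0.01, near-zero variance

# the ordinary t (s0 = 0) ranks the trivial gene first; the moderated t fixes the ranking
print(moderated_t(*noisy, s0=0), moderated_t(*tiny, s0=0))
print(moderated_t(*noisy), moderated_t(*tiny))
```

This is exactly the failure mode described above: the ordinary t-statistic is driven by the tiny variance, while the moderated version keeps the biologically large change on top.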
Clustering is multivariate exploratory analysis and uses hierarchical clustering, self-organising maps and partitioning around medoids. We can cluster cell samples and genes. Items are grouped and colour-coded so we can see patterns: colour-coding the matrix gives a heat map, where every column is a patient and every row a gene. Every patient has ~15,000 genes, some up-regulated (red) and some down-regulated (green); heat maps are usually used for symmetric data such as log ratios. In the DLBCL example, two groups were found in which some patients were high in one gene set and low in another, even though all had the same diagnosis (DLBCL), telling us there are two subgroups of DLBCL (activated B-like and GC B-like); one group has high and the other low survival rates.
Hierarchical clustering usually uses correlation or distance between two vectors to merge similar pairs of clusters, building up a tree. K-means partitions the data into a fixed number of blocks. All data will cluster into something, so apparent structure could be due to random noise (clustering is unsupervised learning).
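A naive pure-Python sketch of the merge step in agglomerative (single-linkage) clustering; real analyses use dedicated packages, and the expression profiles here are invented:

```python
def single_linkage(points, k):
    """Repeatedly merge the two closest clusters (single linkage, Euclidean
    distance on expression vectors) until only k clusters remain."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        # single linkage: distance between clusters = closest pair of members
        return min(
            sum((points[i][d] - points[j][d]) ** 2 for d in range(len(points[0]))) ** 0.5
            for i in a for j in b
        )

    while len(clusters) > k:
        x, y = min(
            ((x, y) for x in range(len(clusters)) for y in range(x + 1, len(clusters))),
            key=lambda p: dist(clusters[p[0]], clusters[p[1]]),
        )
        clusters[x] = clusters[x] + clusters[y]
        del clusters[y]
    return [sorted(c) for c in clusters]

# hypothetical expression profiles: two obvious sample groups
profiles = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9)]
print(single_linkage(profiles, 2))
```

Recording the order of merges, rather than stopping at k clusters, is what yields the dendrogram (tree) used in the heat maps described above.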
Classification has two components. Discrimination: start with data with known classes, and from that data build a rule. The computer takes information from good- and poor-prognosis samples (the learning set) and creates a classification rule which can then predict the class of new samples.
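One of the simplest such rules is nearest-centroid classification, sketched here with an invented two-gene learning set:

```python
def train_centroids(learning_set):
    """Build a classification rule from samples with known classes:
    store the mean expression vector (centroid) of each class."""
    centroids = {}
    for label, vectors in learning_set.items():
        n = len(vectors)
        centroids[label] = tuple(
            sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))
        )
    return centroids

def predict(centroids, sample):
    """Assign a new sample to the class with the nearest centroid."""
    return min(
        centroids,
        key=lambda lab: sum((sample[d] - centroids[lab][d]) ** 2
                            for d in range(len(sample))),
    )

# hypothetical 2-gene profiles for good- and poor-prognosis learning sets
learning = {
    "good": [(1.0, 4.0), (1.2, 3.8), (0.9, 4.2)],
    "poor": [(4.0, 1.0), (3.8, 1.2), (4.1, 0.9)],
}
rule = train_centroids(learning)
print(predict(rule, (1.1, 4.1)))
```

Real prognostic classifiers use thousands of genes and cross-validation to check the rule, but the train-then-predict structure is the same.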
Data complexity is hurting us at the moment in biology. Can networks be predictive? We can look for mutation networks and their impact on expression. Melanoma gene expression data was used, comparing patients with good and poor clinical outcomes whilst forming gene co-expression networks using biomarkers. We also try to bring environmental factors into the analysis, and compare mice and humans, using the mouse to subset the information we look at, focusing on certain genes, linking them to clinical information and predicting metabolic class.