"Neural network" redirects here. For networks of living neurons, see Biological neural network. For
the journal, see Neural Networks (journal). For the evolutionary concept, see Neutral network
(evolution).
"Neural computation" redirects here. For the journal, see Neural Computation (journal).
An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain.
Here, each circular node represents an artificial neuron and an arrow represents a connection from the output
of one neuron to the input of another.
Neural networks (also referred to as connectionist systems) are a computational approach loosely
modeled on the way a biological brain solves problems with large clusters of neurons connected by
axons. Each neural unit is connected to many others, and connections can be reinforcing (excitatory)
or inhibitory in their effect on the activation state of connected units. Each individual neural unit may
have a summation function which combines the values of all its inputs. There may be a threshold or
limiting function on each connection and on the unit itself, which the combined signal must surpass
before it can propagate to other neurons. These systems are self-learning and trained rather than
explicitly programmed, and they excel in areas where the solution or feature detection is difficult to
express in a traditional computer program.
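The unit-level computation described above can be sketched in a few lines of Python. This is a minimal illustration, not code from any particular library; the weights, inputs, and threshold value are illustrative assumptions.

```python
# Minimal sketch of one artificial neural unit: a summation function over
# weighted inputs, followed by a hard threshold that the sum must surpass
# before the unit "fires". All numbers here are illustrative assumptions.

def neural_unit(inputs, weights, threshold=0.5):
    """Sum the weighted inputs and apply a step (threshold) activation."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total > threshold else 0.0

# An inhibitory connection is simply a negative weight:
print(neural_unit([1.0, 1.0], [0.8, -0.4]))  # 0.4 does not surpass 0.5 -> prints 0.0
print(neural_unit([1.0, 1.0], [0.8, 0.4]))   # 1.2 surpasses 0.5 -> prints 1.0
```

Real networks wire many such units together, with each unit's output feeding the inputs of others.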
Neural networks typically consist of multiple layers, with the signal path traversing from front to
back. In backpropagation, the error of the forward pass's output is used to adjust the weights of the
"front" neural units; it is used in combination with training examples for which the correct result
is known. More modern networks are freer-flowing in terms of stimulation and inhibition, with
connections interacting in a more complex fashion. Dynamic neural networks are the most advanced
in that, based on rules, they can dynamically form new connections and even new neural units while
disabling others.
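The training idea described above (run a forward pass, compare the output with the known correct result, and use the error to adjust the weights) can be illustrated for a single sigmoid unit. This is a hedged sketch of the gradient update at the heart of backpropagation, not a full multi-layer implementation; the learning rate, target, and iteration count are illustrative assumptions.

```python
import math

def sigmoid(z):
    """Smooth activation used instead of a hard threshold so the error is differentiable."""
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, inputs, target, lr=0.5):
    """One forward pass plus one gradient-descent weight update (squared-error loss)."""
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    grad = (out - target) * out * (1.0 - out)  # d(loss)/d(pre-activation)
    return [w - lr * grad * x for w, x in zip(weights, inputs)]

weights = [0.2, -0.3]                 # arbitrary starting weights
for _ in range(1000):                 # "several thousand cycles" in the article's terms
    weights = train_step(weights, [1.0, 1.0], target=1.0)
print(sigmoid(weights[0] + weights[1]) > 0.9)  # prints: True (output approaches the target)
```

In a multi-layer network the same error signal is propagated backward through each layer in turn, which is where the name backpropagation comes from.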
The goal of the neural network is to solve problems in the same way that the human brain would,
although several neural networks are much more abstract. Modern neural network projects typically
work with a few thousand to a few million neural units and millions of connections, which is still
several orders of magnitude less complex than the human brain and closer to the computing power
of a worm.
New brain research often stimulates new patterns in neural networks. One new approach is using
connections which span much further and link processing layers rather than always being localized
to adjacent neurons. Other research explores the different types of signal that axons propagate over
time, which are more complex than a simple on or off.
Neural networks are based on real numbers, with the value of the core and of the axon typically
being a representation between 0.0 and 1.0.
An interesting facet of these systems is that they are unpredictable in their success with self-learning.
After training, some become great problem solvers and others don't perform as well. In
order to train them, several thousand cycles of interaction typically occur.
Like other machine learning methods (systems that learn from data), neural networks have been
used to solve a wide variety of tasks, such as computer vision and speech recognition, that are hard
to solve using ordinary rule-based programming.
Historically, the use of neural network models marked a directional shift in the late 1980s from
high-level (symbolic) artificial intelligence, characterized by expert systems with knowledge embodied
in if-then rules, to low-level (sub-symbolic) machine learning, characterized by knowledge embodied
in the parameters of a dynamical system.
History
Warren McCulloch and Walter Pitts[1] (1943) created a computational model for neural networks
based on mathematics and algorithms called threshold logic. This model paved the way for neural
network research to split into two distinct approaches. One approach focused on biological
processes in the brain and the other focused on the application of neural networks to artificial
intelligence.
Hebbian learning
In the late 1940s psychologist Donald Hebb[2] created a hypothesis of learning based on the
mechanism of neural plasticity that is now known as Hebbian learning. Hebbian learning is
considered to be a 'typical' unsupervised learning rule and its later variants.
Artificial intelligence
From Wikipedia, the free encyclopedia
"AI" redirects here. For other uses, see AI and Artificial intelligence (disambiguation).
History
Main articles: History of artificial intelligence and Timeline of artificial intelligence
While thought-capable artificial beings appeared as storytelling devices in antiquity,[13] the idea of
actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c.
1300 CE). With his calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating
machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations
on concepts rather than numbers.[14] Since the 19th century, artificial beings have been common in
fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[15]
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in
antiquity. In the 19th century, George Boole refined those ideas into propositional logic and Gottlob
Frege developed a notational system for mechanical reasoning (a "predicate calculus").[16] Around
the 1940s, Alan Turing's theory of computation suggested that a machine, by shuffling symbols as
simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight,
that digital computers can simulate any process of formal reasoning, is known as the Church–Turing
thesis.[17][page needed] Along with concurrent discoveries in neurology, information theory and cybernetics,
this led researchers to consider the possibility of building an electronic brain.[18] The first work that is
now generally recognized as AI was McCullouch and Pitts' 1943 formal design for Turing-complete
"artificial neurons".[14]
The field of AI research was founded at a conference at Dartmouth College in 1956.[19] The
attendees, including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert
Simon, became the leaders of AI research.[20] They and their students wrote programs that were, to
most people, simply astonishing:[21] computers were winning at checkers, solving word problems in
algebra, proving logical theorems and speaking English.[22] By the middle of the 1960s, research in
the U.S. was heavily funded by the Department of Defense[23] and laboratories had been established
around the world.[24] AI's founders were optimistic about the future: Herbert Simon predicted,
"machines will be capable, within twenty years, of doing any work a man can do." Marvin
Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will
substantially be solved."[25]
They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974,
in response to the criticism of Sir James Lighthill[26] and ongoing pressure from the US Congress to
fund more productive projects, both the U.S. and British governments cut off exploratory research in
AI. The next few years would later be called an "AI winter",[27] a period when funding for AI projects
was hard to find.
In the early 1980s, AI research was revived by the commercial success of expert systems,[28] a form
of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the
market for AI had reached over a billion dollars.[29] At the same time, Japan's fifth generation
computer project inspired the U.S. and British governments to restore funding for academic research.
However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into
disrepute, and a second, longer-lasting hiatus began.[30]
In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical
diagnosis and other areas.[12] The success was due to increasing computational power (see Moore's
law), greater emphasis on solving specific problems, new ties between AI and other fields and a
commitment by researchers to mathematical methods and scientific standards.[31] Deep Blue became
the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov,
on 11 May 1997.[32]
Advanced statistical techniques (loosely known as deep learning), access to large amounts of
data and faster computers enabled advances in machine learning and perception.[33] By the
mid-2010s, machine learning applications were used throughout the world.[34] In a Jeopardy! quiz
show exhibition match, IBM's question answering system, Watson, defeated the two greatest
Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[35] The Kinect, which
provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that
emerged from lengthy AI research,[36] as do intelligent personal assistants in smartphones.[37] In March
2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the
first computer Go-playing system to beat a professional Go player without handicaps.[5][38]
According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the
number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to
more than 2,700 projects. Clark also presents data indicating that error rates in image
processing tasks have fallen significantly since 2011.[39] He attributes this to an increase in
affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in
research tools and datasets. Other cited examples include Microsoft's development of a Skype
system that can automatically translate from one language to another and Facebook's system that
can describe images to blind people.[39]
Arthropod
Arthropod
Temporal range: 540–0 Ma (Cambrian–Holocene)
Scientific classification
Kingdom: Animalia
Subkingdom: Eumetazoa
(unranked): Bilateria
Superphylum: Ecdysozoa
(unranked): Tactopoda
Phylum: Arthropoda (von Siebold, 1848)[1]
Subphyla
Trilobitomorpha: Trilobita (trilobites; extinct)
Chelicerata: Arachnida (spiders, scorpions, etc.); Merostomata (horseshoe crabs, eurypterids); Pycnogonida (sea spiders)
Myriapoda: Chilopoda (centipedes); Diplopoda (millipedes); Pauropoda (sister group to millipedes); Symphyla (resemble centipedes)
Crustacea: Branchiopoda (brine shrimp, etc.); Remipedia (blind crustaceans); Cephalocarida (horseshoe shrimp); Maxillopoda (barnacles, copepods, fish lice, etc.); Ostracoda (seed shrimp); Malacostraca (lobsters, crabs, shrimp, etc.)
Hexapoda: Insecta (insects); Entognatha (springtails, etc.)
Incertae sedis: Camptophyllia (extinct);[2] Marrellomorpha (extinct); Acanthomeridion (extinct)
An arthropod (from Greek arthro-, joint + podos, foot) is an invertebrate animal having
an exoskeleton (external skeleton), a segmented body, and jointed appendages (paired
appendages). Arthropods form the phylum Arthropoda, which includes
the insects, arachnids, myriapods, and crustaceans. Arthropods are characterized by their jointed
limbs and cuticle made of chitin, often mineralised with calcium carbonate. The arthropod body
plan consists of segments, each with a pair of appendages. The rigid cuticle inhibits growth, so
arthropods replace it periodically by moulting. Their versatility has enabled them to become the most
species-rich members of all ecological guilds in most environments. They have over a million
described species, making up more than 80% of all described living animal species, some of which,
unlike most animals, are very successful in dry environments.
Arthropods range in size from the microscopic crustacean Stygotantulus up to the Japanese spider
crab. Arthropods' primary internal cavity is a hemocoel, which accommodates their internal organs,
and through which their haemolymph, an analogue of blood, circulates; they have open circulatory
systems. Like their exteriors, the internal organs of arthropods are generally built of repeated
segments. Their nervous system is "ladder-like", with paired ventral nerve cords running through all
segments and forming paired ganglia in each segment. Their heads are formed by fusion of varying
numbers of segments, and their brains are formed by fusion of the ganglia of these segments and
encircle the esophagus. The respiratory and excretory systems of arthropods vary, depending as
much on their environment as on the subphylum to which they belong.
Their vision relies on various combinations of compound eyes and pigment-pit ocelli: in most species
the ocelli can only detect the direction from which light is coming, and the compound eyes are the
main source of information, but the main eyes of spiders are ocelli that can form images and, in a
few cases, can swivel to track prey. Arthropods also have a wide range of chemical and mechanical
sensors, mostly based on modifications of the many setae (bristles) that project through their
cuticles. Arthropods' methods of reproduction and development are diverse; all terrestrial species
use internal fertilization, but this is often by indirect transfer of the sperm via an appendage or the
ground, rather than by direct injection. Aquatic species use either internal or external fertilization.
Almost all arthropods lay eggs, but scorpions give birth to live young after the eggs have hatched
inside the mother. Arthropod hatchlings vary from miniature adults to grubs and caterpillars that lack
jointed limbs and eventually undergo a total metamorphosis to produce the adult form. The level of
maternal care for hatchlings varies from nonexistent to the prolonged care provided by scorpions.
The evolutionary ancestry of arthropods dates back to the Cambrian period. The group is generally
regarded as monophyletic, and many analyses support the placement of arthropods
with cycloneuralians (or their constituent clades) in a superphylum Ecdysozoa. Overall, however,
the basal relationships of Metazoa are not yet well resolved. Likewise, the relationships between
various arthropod groups are still actively debated.
Arthropods contribute to the human food supply both directly as food, and more importantly
as pollinators of crops. Some specific species are known to spread severe disease to
humans, livestock, and crops.
Seminole
Seminole portraits
Total population: est. 18,600
Seminole Nation of Oklahoma: 15,572 enrolled
Seminole Tribe of Florida
Miccosukee Tribe of Indians of Florida
Regions with significant populations: United States (Oklahoma, Florida, Georgia)
The Seminole are a Native American tribe that emerged in a process of ethnogenesis from various
Native American groups who settled in Florida in the 18th century, most significantly Creeks from
what is now Georgia and Alabama.
They comprise three federally recognized tribes and independent groups, most living
in Oklahoma with a minority in Florida. The word Seminole is a corruption of cimarrón, a Spanish
term for "runaway" or "wild one".[1]
During their early decades, the Seminole became increasingly independent of other Creek groups
and established their own identity. They developed a thriving trade network during
the British and second Spanish periods (roughly 1767–1821).[2] The tribe expanded considerably
during this time, and was further supplemented from the late 18th century by free black people and
escaped enslaved people who settled near and paid tribute to Seminole towns. The latter became
known as Black Seminoles, although they kept their own Gullah culture of the Low Country.[3] They
developed the Afro-Seminole Creole language, which they spoke through the 19th century after the
move to Indian Territory.
Seminole culture is largely derived from that of the Creek; the most important ceremony is the Green
Corn Dance; other notable traditions include use of the black drink and ritual tobacco. As the
Seminole adapted to Florida environs, they developed local traditions, such as the construction of
open-air, thatched-roof houses known as chickees.[4] Historically the Seminole
spoke Mikasuki and Creek, both Muskogean languages.[5]
After the independent United States acquired Florida from Spain in 1819, its settlers increased
pressure on Seminole lands. During the period of the Seminole Wars (1818–1858), the tribe was first
confined to a large reservation in the center of the Florida peninsula by the Treaty of Moultrie
Creek (1823) and then evicted from the territory altogether according to the Treaty of Payne's
Landing (1832).[3] By 1842, most Seminoles and Black Seminoles had been coerced or forced to
move to Indian Territory west of the Mississippi River. During the American Civil War, most of the
Oklahoma Seminole allied with the Confederacy, after which they had to sign a new treaty with the
U.S., including freedom and tribal membership for the Black Seminole. Today residents of the
reservation are enrolled in the federally recognized Seminole Nation of Oklahoma, while others
belong to unorganized groups.
Perhaps fewer than 200 Seminoles remained in Florida after the Third Seminole War (1855–1858),
but they fostered a resurgence in traditional customs and a culture of staunch independence.[6] In the
late 19th century, the Florida Seminole re-established limited relations with the U.S. government and
in 1930 received 5,000 acres (20 km2) of reservation lands. Few Seminole moved to reservations
until the 1940s; they reorganized their government and received federal recognition in 1957 as
the Seminole Tribe of Florida. The more traditional people near the Tamiami Trail received federal
recognition as the Miccosukee Tribe in 1962.[7]
The Oklahoma and Florida Seminole filed land claim suits in the 1950s, which were combined in the
government's settlement of 1976. The tribes and Traditionals took until 1990 to negotiate an
agreement as to division of the settlement, a judgment trust against which members can draw for
education and other benefits. The Florida Seminole founded a high-stakes bingo game on their
reservation in the late 1970s, winning court challenges to initiate Indian Gaming, which many tribes
have adopted to generate revenues for welfare, education and development.
Semiconductor
For devices using semiconductors and their history, see Semiconductor device. For other uses,
see Semiconductor (disambiguation).
Semiconductors are crystalline or amorphous solids with distinct electrical characteristics.[1] Their
electrical resistance is higher than that of typical conductive materials, but still much lower than
that of insulators. Their resistance decreases as their temperature increases, which is behavior
opposite to that of a metal. Finally, their conducting properties may be altered in useful ways by the
deliberate, controlled introduction of impurities ("doping") into the crystal structure, which lowers
the crystal's resistance but also permits the creation of semiconductor junctions between
differently-doped regions of the extrinsic semiconductor crystal. The behavior of charge carriers
(electrons, ions and electron holes) at these junctions is the basis of diodes, transistors and all
modern electronics.
Semiconductor devices can display a range of useful properties such as passing current more easily
in one direction than the other, showing variable resistance, and sensitivity to light or heat. Because
the electrical properties of a semiconductor material can be modified by doping, or by the application
of electrical fields or light, devices made from semiconductors can be used for amplification,
switching, and energy conversion.
The modern understanding of the properties of a semiconductor relies on quantum physics to
explain the movement of charge carriers in a crystal lattice.[2] Doping greatly increases the number of
charge carriers within the crystal. When a doped semiconductor contains mostly free holes it is
called "p-type", and when it contains mostly free electrons it is known as "n-type". The
semiconductor materials used in electronic devices are doped under precise conditions to control the
concentration and regions of p- and n-type dopants. A single semiconductor crystal can have many
p- and n-type regions; the pn junctions between these regions are responsible for the useful
electronic behavior.
Although some pure elements and many compounds display semiconductor
properties, silicon, germanium, and compounds of gallium are the most widely used in electronic
devices. Elements near the so-called "metalloid staircase", where the metalloids are located on the
periodic table, are usually used as semiconductors.
Some of the properties of semiconductor materials were observed throughout the mid-19th and first
decades of the 20th century. The first practical application of semiconductors in electronics was the
1904 development of the Cat's-whisker detector, a primitive semiconductor diode widely used in
early radio receivers. Developments in quantum physics in turn allowed the development of
the transistor in 1947[3] and the integrated circuit in 1958.
Properties
Variable conductivity
Semiconductors in their natural state are poor conductors because a current requires the
flow of electrons, and semiconductors have their valence bands filled, preventing the entry
of new electrons. Several developed techniques allow semiconducting materials to behave
like conducting materials, such as doping or gating. These modifications have two outcomes:
n-type and p-type, which refer to an excess or shortage of electrons, respectively. An
unbalanced number of electrons causes a current to flow through the material.[4]
Heterojunctions
Heterojunctions occur when two differently doped semiconducting materials are joined
together. For example, a configuration could consist of p-doped and n-doped germanium.
This results in an exchange of electrons and holes between the differently doped
semiconducting materials. The n-doped germanium would have an excess of electrons, and
the p-doped germanium would have an excess of holes. The transfer occurs until equilibrium
is reached by a process called recombination, which causes the migrating electrons from the
n-type to come in contact with the migrating holes from the p-type. A product of this process
is charged ions, which result in an electric field.[2][4]
Excited Electrons
A difference in electric potential on a semiconducting material causes it to leave thermal
equilibrium and creates a non-equilibrium situation. This introduces electrons and holes to the
system, which interact via a process called ambipolar diffusion. Whenever thermal
equilibrium is disturbed in a semiconducting material, the number of holes and electrons
changes. Such disruptions can occur as a result of a temperature difference or of photons,
which can enter the system and create electrons and holes. The processes that create and
annihilate electrons and holes are called generation and recombination.[4]
Light emission
In certain semiconductors, excited electrons can relax by emitting light instead of producing
heat.[5] These semiconductors are used in the construction of light-emitting diodes and
fluorescent quantum dots.
Thermal energy conversion
Semiconductors have large thermoelectric power factors making them useful
in thermoelectric generators, as well as high thermoelectric figures of merit making them
useful in thermoelectric coolers.[6]
Materials
Main article: List of semiconductor materials
Certain pure elements are found in Group 14 of the periodic table; the most
commercially important of these elements are silicon and germanium.
Silicon and germanium are used here effectively because they have 4
valence electrons in their outermost shell, which gives them the ability to
gain or lose electrons equally at the same time.
such as silicon. They are generally used in thin film structures, which do not
require material of higher electronic quality, being relatively insensitive to
impurities and radiation damage.
Semi-automatic transmission
gears manually, often via paddle shifters, can also be found on certain automatic transmissions
(manumatics such as Tiptronic) and continuously variable transmissions (CVTs) (such
as Lineartronic).
Despite superficial similarity to other automated transmissions, semi-automatic transmissions differ
significantly in internal operation and driver's "feel" from manumatics and CVTs. A manumatic, like a
standard automatic transmission, uses a torque converter instead of a clutch to manage the link
between the transmission and the engine, while a CVT uses a belt instead of a fixed number of
gears. A semi-automatic transmission offers a more direct connection between the engine and
wheels than a manumatic, and this responsiveness is preferred in high-performance driving
applications, while a manumatic is better for street use because its fluid coupling makes it easier for
the transmission to consistently perform smooth shifts,[4][5] and CVTs are generally found in
gasoline-electric hybrid applications.
Typically semi-automatic transmissions are more expensive than manumatics and CVTs; for
instance, BMW's 7-speed Double Clutch Transmission is a CAD 3900 upgrade from the standard
6-speed manual, while the 6-speed Steptronic Automatic was only a CAD 1600 option in 2007.[6] In a
given market, very few models have two choices of automated transmissions; for instance the BMW
545i (E60) and BMW 645Ci/650i (E63/64) (standard 6-speed manual) had as an option a 6-speed
automatic "Steptronic" transmission or a 7-speed Getrag SMG III single-clutch semi-automatic
transmission until after the 2008 model year, when the SMG III was dropped.[7] Many sport luxury
manufacturers such as BMW offer the manumatic transmissions for their mainstream lineup (such as
the BMW 328i and BMW 535i) and the semi-automatic gearbox for their high-performance models
(the BMW M3 and BMW M5).[6]
The semi-automatic transmission may be derived from a conventional automatic; for
instance, Mercedes-Benz's AMG Speedshift MCT semi-automatic transmission is based on the
7G-Tronic manumatic; however, the latter's torque converter has been replaced with a wet,
multi-plate launch clutch.[8] Other semi-automatic transmissions have their roots in a conventional
manual; the SMG II drivelogic (found in the BMW M3 (E46)) is a Getrag 6-speed manual
transmission, but with an electrohydraulically actuated clutch, similar to a Formula One style
transmission.[9][10][11] The most common type of semi-automatic transmission in recent years has
been the dual-clutch type, since single-clutch types such as the SMG III have been criticized for their
general lack of smoothness in everyday driving (although being responsive at the track).[12]
Operation
Semi-trailer truck
"18 wheeler" and "eighteen wheeler" redirect here. For other uses, see 18 wheeler (disambiguation).
"Big Rig" redirects here. For other uses, see Big Rig (disambiguation).
Semi-trailer tractor with sleeper behind the cab and oversize load on lowboy trailer
A semi-trailer truck is the combination of a tractor unit and one or more semi-trailers to carry
freight. It is variously known as a transport (truck) in Canada; semi or single in
Australia; semi, tractor-trailer, big rig, or eighteen-wheeler in the United States; and articulated lorry,
abbreviated artic, in Britain and Ireland.
A semi-trailer attaches to the tractor with a fifth wheel hitch, with much of its weight borne by the
tractor. The result is that both tractor and semi-trailer will have a distinctly different design than a
rigid truck and trailer.
Regional configurations
North America
In North America, the combination vehicles made up of a powered truck and one or more
semitrailers are known as "semis", "semitrailers",[1] "tractor-trailers", "big rigs", "semi trucks",
"eighteen-wheelers", or "semi-tractor trailers".
The tractor unit typically has two or three axles; those built for hauling heavy-duty commercial construction machinery may have as many as five, some often being lift axles.
The most common tractor-cab layout has a forward engine, one steering axle, and two drive axles.
The fifth-wheel trailer coupling on most tractor trucks is movable fore and aft, to allow adjustment in
the weight distribution over its rear axle(s).
Ubiquitous in Europe, but less common in North America since
Electronic publishing
for it enables content and analytics to be combined, for the benefit of students. The use of electronic
publishing for textbooks may become more prevalent with iBooks from Apple Inc. and Apple's
negotiation with the three largest textbook suppliers in the U.S.[4] Electronic publishing is increasingly
popular in works of fiction. Electronic publishers are able to respond quickly to changing market
demand, because the companies do not have to order printed books and have them delivered. E-publishing is also making a wider range of books available, including books that customers would
not find in standard book retailers, due to insufficient demand for a traditional "print run". E-publication is enabling new authors to release books that would be unlikely to be profitable for
traditional publishers. While the term "electronic publishing" is primarily used in the 2010s to refer to
online and web-based publishers, the term has a history of being used to describe the development
of new forms of production, distribution, and user interaction in regard to computer-based production
of text and other interactive media.
Contents
1Process
2Academic publishing
3Copyright
4Examples
5Business models
6See also
7References
8External links
Process
The electronic publishing process follows some aspects of the traditional paper-based publishing process[5] but differs from traditional publishing in two ways: 1) it does not include
using an offset printing press to print the final product and 2) it avoids the distribution of a physical
product (e.g., paper books, paper magazines, or paper newspapers). Because the content is
electronic, it may be distributed over the Internet and through electronic bookstores, and users can
read the material on a range of electronic and digital devices, including desktop
computers, laptops, tablet computers, smartphones or e-reader tablets. The consumer may read the
published content online on a website, in an application on a tablet device, or in a PDF document on a
computer. In some cases, the reader may print the content onto paper using a consumer-grade ink-jet or laser printer or via a print-on-demand system. Some users download digital content to their
devices, enabling them to read the content even when their device is not connected to the Internet
(e.g., on an airplane flight).
Distributing content electronically as software applications ("apps") has become popular in the
2010s, due to the rapid consumer adoption of smartphones and tablets. At first, native apps for each
mobile platform were required to reach all audiences, but in an effort toward universal device
compatibility, attention has turned to using HTML5 to create web apps that can run on any browser
and function on many devices. The benefit of electronic publishing comes from using three attributes
of digital technology: XML tags to define content,[6] style sheets to define the look of content,
and metadata (data about data) to describe the content for search engines, thus helping users to
find and locate the content (a common example of metadata is the information about a
song's songwriter, composer, and genre that is electronically encoded along with most CDs and digital
audio files; this metadata makes it easier for music lovers to find the songs they are looking for).
With the use of tags, style sheets, and metadata, this enables "reflowable" content that adapts to
various reading devices (tablet, smartphone, e-reader, etc.) or electronic delivery methods.
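As a rough illustration of how these three attributes work together, the sketch below builds a reflowable content fragment in Python. The element names, metadata keys, and values are hypothetical examples; real e-publishing formats such as EPUB define their own schemas.

```python
import xml.etree.ElementTree as ET

def build_chapter(title, paragraphs, metadata):
    """Build a minimal XHTML-like fragment: XML tags define the content
    structure, a class attribute hooks into a style sheet rule, and a
    <meta> block carries searchable metadata (data about the data)."""
    root = ET.Element("article", {"class": "chapter"})  # styled via CSS
    meta = ET.SubElement(root, "meta")
    for key, value in metadata.items():
        ET.SubElement(meta, "item", {"name": key}).text = value
    ET.SubElement(root, "h1").text = title
    for text in paragraphs:
        # Paragraphs carry no fixed width or page position, so a reading
        # device can reflow them for any screen size.
        ET.SubElement(root, "p").text = text
    return ET.tostring(root, encoding="unicode")

xml = build_chapter(
    "Chapter 1",
    ["First paragraph.", "Second paragraph."],
    {"author": "A. Writer", "subject": "fiction"},  # hypothetical metadata
)
```

Because nothing in the markup fixes the page geometry, the same fragment can be rendered on a phone, a tablet, or an e-reader, with the metadata left available to search engines.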
Because electronic publishing often requires text mark-up (e.g., Hyper Text Markup Language or
some other markup language) to develop online delivery methods, the traditional roles of typesetters
and book designers, who created the printing set-ups for paper books, have changed. Designers of
digitally published content must have a strong knowledge of mark-up languages, the variety of
reading devices and computers available, and the ways in which consumers read, view or access
the content. However, in the 2010s, new user-friendly design software is becoming available for
designers to publish content in this standard without needing to know detailed programming
techniques, such as Adobe Systems' Digital Publishing Suite and Apple's iBooks.
E-commerce
Part of a series on
E-commerce
Online goods and services
Digital distribution
E-books
Software
Streaming media
Retail services
Banking
DVD-by-mail
Flower delivery
Food ordering
Grocery
Pharmacy
Travel
Marketplace services
Advertising
Auctions
Comparison shopping
Social commerce
Trading communities
Wallet
Mobile commerce
Payment
Ticketing
Customer service
Call centre
Help desk
Live support software
E-procurement
Purchase-to-pay
Gathering and using demographic data through web contacts and social media
1Timeline
2Business application
3Governmental regulation
4Forms
5Global trends
8Social impact
9Distribution channels
11See also
12References
13Further reading
14External links
Timeline
A timeline for the development of e-commerce:
1971 or 1972: The ARPANET is used to arrange a cannabis sale between students at
the Stanford Artificial Intelligence Laboratory and the Massachusetts Institute of Technology,
later described as "the seminal act of e-commerce" in John Markoff's book What the Dormouse
Said.[1]
1979: Michael Aldrich demonstrates the first online shopping system.[2]
1982: Minitel is introduced nationwide in France by France Télécom and used for online
ordering.
Edison cylinder phonograph ca. 1899. The phonograph cylinder is a storage medium. The phonograph may be
considered a storage device.
On a reel-to-reel tape recorder (Sony TC-630), the recorder is data storage equipment and the magnetic tape is
a data storage medium.
A data storage device is a device for recording (storing) information (data). Recording can be done
using virtually any form of energy, spanning from manual muscle power in handwriting, to acoustic
vibrations in phonographic recording, to electromagnetic energy modulating magnetic
tape and optical discs.
A storage device may hold information, process information, or both. A device that only holds
information is a recording medium. Devices that process information (data storage equipment) may
either access a separate portable (removable) recording medium or a permanent component to
store and retrieve data.
Electronic data storage requires electrical power to store and retrieve that data. Most storage
devices that do not require vision and a brain to read data fall into this category. Electromagnetic
data may be stored in either an analog or digital format on a variety of media. This type of
data is considered to be electronically encoded data, whether or not it is electronically stored in
a semiconductor device, since a semiconductor device was used to record it on its
medium. Most electronically processed data storage media (including some forms of computer data
storage) are considered permanent (non-volatile) storage, that is, the data will remain stored when
power is removed from the device. In contrast, most electronically stored information within most
types of semiconductor microcircuits (computer chips) is volatile memory, since it vanishes if power is
removed.
Except for barcodes, optical character recognition (OCR), and magnetic ink character
recognition (MICR) data, electronic data storage is easier to revise and may be more cost effective
than alternative methods due to smaller physical space requirements and the ease of replacing
(rewriting) data on the same medium.[2]
Contents
2See also
3References
4Further reading
5External links
"ECAD" redirects here. For the Brazilian music licensing organization, see Escritório Central de
Arrecadação e Distribuição. For other uses, see ECAD (disambiguation).
Electronic design automation (EDA), also referred to as electronic computer-aided
design (ECAD),[1] is a category of software tools for designing electronic systems such as integrated
circuits and printed circuit boards. The tools work together in a design flow that chip designers use to
design and analyze entire semiconductor chips. Since a modern semiconductor chip can have
billions of components, EDA tools are essential for their design.
This article describes EDA specifically with respect to integrated circuits.
Contents
1History
1.1Early days
2Current status
3Software focuses
3.1Design
3.2Simulation
3.4Manufacturing preparation
4Companies
4.1Old companies
4.2Acquisitions
6See also
7References
History
Early days
Before EDA, integrated circuits were designed by hand, and manually laid out. Some advanced
shops used geometric software to generate the tapes for the Gerber photoplotter, but even those
copied digital recordings of mechanically drawn components. The process was fundamentally
graphic, with the translation from electronics to graphics done manually. The best known company
from this era was Calma, whose GDSII format survives.
By the mid-1970s, developers started to automate the design along with the drafting. The first
placement and routing (Place and route) tools were developed. The proceedings of the Design
Automation Conference cover much of this era.
The next era began about the time of the publication of "Introduction to VLSI Systems" by Carver
Mead and Lynn Conway in 1980. This groundbreaking text advocated chip design with
programming languages that compiled to silicon. The immediate result was a considerable increase
in the complexity of the chips that could be designed, with improved access to design
verification tools that used logic simulation. Often the chips were easier to lay out and more likely to
function correctly, since their designs could be simulated more thoroughly prior to construction.
Although the languages and tools have evolved, this general approach of specifying the desired
behavior in a textual programming language and letting the tools derive the detailed physical design
remains the basis of digital IC design today.
The earliest EDA tools were produced academically. One of the most famous was the "Berkeley
VLSI Tools Tarball", a set of UNIX utilities used to design early VLSI systems. Still widely used are
the Espresso heuristic logic minimizer and Magic.
Another crucial development was the formation of MOSIS, a consortium of universities and
fabricators that developed an inexpensive way to train student chip designers by producing real
integrated circuits. The basic concept was to use reliable, low-cost, relatively low-technology IC
processes, and pack a large number of projects per wafer, with just a few copies of each project's
chips. Cooperating fabricators either donated the processed wafers, or sold them at cost, seeing the
program as helpful to their own long-term growth.
In 1981, the U.S. Department of Defense began funding of VHDL as a hardware description
language. In 1986, Verilog, another popular high-level design language, was first introduced as a
hardware description language by Gateway Design Automation. Simulators quickly followed these
introductions, permitting direct simulation of chip designs: executable specifications. In a few more
years, back-ends were developed to perform logic synthesis.
3D PCB layout
Current status
Current digital flows are extremely modular (see Integrated circuit design, Design closure,
and Design flow (EDA)). The front ends produce standardized design descriptions that compile into
invocations of "cells", without regard to the cell technology. Cells implement logic or other electronic
functions using a particular integrated circuit technology. Fabricators generally provide libraries of
components for their production processes, with simulation models that fit standard simulation tools.
Analog EDA tools are far less modular, since many more functions are required, they interact more
strongly, and the components are (in general) less ideal.
EDA for electronics has rapidly increased in importance with the continuous scaling
of semiconductor technology.[2] Some users are foundry operators, who operate the semiconductor
fabrication facilities, or "fabs", and design-service companies who use EDA software to evaluate an
incoming design for manufacturing readiness. EDA tools are also used for programming design
functionality into FPGAs.
Software focuses
Design
Main article: Design flow (EDA)
Logic synthesis – translation of an RTL design description (e.g. written in Verilog or VHDL) into
a discrete netlist of logic gates.
Schematic capture – for standard cell digital, analog, and RF, like Capture CIS in Orcad by
Cadence and ISIS in Proteus.
Layout – usually schematic-driven layout, like Layout in Orcad by Cadence and ARES in Proteus.
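The logic synthesis step above can be illustrated with a deliberately naive sketch: enumerating a Boolean function's truth table and emitting an unminimized sum-of-products gate netlist. This is a toy under stated assumptions, not how production synthesis tools (which minimize the cover and map it to a technology library) actually work.

```python
from itertools import product

def synthesize_sop(func, n_inputs):
    """Naive 'logic synthesis': enumerate the truth table of a Boolean
    function and emit a sum-of-products netlist of (name, gate, inputs)
    triples using NOT/AND/OR gates. Real tools (e.g. the Espresso
    minimizer) reduce the cover first; this keeps every minterm."""
    netlist, products = [], []
    for minterm in product([0, 1], repeat=n_inputs):
        if not func(*minterm):
            continue  # only true rows contribute a product term
        terms = []
        for i, bit in enumerate(minterm):
            if bit:
                terms.append(f"in{i}")
            else:
                # invert inputs that must be 0 in this minterm
                inv = f"n{i}_{len(products)}"
                netlist.append((inv, "NOT", [f"in{i}"]))
                terms.append(inv)
        name = f"p{len(products)}"
        netlist.append((name, "AND", terms))
        products.append(name)
    netlist.append(("out", "OR", products))  # sum of all product terms
    return netlist

# XOR of two inputs -> two product terms feeding one OR gate
nl = synthesize_sop(lambda a, b: a ^ b, 2)
```

The point of the sketch is the division of labor the article describes: the designer supplies only the behavior (here a Python lambda standing in for RTL), and the tool derives the gate-level structure.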
Simulation
Main article: Electronic circuit simulation
Hardware emulation – use of special-purpose hardware to emulate the logic of a proposed
design. Can sometimes be plugged into a system in place of a yet-to-be-built chip; this is
called in-circuit emulation.
Technology CAD – simulate and analyze the underlying process technology. Electrical
properties of devices are derived directly from device physics.
Electromagnetic field solvers, or just field solvers, solve Maxwell's equations directly for
cases of interest in IC and PCB design. They are known for being slower but more accurate than
layout extraction.
Functional verification
Clock domain crossing verification (CDC check): similar to linting, but these checks/tools
specialize in detecting and reporting potential issues like data loss and metastability due to the
use of multiple clock domains in the design.
Formal verification, also model checking: Attempts to prove, by mathematical methods, that
the system has certain desired properties, and that certain undesired effects (such as deadlock)
cannot occur.
Physical verification, PV: checking if a design is physically manufacturable, and that the
resulting chips will not have any function-preventing physical defects, and will meet original
specifications.
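The reachability core behind formal verification can be sketched as an explicit-state search that flags deadlock states. Real model checkers add symbolic state encodings and temporal-logic properties; the two-counter system below is purely hypothetical.

```python
from collections import deque

def find_deadlocks(initial, successors):
    """Tiny explicit-state model checker: breadth-first search over all
    reachable states, flagging states with no outgoing transitions
    (deadlocks). `successors` maps a state to its next states."""
    seen, frontier, deadlocks = {initial}, deque([initial]), []
    while frontier:
        state = frontier.popleft()
        nexts = successors(state)
        if not nexts:
            deadlocks.append(state)  # no way out: a deadlock
        for s in nexts:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return deadlocks

# Hypothetical two-counter system that gets stuck at (2, 2)
def step(state):
    a, b = state
    if (a, b) == (2, 2):
        return []
    return [((a + 1) % 3, b), (a, (b + 1) % 3)]

deadlocks = find_deadlocks((0, 0), step)
```

Because the search visits every reachable state, a verdict of "no deadlocks" is a proof over the whole state space, which is what distinguishes this style of verification from simulation with sample inputs.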
Hybrid vehicle
Sustainable energy
Energy conservation
Cogeneration
Green building
Heat pump
Low-carbon power
Microgeneration
Anaerobic digestion
Geothermal
Hydroelectricity
Solar
Tidal
Wind
Sustainable transport
Carbon-neutral fuel
Electric vehicle
Fossil fuel phase-out
Green vehicle
Plug-in hybrid
Environment portal
A hybrid vehicle uses two or more distinct types of power, such as an internal combustion
engine plus an electric motor,[1] e.g. in diesel-electric trains using diesel engines and electricity from
overhead lines, and submarines that use diesel engines when surfaced and batteries when submerged.
Other means to store energy include pressurized fluid, in hydraulic hybrids.
Contents
1Power
2Vehicle type
2.2Heavy vehicles
3Engine type
4.1Parallel hybrid
4.4Series hybrid
5Environmental issues
5.4Charging
9Marketing
10Adoption rate
12See also
13References
14External links
Power
Power sources for hybrid vehicles include:
Electric batteries/capacitors
Overhead electricity
Hydraulic accumulator
Hydrogen
Flywheel
Solar
Wind
Vehicle type
In a parallel hybrid bicycle human and motor torques are mechanically coupled at the
pedal or one of the wheels, e.g. using a hub motor, a roller pressing onto a tire, or a connection
to a wheel using a transmission element. Most motorized bicycles and mopeds are of this type.[2]
In a series hybrid bicycle (SHB) (a kind of chainless bicycle) the user pedals a generator,
charging a battery or feeding the motor, which delivers all of the torque required. They are
commercially available, being simple in theory and manufacturing.[3]
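The energy path of a series hybrid bicycle (pedals to generator, generator to battery, battery to motor) can be sketched as a simple power balance. The efficiency figures below are illustrative assumptions, not measured values for any real product.

```python
def series_hybrid_power(pedal_w, gen_eff=0.85, batt_eff=0.90, motor_eff=0.85):
    """Power-balance sketch for a series hybrid bicycle: the rider pedals
    a generator, energy passes through the battery, and an electric motor
    delivers all wheel torque. Efficiencies are hypothetical."""
    electrical_w = pedal_w * gen_eff   # generator converts pedal power
    stored_w = electrical_w * batt_eff  # battery round-trip loss
    return stored_w * motor_eff         # power reaching the wheel

wheel_w = series_hybrid_power(150)  # assume a 150 W rider input
```

The cascaded conversion losses in the sketch show the trade-off behind the design: a series hybrid gives up some drivetrain efficiency relative to a chain in exchange for decoupling the pedals from the wheel.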
The first published prototype of an SHB is by Augustus Kinzel (US Patent 3,884,317) in 1975. In
1994 Bernie Macdonalds conceived the Electrilite[4] SHB with power electronics allowing regenerative
braking and pedaling while stationary. In 1995 Thomas Müller designed and built a "Fahrrad mit
elektromagnetischem Antrieb" for his 1995 diploma thesis. In 1996 Jürg Blatter and Andreas Fuchs
of Berne University of Applied Sciences built an SHB and in 1998 modified a Leitra tricycle
(European patent EP 1165188). Until 2005 they built several prototype
SH tricycles and quadricycles.[5] In 1999 Harald Kutzke described an "active bicycle": the aim is to
approach the ideal bicycle weighing nothing and having no drag by electronic compensation.
A SHEPB prototype made by David Kitson in Australia[6] in 2014 used a lightweight brushless DC
electric motor from an aerial drone and small hand-tool sized internal combustion engine, and a 3D
printed drive system an