
Y2K BUG

The Y2K bug was a computer flaw, or bug, that may have caused problems
when dealing with dates beyond December 31, 1999. The flaw, faced by
computer programmers and users all over the world on January 1, 2000, is
also known as the "millennium bug." (The letter K, which stands for kilo (a
unit of 1000), is commonly used to represent the number 1,000. So, Y2K
stands for Year 2000.) Many skeptics believe it was barely a problem at all.
When complicated computer programs were being written during the 1960s
through the 1980s, computer engineers used a two-digit code for the year.
The "19" was left out. Instead of a date reading 1970, it read 70. Engineers
shortened the date because data storage in computers was costly and took
up a lot of space.
As the year 2000 approached, computer programmers realized that
computers might not interpret 00 as 2000, but as 1900. Activities that were
programmed on a daily or yearly basis could be disrupted or miscalculated. As
December 31, 1999, turned into January 1, 2000, computers might treat the
new date as January 1, 1900.
Banks, which calculate interest on a daily basis, faced real problems.
Interest is the amount of money a lender, such as a bank, charges a
customer, such as an individual or business, for a loan. Instead of calculating
the interest for one day, the computer would calculate interest for a period of
almost minus 100 years!
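As a rough illustration, here is a minimal Python sketch (the two-digit values and the naive 1900-based reconstruction are illustrative assumptions, not any bank's actual code) of how a two-digit year makes the elapsed time come out hugely negative:

    from datetime import date

    # Two-digit years, as stored by many legacy systems.
    loan_start_yy = 99   # meant as 1999
    today_yy = 0         # meant as 2000, but read as 1900

    # Naive reconstruction that assumes every year belongs to the 1900s.
    loan_start = date(1900 + loan_start_yy, 12, 31)   # 1999-12-31
    today = date(1900 + today_yy, 1, 1)               # read as 1900-01-01

    elapsed_days = (today - loan_start).days
    print(elapsed_days)   # -36523: roughly minus 100 years of daily interest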
Centers of technology, such as power plants, were also threatened by
the Y2K bug. Power plants depend on routine computer maintenance for
safety checks, such as water pressure or radiation levels. Not having the
correct date would throw off these calculations and possibly put nearby
residents at risk.
Transportation also depends on the correct time and date. Airlines in
particular were put at risk, as computers with records of all scheduled flights
would be threatened; after all, there were very few airline flights in 1900.
Y2K was both a software and hardware problem. Software refers to the
electronic programs used to tell the computer what to do. Hardware is
the machinery of the computer itself. Software and hardware companies
raced to fix the bug and provided "Y2K compliant" programs to help. The
simplest solution was the best: The date was simply expanded to a four-digit
number. Governments, especially in the United States and the United
Kingdom, worked to address the problem.
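Expanding stored two-digit years to four digits requires deciding which century each value belongs to. A minimal Python sketch of one way this was commonly handled (the cutoff value of 70 is purely an illustrative assumption):

    def expand_year(yy, cutoff=70):
        """Map a stored two-digit year to a four-digit year using a cutoff."""
        return 1900 + yy if yy >= cutoff else 2000 + yy

    print(expand_year(99))  # 1999
    print(expand_year(5))   # 2005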

In the end, there were very few problems. A nuclear energy facility in
Ishikawa, Japan, had some of its radiation equipment fail, but backup
facilities ensured there was no threat to the public. The U.S. detected missile
launches in Russia and attributed that to the Y2K bug. But the missile
launches were planned ahead of time as part of Russia's conflict in its
republic of Chechnya. There was no computer malfunction.
Countries such as Italy, Russia, and South Korea had done little to prepare for
Y2K. They had no more technological problems than those countries, like the
U.S., that spent millions of dollars to combat the problem.
Because there were so few problems, many people dismissed the Y2K bug as
a hoax or an end-of-the-world scare.

Y2038 BUG
What is Y2038?
The year 2038 problem is caused by 32-bit processors and the limitations of
the 32-bit systems they power. The processor is the central component that
drives all computers and computing devices. It crunches the numbers and
performs calculations that allow programs to run.
Essentially, when the clock strikes 03:14:07 UTC on 19 January 2038,
computers still using 32-bit systems to store and process the date and time
won't be able to cope with the change. Like the Y2K bug, the
computers won't be able to tell the difference between the year 2038 and
1970, the year from which all current computer systems measure time.
What does 32-bit mean?
Processors come in many different sizes and capabilities designed for
different applications, but most of them operate and crunch numbers in a
similar manner.
The first desktop computer processors were 16-bit and ran 16-bit software,
which meant they could store and access up to 2^16, or 65,536, distinct
values within 64KB of memory. Other notable 16-bit systems include the
1990s gaming consoles the Super Nintendo and Sega Mega Drive, which took
over from 1980s 8-bit systems.
Later, 32-bit processors were developed that ran 32-bit software and
increased the number of values a system could handle to 2^32, or
4,294,967,296, different values within 4GB of memory. These systems stored
dates and times in 32-bit chunks. In reality that large number of different
values is halved for timekeeping and other signed data storage, as the values
range from -2,147,483,648 through 2,147,483,647, leaving only
2,147,483,647 positive values above zero.
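These ranges follow directly from the bit widths; a quick Python sketch to confirm the arithmetic:

    # Distinct values an n-bit integer can hold, and the signed range.
    for bits in (16, 32, 64):
        distinct = 2 ** bits
        signed_min = -(2 ** (bits - 1))
        signed_max = 2 ** (bits - 1) - 1
        print(f"{bits}-bit: {distinct:,} values, signed {signed_min:,} .. {signed_max:,}")

    # 32-bit: 4,294,967,296 values, signed -2,147,483,648 .. 2,147,483,647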
Modern processors that power almost every computer bought today, and that
are starting to make their way into smartphones and tablets too, are based on
a 64-bit system and 64-bit software. They also have a maximum number of
different values they can address, but at 2^64, or roughly 18 quintillion,
values within 16 exabytes of memory, the ceiling is considerably higher: the
date limit lies about 292 billion years from now, more than twenty times the
estimated age of the universe.
What's the problem with 32-bit systems?
The basic problem is a computer's capacity to count the time in
seconds past a certain date. As computers measure time in seconds from 1
January 1970, 03:14:07 UTC on 19 January 2038 is equal to 2,147,483,647
seconds after 1 January 1970. As 32-bit date and time systems can only
count up to 2,147,483,647 separate positive values, the system cannot
continue counting the seconds past that time.
To continue counting the seconds, the value wraps around and is stored as a
negative number, starting at -2,147,483,648 (a date back in December 1901)
and counting up towards zero. Most systems will not be able to cope with this
change and will likely fail.
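A minimal Python sketch of the rollover moment (Python's own integers do not overflow, so the 32-bit wrap-around is simulated here with struct packing):

    import struct
    from datetime import datetime, timedelta, timezone

    INT32_MAX = 2**31 - 1                         # 2,147,483,647
    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

    # The last second a signed 32-bit time counter can represent.
    print(EPOCH + timedelta(seconds=INT32_MAX))   # 2038-01-19 03:14:07+00:00

    # One second later the value wraps when forced back into 32 bits.
    wrapped, = struct.unpack("<i", struct.pack("<I", INT32_MAX + 1))
    print(wrapped)                                # -2147483648
    print(EPOCH + timedelta(seconds=wrapped))     # 1901-12-13 20:45:52+00:00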
A similar issue happened with YouTube, where the number of views of Psy's
Gangnam Style passed 2bn and broke the 2,147,483,647 limit of the 32-bit
counter Google supposedly used.
What will happen?
How computer systems will fail is unknown. Some may continue to work fine,
just with the wrong date. Others that rely on precise dates and times may
simply stop working.
The biggest issue, as with the Y2K bug, would be computer systems that
control crucial infrastructure all failing at the same time. Planes falling out
of the sky was the common scaremongering example from Y2K.
Is it really going to happen?
The simple answer is no, not if the computer systems are upgraded in time.
The problem is likely to rear its head before the year 2038 for any system
that counts years into the future.

A calendar system that counts and stores appointments for 20 years into the
future will start seeing issues in 2018, for instance.
However, almost all modern processors in desktop computers are now made
and sold as 64-bit systems running 64-bit software. Microsoft's Windows has
offered a 64-bit version since Windows XP Professional 64-bit, released in
2005.
Apple's OS X desktop software has been exclusively 64-bit since the release
of Mac OS X 10.7 Lion in 2011.
Many Unix systems that are used to power web servers and other backend
hardware may still use 32-bit date systems, but most should be replaceable
over time.
The computers that have the potential to cause the biggest issues are those
embedded systems that cannot be upgraded. They are used in many
transportation systems and other long-lived devices, such as stability control
systems in cars and other isolated computer-based systems.
Not all embedded systems rely on precise dates, however, and so will be
unaffected; many just track the difference between times, not absolute
dates.
Those embedded systems that are affected are likely to have to be
completely replaced, as the software can't simply be upgraded.
What's going to be done?
The reality of the Y2038 problem is that many 32-bit systems will
naturally wear out or be replaced in the next 23 years. Those systems that
might not will need changing ahead of time.
Infrastructure is likely to be the biggest headache to fix (devices in power
stations, for instance), but planning the change far enough in advance should
remove most big problems.

DISRUPTIVE TECHNOLOGIES
What is a disruptive technology? According to Clayton M. Christensen, a
Harvard Business School professor, a disruptive technology is a new
emerging technology that unexpectedly displaces an established one.
Christensen used this term for the first time in his 1997 best-selling book
entitled The Innovator's Dilemma. In it the author established two
categories of new technologies: sustaining and disruptive. Sustaining
technologies are well-known technologies that undergo successive
improvements, whereas disruptive technologies are new technologies that
still lack refinement, often have performance problems, are known only to a
limited public, and might not yet have a proven practical application.
Disruption can also be seen from a different angle if we look at what the
word means: something that drastically alters or destroys the structure of
society. Disruptive technologies hold within themselves the capacity to alter
our lifestyle, what we mean by work, business and the global economy. What
are those technologies? And what benefits will they bring to the world we
live in?
According to a report published by the McKinsey Global Institute, there are
12 technologies that could produce great disruption in the near future, as
they might transform the economy and our lives. The McKinsey Global
Institute report also offers quantitative data on how the outlined technologies
could impact the economy, and what risks they will bring. Curiously, the
concepts behind those technologies have been around for a long time. Here
we outline a list of the 12 most disruptive technologies:

1. Mobile Internet
Increasingly inexpensive and capable mobile computing devices and Internet
connectivity are already bringing benefits to a wide spectrum of areas.
One of these is the treatment of chronic diseases through remote health
monitoring. Another is mobile banking platforms, which have already
produced radical changes in some African countries such as Uganda.
2. Automation of knowledge work
There will be more intelligent software systems that can perform knowledge
work tasks. In an interview with McKinsey, Eric Schmidt, executive chairman
of Google, refers to the radical changes these intelligent software systems
will bring to us and the way we interact with the computer, looking at it from
the brighter side:
"So I think we're going to go from the sort of command-and-control
interfaces where you tell the computer like a dog, 'Bark,' to a situation
where the computer becomes much more of a friend. And, a friend in the
sense that the computer says, 'Well, we kind of know what you care
about.' And again, you've given it permission to do this. And it says, 'Well,
maybe you should do this' or 'Maybe you should do that.'"
Intelligent technologies might also provoke fear, and raise the very old
debate of the machine versus human being.
3. Internet of Things
The Internet of Things (or IoT for short), a term coined by Kevin Ashton in
1999, refers to uniquely identifiable objects and their virtual representations
in an Internet-like structure. Equipping all objects in the world with minuscule
identifying devices or machine-readable identifiers could transform daily life.
These are all interconnected in networks of low cost sensors and actuators
for data collection, monitoring, decision making and process optimization.
Such devices can be used for example in manufacturing, health care and
mining. There can be dangers as well, as the connection of billions of smart
devices can represent a real security threat.
4. Advanced robotics
This is an exciting area that promises a lot. Advanced robotics corresponds to
increasingly capable robots or robotic tools, with enhanced senses,
dexterity, and intelligence. They can perform tasks once thought too delicate
or uneconomical to automate. These technologies can bring amazing
benefits to society, including robotic surgical systems that make procedures
less invasive, and robotic prosthetics and exoskeletons that restore functions
for amputees and the elderly.
5. Cloud
The use of computer hardware and software resources to deliver services
over the Internet or a network is already with us, as we all know, and
developing at great speed.
6. Autonomous or Near-Autonomous Vehicles
This corresponds to vehicles that can navigate and operate autonomously or
semi-autonomously in many situations, using advanced sensors like LIDAR
and machine-to-machine communication systems. In an article published in
Time Magazine in 2013, Chris Anderson cites some of the new applications of
these advanced robotic devices, speaking about the new uses of drones:
Some of these new users are farmers, who early on spotted that drones were
a great way to perform crop surveys and field analysis. Unlike satellites,

which fly above the clouds and are too expensive to target on demand, farm
drones can automatically survey crops every morning. (...) Closer to home,
architects and real estate agents already use drones to scan buildings,
sending them on automatic spirals around the scene to stitch hundreds of
pictures into high-resolution 3D maps. Today you can already have a
personal drone follow you around, tracking the phone in your pocket, and
keeping a camera focused on you for the ultimate selfie.
7. Next-generation Genomics
The new advances in genomics combine the science used for imaging
nucleotide base pairs (the units that make up DNA) with rapidly advancing
computational and analytic capabilities. With an improved understanding of
the genomic structure of humans, it will be possible to manipulate genes and
improve health diagnostics and treatments. Next-generation genomics
knowledge can be extended to a better understanding of plants and animals,
which could radically improve agriculture. Genomics was considered by Eric
Schmidt the most important disruption coming up, according to
an appearance on Bloomberg TV in December 2013:
"The biggest disruption that we don't really know what's going to happen is
probably in the genetics area. The ability to have personal genetics records
and the ability to start gathering all of the gene sequencing into places will
yield discoveries in cancer treatment and diagnostics over the next year
that are unfathomably important."
8. Next-generation Energy Storage
Energy-storage devices or physical systems store energy for later use. These
technologies, such as lithium-ion batteries and fuel cells, already power
electric and hybrid vehicles, along with billions of portable consumer
electronics. Over the coming decade, advancing energy-storage technology
could make electric vehicles cost competitive, bring electricity to remote
areas of developing countries, and improve the efficiency of the utility grid.
There are calculations stating that 40 to 100 percent of vehicles sold in 2025
could be electric or hybrid.
9. 3D Printing
Manufacturing techniques that create objects by printing successive layers of
material from digital models might become more accessible to the average
person thanks to 3D printers. Consumer use of 3D printing could save a lot of
money in cost per printed product, while allowing customers their own
personalized customization. It can also be used in bioprinting of tissue and
organs, direct product manufacturing, and tool and
mold manufacturing. With a 3D printer you can essentially build your own
thing. Another interesting aspect is the new materials that can be used in
3D printers, which have all sorts of new properties.
10. Advanced Materials
Materials that have superior characteristics, such as better strength and
conductivity, or enhanced functionality, such as memory or self-healing
capabilities, will bring various benefits to the most widespread types of
industries.
11. Advanced Oil and Gas Exploration and Recovery
Advances in exploration and recovery techniques that enable the extraction
of additional oil and gas.
12. Renewable Electricity
Generation of electricity from renewable sources, such as the sun and the
wind, will reduce harmful climate impact. Its component technologies include
photovoltaic cells, wind turbines, concentrated solar power, hydroelectric and
ocean-wave power, and geothermal energy.

What Are Cores?


A processor core is a processing unit which reads in instructions to perform
specific actions. Instructions are chained together so that, when run in real
time, they make up your computer experience. Literally everything you do on
your computer has to be processed by your processor. Whenever you open a
folder, that requires your processor. When you type into a word document,
that also requires your processor. Things like drawing the desktop
environment, the windows, and game graphics are the job of your graphics
card (which contains hundreds of processors to quickly work on data
simultaneously), but to some extent they still require your processor as
well.

DIFFERENCE BETWEEN DUAL CORE CPU AND QUAD CORE CPU

Key Difference: A dual-core processor is a type of central processing
unit (CPU) that has two complete execution cores. Quad-core
processors have four independent central processing units that can
read and execute instructions.
The constant evolution of computers requires them to be faster, stronger and
better. This requirement has companies butting heads trying to figure out
ways to make computers faster and processors more powerful. This has given
birth to technologies such as dual-core and quad-core processors. Dual-core
and quad-core are also known as multi-core processors.

A dual-core processor is a type of central processing unit (CPU) that has
two complete execution cores. Hence, it has the combined power of two
processors, their caches and the cache controllers on a single chip. This
makes dual-core processors well suited for multitasking. Dual-core
processors have two cores that have an independent interface to the
frontside bus. Each core has its own cache. This allows the operating system
to have sufficient resources to handle intensive tasks in parallel.
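A minimal sketch of two CPU-heavy tasks being handled in parallel, using Python's standard multiprocessing module (the work function is just a placeholder, not part of the original text):

    from multiprocessing import Pool
    import os

    def busy_work(n):
        """Placeholder CPU-bound task: sum the first n integers."""
        return sum(range(n))

    if __name__ == "__main__":
        print("logical CPUs:", os.cpu_count())
        # Two worker processes can run truly in parallel on a dual-core CPU.
        with Pool(processes=2) as pool:
            results = pool.map(busy_work, [10_000_000, 10_000_000])
        print(results)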
Dual Core is a generic name for any processor with two cores from any
manufacturer. However, due to marketing and Intel's predominance in the
CPU market, "dual core" has become synonymous with the Intel Pentium Dual
Core. It may sometimes also be used to refer to Intel's Core 2 Duo line. AMD,
Intel's main competitor, also had a dual-core processor under the X2
brand.
Quad-core processors are the next step after dual-core. True to its name,
quad-core refers to processors having four independent central processing
units that can read and execute instructions. Some quad-core designs
actually comprise two dual-core processors, where cores 1 and 2 share one
memory cache, while cores 3 and 4 share another. In case core 1 needs to
communicate with core 3, it has to do so through an external frontside bus.
Both Intel and AMD have released quad-core processors.
Though quad-core is a faster and better technology, it also has some
limitations. The true performance of quad-core is often lacking due to
external problems. One such problem is heat: each core generates a lot of
heat while running, so four cores require more powerful cooling measures,
such as liquid cooling (which is harder to implement), or a reduction in clock
speed, which causes a dip in the cores' performance. Another problem is the
rest of the hardware: though the processors have been upgraded, the
surrounding hardware has not always caught up to the processor. Because of
this, during the execution of heavy tasks, the data flowing to and from the
processor can become congested.

Multithreading
Definition - What does Multithreading mean?
Multithreading is a type of execution model that allows multiple threads to
exist within the context of a process such that they execute independently
but share their process resources. A thread maintains a list of information
relevant to its execution, including its scheduling priority, exception handlers,
a set of CPU registers, and stack state in the address space of its hosting
process.
Multithreading is also known as threading.
Threading can be useful in a single-processor system by allowing the main
execution thread to be responsive to user input, while the additional worker
thread can execute long-running tasks that do not need user intervention in
the background. Threading in a multiprocessor system results in true
concurrent execution of threads across multiple processors and is therefore
faster. However, it requires more careful programming to avoid non-intuitive
behavior such as race conditions, deadlocks, etc.
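A minimal Python sketch of the pattern described above: a worker thread runs a long task in the background while the main thread stays responsive, with a lock guarding the shared counter against race conditions:

    import threading
    import time

    counter = 0
    lock = threading.Lock()          # protects the shared counter

    def long_running_task():
        global counter
        for _ in range(5):
            time.sleep(0.2)          # stand-in for slow background work
            with lock:
                counter += 1

    worker = threading.Thread(target=long_running_task)
    worker.start()

    # The main thread stays free to respond while the worker runs.
    while worker.is_alive():
        with lock:
            current = counter
        print("main thread responsive, counter =", current)
        time.sleep(0.3)

    worker.join()
    print("final counter:", counter)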
Operating systems use threading in two ways:

Pre-emptive multithreading, in which the context switch is controlled by the
operating system. Context switching might be performed at an inappropriate
time; hence, a high-priority thread could be indirectly pre-empted by a
lower-priority thread.

Cooperative multithreading, in which context switching is controlled by the
thread itself. This could lead to problems, such as deadlocks, if a thread is
blocked while waiting for a resource to become free.

Core i3, Core i5, Core i7


If you want a plain and simple answer, then generally speaking, Core i7s are
better than Core i5s, which are in turn better than Core i3s. Nope, Core i7
does not have seven cores nor does Core i3 have three cores. The numbers
are simply indicative of their relative processing powers.
Their relative levels of processing power are based on a collection of criteria
involving their number of cores, clock speed (in GHz), size of cache, as well
as Intel technologies like Turbo Boost and Hyper-Threading.
Note: Core processors can be grouped in terms of their target devices, i.e.,
those for laptops and those for desktops. Each has its own specific
characteristics/specs. To avoid confusion, we'll focus on the desktop variants.
Note also that we'll be focusing on the 4th Generation (codenamed Haswell)
Core CPUs.
Architecture
First, it's important to explain architecture and codenames. Every year,
Intel releases a newer, faster range of processors. We're currently starting to
see Devil's Canyon chips, a refresh of last year's Haswell. Before that we had
Ivy Bridge and Sandy Bridge. Generally speaking, a Core i3, i5 or i7 that has a
newer architecture is faster than the older-architecture processor that it
replaces. You can tell the architecture by the model number: Devil's Canyon
and Haswell start with a 4; Ivy Bridge with a 3; and Sandy Bridge with a 2. The
most important thing about different architectures is making sure that you
have a motherboard that supports the type of processor you're interested
in. Processors based on the same architecture, regardless of whether they're
a Core i3, i5 or i7, are fundamentally the same inside. The differences in
performance come from which features are enabled or disabled, the clock
speed and how many cores each one has.

Model               Core i3    Core i5    Core i7
Number of cores     2          4          4
Hyper-Threading     Yes        No         Yes
Turbo Boost         No         Yes        Yes
K model             No         Yes        Yes

The feature table above shows you how the most popular processors line up
in terms of features. The differences between Core i3, i5 and i7 are the same
for Sandy Bridge, Ivy Bridge, Haswell and Devil's Canyon (a Haswell refresh).
Note that there are exceptions (see below), but you're unlikely to
encounter these odd models when buying a new CPU. Also, mobile
processors are completely different again, so we're focusing on desktop
models here only. What's important is what these different features mean,
which we'll explain.
Number of cores
The more cores there are, the more tasks (known as threads) can be served
at the same time. The lowest number of cores is found in Core i3 CPUs,
which have only two cores. Currently, all Core i3s are dual-core
processors.
Currently all Core i5 processors, except for the i5-4570T, are quad cores in
Australia. The Core i5-4570T is only a dual-core processor with a standard
clock speed of 2.9GHz. Remember that all Core i3s are also dual cores.
Furthermore, the i3-4130T is also 2.9GHz, yet a lot cheaper.
At this point, I'd like to grab the opportunity to illustrate how a number of
factors affect the overall processing power of a CPU and determine whether
it should be considered an i3, an i5, or an i7.
Even though the i5-4570T normally runs at the same clock speed as the Core
i3-4130T, and even though they both have the same number of cores, the
i5-4570T benefits from a technology known as Turbo Boost.
Intel Turbo Boost
Intel Turbo Boost Technology allows a processor to dynamically increase
its clock speed whenever the need arises. The maximum amount by which
Turbo Boost can raise the clock speed at any given time depends on the
number of active cores, the estimated current consumption, the estimated
power consumption, and the processor temperature.
For the Core i5-4570T, the maximum allowable processor frequency is
3.6GHz. Because none of the Core i3 CPUs have Turbo Boost, the i5-4570T
can outrun them whenever it needs to. Because all Core i5 processors are
equipped with the latest version of this technology, Turbo Boost 2.0, all of
them can outrun any Core i3.
Cache size
Whenever the CPU finds that it keeps on using the same data over and over,
it stores that data in its cache. Cache is just like RAM, only faster because
it's built into the CPU itself. Both RAM and cache serve as holding areas for
frequently used data. Without them, the CPU would have to keep on reading
from the hard disk drive, which would take a lot more time.
Basically, RAM minimises interaction with the hard disk, while cache
minimises interaction with the RAM. Obviously, with a larger cache, more
data can be accessed quickly. The Haswell (fourth generation) Core i3
processors have either 3MB or 4MB of cache. The Haswell Core i5s have
either 4MB or 6MB of cache. Finally, all Core i7 CPUs have 8MB of cache,
except for i7-4770R, which has 6MB. This is clearly one reason why an i7
outperforms an i5 and why an i5 outperforms an i3.
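The CPU cache itself is managed by the hardware, not by your programs, but the same principle (keep frequently used results close at hand instead of fetching or computing them again) is easy to illustrate in software. A small Python analogy using functools.lru_cache, with a sleep standing in for a slow disk read:

    from functools import lru_cache
    import time

    @lru_cache(maxsize=None)
    def slow_lookup(x):
        """Stand-in for an expensive fetch, e.g. reading from a slow disk."""
        time.sleep(0.5)
        return x * x

    start = time.perf_counter()
    slow_lookup(42)                   # first access: slow, result gets cached
    first = time.perf_counter() - start

    start = time.perf_counter()
    slow_lookup(42)                   # repeat access: served from the cache
    second = time.perf_counter() - start

    print(f"first access: {first:.3f}s, cached access: {second:.6f}s")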
Hyper-Threading
Strictly speaking, only one thread can be served by one core at a time. So if
a CPU is a dual core, then supposedly only two threads can be served
simultaneously. However, Intel has a technology called Hyper-Threading. This
enables a single core to serve multiple threads.
For instance, a Core i3, which is only a dual core, can actually serve two
threads per core. In other words, a total of four threads can run
simultaneously. Thus, even though Core i5 processors are quad cores, since
they don't support Hyper-Threading (again, except the i5-4570T), the number
of threads they can serve at the same time is just about equal to that of their
Core i3 counterparts.
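You can see the effect of Hyper-Threading on a machine as the gap between physical cores and logical processors. A sketch of one way to check (psutil is a third-party package and an assumption here; the standard os.cpu_count only reports logical CPUs):

    import os
    import psutil   # third-party: pip install psutil

    physical = psutil.cpu_count(logical=False)   # real cores
    logical = psutil.cpu_count(logical=True)     # hardware threads the OS sees

    print(f"physical cores: {physical}, logical processors: {logical}")
    # On a Hyper-Threading CPU, logical is typically 2 * physical,
    # e.g. a dual-core Core i3 reports four logical processors.
    print("os.cpu_count() reports logical CPUs too:", os.cpu_count())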
This is one of the many reasons why Core i7 processors are the crème de la
crème. Not only are they quad cores, they also support Hyper-Threading.
Thus, a total of eight threads can run on them at the same time. Combine
that with 8MB of cache and Intel Turbo Boost Technology, which all of them
have, and you'll see what sets the Core i7 apart from its siblings.

The upshot is that if you do a lot of things at the same time on your PC, then
it might be worth forking out a bit more for an i5 or i7. However, if you use
your PC to check emails, do some banking, read the news, and download a
bit of music, you might be equally served by the cheaper i3.
At DCA Computers, we regularly hear across the sales counter, "I don't mind
paying for a computer that will last; which CPU should I buy?" The sales tech
invariably responds, "Well, that depends on what you use your computer for."
If it's the scenario described above, we pretty much tell our customers to
save their money and buy an i3 or AMD dual core.
Another factor in this deliberation is that more and more programs are being
released with multithread capability. That is, they can use more than one
CPU thread to execute a single command. So things happen more quickly.
Some photo editors and video editing programs are multi-threaded, for
example.
