Dell Data Protection Encryption is a suite of applications that allows you to:
1. Detect data security risks on desktops, laptops and external media.
2. Protect data on these devices by enforcing access control policies, authentication and
encryption of sensitive data.
3. Manage data centrally with policies using collaborative tools that integrate into existing
user directories.
4. Support key and data recovery, automatic updates and tracking for protected devices.
Encryption
o Endpoint Middleware (abstraction layer)
o Endpoint Software (policy manager and local enforcement)
o Remote Management Console (remote policy manager)
Port Control
o Endpoint Middleware
o Endpoint Software (policy manager and local enforcement)
o Remote Management Console (remote policy manager)
Intel Rapid Storage Technology provides new levels of protection, performance, and expandability for desktop and
mobile platforms. Whether using one or multiple hard drives, users can take advantage of enhanced performance
and lower power consumption. When using more than one drive, the user can have additional protection against data
loss in the event of a hard drive failure.
Intel Rapid Storage Technology was formerly known as Intel Matrix Storage Manager. Starting with version 9.5, a
brand new user interface makes creating and managing your storage simple and intuitive. Combined with Intel Rapid
Recover Technology, setting up data protection can be accomplished easily with an external drive.
Valuable digital memories are protected against a hard drive failure when the system is configured for any one of
three fault-tolerant RAID levels: RAID 1, RAID 5, and RAID 10. By seamlessly storing copies of data on one or more
additional hard drives, any hard drive can fail without data loss or system downtime. When the failed drive is removed
and a replacement hard drive is installed, data fault tolerance is easily restored.
Intel Rapid Storage Technology can also improve the performance of disk-intensive applications such as home video editing. By combining from two to six drives in a RAID 0 configuration, data can be accessed on each drive simultaneously, speeding up response time for data-intensive applications. Also, due to drive load balancing, even systems with RAID 1 can take advantage of faster boot times and data reads.
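The capacity and redundancy trade-offs among the RAID levels discussed above can be made concrete with a quick calculation. This is only a sketch with idealized numbers; real arrays reserve additional space for metadata:

```python
def raid_usable_capacity(level: int, drives: int, drive_gb: float) -> float:
    """Usable capacity in GB for equal-size drives at a given RAID level."""
    if level == 0:                       # striping: all space, no redundancy
        return drives * drive_gb
    if level == 1:                       # mirroring: one drive's worth of space
        return drive_gb
    if level == 5 and drives >= 3:       # striping with parity: lose one drive
        return (drives - 1) * drive_gb
    if level == 10 and drives >= 4 and drives % 2 == 0:
        return (drives // 2) * drive_gb  # striped mirrors: half the space
    raise ValueError("unsupported level/drive-count combination")

for level, n in [(0, 2), (1, 2), (5, 3), (10, 4)]:
    gb = raid_usable_capacity(level, n, 500)
    print(f"RAID {level} with {n} x 500 GB drives: {gb:.0f} GB usable")
```

Note how RAID 0 maximizes space with no fault tolerance, while RAID 1, 5, and 10 each give up some capacity to survive a drive failure.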
Intel Rapid Storage Technology provides benefits to users of a single drive as well. Through AHCI, storage
performance is improved through Native Command Queuing (NCQ). AHCI also delivers longer battery life with Link
Power Management (LPM), which can reduce the power consumption of the chipset and Serial ATA (SATA) hard
drive.
4K Hard Drive
Overview
Historically, hard drives read and write data in fixed chunks. For several decades those chunks have been 512-byte sectors. As drive capacities grow, there is a need to transition to transferring data in larger chunks. Hard drive manufacturers have therefore agreed on a new standard adopted by IDEMA (the International Disk Drive Equipment and Materials Association) that defines an increase in the basic sector size used on the media of hard drives. This new IDEMA standard mandates that the hard drive sector size change from 512 bytes to 4096 (4K) bytes.
Each sector on the platter carries overhead in addition to the user data:
1. Sync/DAM (lead-in)
2. Inter-sector gaps
3. ECC (error correction data)
The crux of the problem is that three factors are in constant need of balancing in hard drive design: areal density, the signal-to-noise ratio (SNR) in reading from drive platters, and the use of Error Correcting Code (ECC) to find and correct any errors that occur. As areal density increases, sectors become smaller and their SNR decreases. To compensate, improvements are made to ECC (usually through the use of more bits) in order to maintain reliability. So for hard drive manufacturers to add more space, they ultimately need to improve their error-correction capabilities, which means the necessary ECC data requires more space.
At some point in this process drive manufacturers stop gaining any usable space: they have to add as much ECC data as they gained from the increased areal density in the first place, which limits their ability to develop larger drives. Drive manufacturers dislike this both because it hinders their ability to develop new drives and because it means their overall format efficiency (the amount of space on a platter actually used to store user data) drops. Drive manufacturers want to build bigger drives, and they want to spend as little space on overhead as possible.
But all is not lost. The principal problem is that ECC correction takes place in 512B chunks, while ECC is more efficient when used over larger chunks of data. If ECC data is calculated against a larger sector, then even though more ECC data is necessary than for a single 512B sector, less ECC data is needed than the sum of multiple 512B sectors to maintain the same level of operational reliability.
Maintaining multiple blocks of ECC has proven to require a lot of overhead, and reducing the number of ECC blocks needed to cover the same amount of user data improves the efficiency of the drive.
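The efficiency argument can be put into rough numbers. Using figures commonly cited for Advanced Format — roughly 40 bytes of ECC per 512B sector versus roughly 100 bytes for one 4K sector (illustrative values, not a specification) — the comparison looks like this:

```python
def format_efficiency(user_bytes: int, ecc_bytes: int) -> float:
    """Fraction of on-platter space that holds user data (ignoring gaps/sync)."""
    return user_bytes / (user_bytes + ecc_bytes)

# Eight legacy 512B sectors vs one 4K sector covering the same user data.
legacy = format_efficiency(8 * 512, 8 * 40)   # 8 sectors, ~40 B ECC each
advanced = format_efficiency(4096, 100)       # 1 sector, ~100 B ECC
print(f"8 x 512B sectors: {legacy:.1%} efficient")
print(f"1 x 4K sector:    {advanced:.1%} efficient")
```

Even with these approximate figures, the single 4K sector spends far fewer total ECC bytes on the same 4096 bytes of user data, which is exactly the format-efficiency gain described above.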
Advanced Format technology will help the hard drive industry deliver higher capacity hard drives and also provides an
increase in data reliability by providing a more powerful error correction scheme using longer ECC code words.
Hard drive makers are transitioning specifically to a 4K sector; the rationale follows from the problems described earlier. A 4K sector is much larger than a 512B sector, which means it benefits more from the ECC optimizations above, which in turn brings a greater increase in format efficiency than smaller sectors would.
4K also happens to be a magic number elsewhere in computing. This is primarily rooted in the fact that a normal page of memory on an x86 processor continues to be 4KB (4MB pages also exist). The x86 page size has in turn led to file system clusters (the smallest unit of storage in a file system) becoming 4KB, since 4KB clusters fit neatly into a page of memory, while the need for smaller clusters has subsided amid a general increase in file size (i.e., fewer files are smaller than 4KB and waste space). So 4KB physical sectors map perfectly onto 4KB file system clusters, which in turn map perfectly onto 4KB memory pages. Hence 4KB is the largest practical size for a hard drive sector at this time.
512B Emulation
With all of that said, making this kind of advancement practical requires a transition period. That transition will be handled through 512B emulation technology, which exposes Advanced Format drives to the drive controller and operating system as having 512B sectors when in reality they have 4K sectors, as shown below.
With 512B emulation there is a risk that a partition could be misaligned relative to the 4K physical sectors, unwittingly starting in the middle of such a sector (refer to the images below). As a result, the clusters of a file system on that partition would end up straddling 4K sectors, which causes performance problems. Specifically, in testing IDEMA found that random writes would be particularly impacted, because a Read-Modify-Write (RMW) would need to take place to update each sector rather than a straight write. Although this isn't mechanically or electronically harmful in any way, the performance hit compared to a straight write makes it undesirable.
512 byte sector emulation: how the drive is laid out (physical) and what the OS sees (logical)
Let's look at more examples of the unaligned drive and the possible performance degradation discussed earlier. In the following example, partitions created in a Windows XP environment are placed on a non-4K-aligned boundary; each resulting Read-Modify-Write (RMW) adds roughly 11 ms on a 5400 RPM hard drive.
By contrast, Windows Vista and Windows 7 place the first partition at sector 2048 by default. With this, almost all disk accesses are guaranteed to fall on 4K boundaries, and performance is similar to that of a native 512B drive.
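The alignment rule itself is simple arithmetic: with 512B logical sectors emulated on a 4K physical drive, a partition is aligned when its starting LBA is a multiple of 8. A small checker (the LBA values below are just the well-known XP and Vista/7 defaults mentioned above):

```python
SECTORS_PER_PHYSICAL = 4096 // 512   # 8 logical 512B sectors per 4K physical sector

def is_4k_aligned(start_lba: int) -> bool:
    """True if a partition starting at this 512B LBA sits on a 4K boundary."""
    return start_lba % SECTORS_PER_PHYSICAL == 0

def physical_sectors_touched(start_lba: int, length_sectors: int) -> int:
    """How many 4K physical sectors a write of `length_sectors` 512B sectors hits."""
    first = start_lba // SECTORS_PER_PHYSICAL
    last = (start_lba + length_sectors - 1) // SECTORS_PER_PHYSICAL
    return last - first + 1

print(is_4k_aligned(63))     # Windows XP default start: misaligned
print(is_4k_aligned(2048))   # Windows Vista/7 default start: aligned
# A 4KB cluster write (8 logical sectors) from a misaligned start straddles
# two physical sectors, forcing a read-modify-write on each:
print(physical_sectors_touched(63, 8))    # 2
print(physical_sectors_touched(2048, 8))  # 1
```

The misaligned case touching two physical sectors per cluster write is exactly the RMW penalty IDEMA measured.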
The following table shows which existing Windows operating systems are 4K aware.

Operating System        4K Aware
Windows XP              No
Windows Server 2003     No
Windows Vista           Yes
Windows 7               Yes
Type      Maximum Data Rate   Category      Introduction Year
USB 3.0   4.8 Gbps            Super Speed   2010
USB 2.0   480 Mbps            High Speed    2000
USB 1.1   12 Mbps             Full Speed    1998
USB 1.0   1.5 Mbps            Low Speed     1996
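To put the signaling rates in the table above into perspective, here is a rough transfer-time comparison. These are raw signaling rates, ignoring protocol overhead and line encoding, so real-world transfers are slower:

```python
RATES_BPS = {                 # raw signaling rates from the table above
    "USB 3.0": 4.8e9,
    "USB 2.0": 480e6,
    "USB 1.1": 12e6,
    "USB 1.0": 1.5e6,
}

def transfer_seconds(size_bytes: float, rate_bps: float) -> float:
    """Best-case seconds to move `size_bytes` at a raw bit rate."""
    return size_bytes * 8 / rate_bps

gigabyte = 1e9
for name, rate in RATES_BPS.items():
    print(f"{name}: {transfer_seconds(gigabyte, rate):,.1f} s per GB (theoretical)")
```

The ten-fold jump from each generation to the next is what makes USB 3.0's "up to 10x USB 2.0" marketing claim plausible at the signaling level.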
Smart Connect is a feature that periodically wakes the system from the Windows sleep state to refresh email or
social networking applications. When the system is equipped with specific wireless devices, it can detect the
presence of known networks while asleep, waking only when connectivity is available (this feature is called Net
Detect). When properly equipped with specific wireless devices, Smart Connect can also provide quick internet
connection readiness by keeping wireless devices active in a low-power mode during sleep (this feature is called
Quick Connect). Smart Connect may be combined with Rapid Start on some systems to help reduce power
consumption while still keeping email and other application data current.
Optimus technology is designed to maximize performance and user experience on the computer while minimizing the impact on battery life. It combines the graphics processing capability of the integrated Intel graphics processing unit (GPU) with the discrete NVIDIA GPU when running graphics-intensive applications such as 3-D games. The NVIDIA GPU is turned on only for preset applications, which extends battery life.
Discover computing assets on a network regardless of whether the computer is turned on or off: Intel AMT uses information stored in nonvolatile system memory to access the computer. The computer can even be accessed while it is powered off (also called out-of-band or OOB access).
Remotely repair systems even after operating system failures: In the event of a software or operating system failure, Intel AMT can be used to access the computer remotely for repair purposes. IT administrators can also detect computer system problems easily with the assistance of Intel AMT's out-of-band event logging and alerting.
Protect networks from incoming threats while easily keeping software and virus protection up to date across the network.
Solid State
Solid State Drive
The solid state drive (SSD) is a new offering from Dell that replaces a conventional
hard drive. The drive uses only nonvolatile (NV) memory to store data instead of the
spinning platters and moving heads of a conventional hard drive. The result is much
faster access times and data transfer, as well as more reliability.
The drive itself comes in a 2.5-inch and 1.8-inch form factor. The smaller drive is the same size
as the drive used in the Latitude D420 system, but it has a special carrier that allows it to fit
into other systems. In the carrier, it looks just like the ruggedized drive offered on the Latitude
D620 ATG system. The larger drive is the same form factor as standard notebook drives.
The 1.8-inch drive has a PATA interface but comes with an adapter that allows it to connect to a
SATA controller. The 2.5-inch drive uses a standard SATA connection as with other drives of this
size.
Here are some of the features of the SSD:
Faster: The SSD has much faster access times than a conventional hard drive. In some tests, the Windows XP operating system loaded almost twice as fast.
More reliable: Since the drive has no moving parts, it is more reliable than standard hard drives or other drives that require motion to work.
More durable: No moving parts also means less chance of damage from bumps or impacts.
The drive is available for most systems shipping in 2007 and may become available on other platforms as its popularity increases. The drive is presently available in only two sizes: 32 GB and 64 GB.
Hybrid
The hybrid hard drive is also referred to as simply a hybrid drive. The reason is that a small amount of cache memory is included with the drive that works much the same as Intel Turbo Memory, but without the separate card. The drive looks identical to standard drives and uses the same form factor.
All hard drives use a small amount of DRAM cache. The hybrid hard drive uses an additional 256 MB of NAND
memory for its nonvolatile (NV) cache. (You cannot replace just the cache without replacing the entire drive.) This
cache allows some data to be read and written to flash memory without the need for spinning up the drive. When the
drive is spinning and data is being transferred, it also improves read times for the drive.
The drive spins at 5400 RPM and comes in 80 GB, 120 GB and 160 GB sizes.
Faster access times: As with other implementations of cache memory, the goal is to provide faster access to data. Since NAND memory stores and retrieves data much faster than a hard drive, the speed increase is evident primarily when the computer is performing repetitive tasks rather than during long data reads/writes.
Quicker boots and resumes: One of the most noticeable places for speed increases is during the initial boot of the Windows operating system and when the system resumes from Hibernate.
Increased power savings: Using cache memory to store data before accessing the drive itself allows the drive to spin down more often. This provides a power savings that adds up over the course of several hours. The result is extended battery life.
Improved hard drive reliability: Moving parts have a higher rate of failure than solid-state parts. The less the drive is accessed, the longer the drive life.
Detection Mechanism    Explanation

NOTE:
Dell BIOS PBA with Dell Control Point does not fully integrate with Intel AT.

Missed check-ins: The IT administrator can define intervals at which the laptop must check in with the central server via the internet using the built-in timer. If a check-in is missed, the local timer will expire and the laptop will immediately go into theft mode, even if it is not connected to the internet.

Central server flag: The IT administrator can flag the laptop in the central server if the loss or theft of the PC is reported by an individual. The next time the laptop connects to the central server, the central server can send the poison pill via wired or wireless LAN and put the laptop into theft mode.

Encrypted SMS message: The central server can send an encrypted SMS text message to enter theft mode if the laptop is 3G-enabled. For this option, the laptop need not be connected to the internet, but it must be within range of a 3G network and its operating system must be functioning.
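The missed-check-in mechanism described above amounts to a local countdown timer. A minimal sketch of that logic follows; it is illustrative only — real Intel AT enforcement happens in hardware/firmware, and the interval and class names here are invented:

```python
import time

# Hypothetical policy: the laptop must check in with the server weekly.
CHECK_IN_INTERVAL_S = 7 * 24 * 3600

class TheftTimer:
    """Tracks the last successful server check-in against a local deadline."""
    def __init__(self, interval_s):
        self.interval_s = interval_s
        self.last_check_in = time.time()

    def record_check_in(self):
        """Called after each successful contact with the central server."""
        self.last_check_in = time.time()

    def in_theft_mode(self, now=None):
        # The timer runs entirely on the laptop, so it expires even if the
        # machine never connects to a network again.
        if now is None:
            now = time.time()
        return now - self.last_check_in > self.interval_s

timer = TheftTimer(CHECK_IN_INTERVAL_S)
print(timer.in_theft_mode())                                  # just checked in
print(timer.in_theft_mode(now=time.time() + 8 * 24 * 3600))   # a missed week
```

The key property, captured in the comment, is that the deadline is enforced locally: no network contact is required for the laptop to lock itself down.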
IT-Specified Responses
Intel AT provides flexible options for several automated loss/theft responses. Depending on the detection mechanism,
these responses can be activated locally and automatically, or remotely by IT:
Response
Explanation
Disable the PC: Block the boot process. Since the boot process is blocked through the laptop's hardware, this response works even if the boot order is changed, the hard drive is replaced or reformatted, or other boot devices (e.g. a secondary hard drive, removable drive, CD, DVD, or USB key) are tried.

Display a custom message: IT administrators can customize the message that is displayed after the laptop enters theft mode. For example, an IT administrator could define a message that says, "This laptop has been reported missing. Please call 1-800 Dell to return the system to Dell Inc."
Reactivation Method    Explanation

Reactivation code

PBA-based authentication process: Some PBA modules allow the IT administrator (or IT service provider) to define additional reactivation processes. These could include a security question, a set of challenge-response questions, or a combination of passwords, biometric authentication, or token authentication.
Hyper-threading
For each processor core that is physically present, the operating system addresses two virtual or logical cores and shares the workload between them when possible. The main function of hyper-threading is to decrease the number of dependent instructions in the pipeline.
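On a hyper-threading system, the core count the operating system reports reflects logical cores. A quick way to see that count from Python (whether it is actually 2x the physical count depends on the machine, which is why that relationship is only noted in a comment):

```python
import os

logical = os.cpu_count()   # logical (virtual) cores the OS schedules on
print(f"Logical cores visible to the OS: {logical}")
# On a hyper-threading CPU this is typically 2 x the physical core count;
# os.cpu_count() alone cannot distinguish physical from logical cores.
```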
Registered memory
Registered (also called buffered) memory modules have a register between the DRAM
modules and the system's memory controller. They place less electrical load on the memory
controller and allow single systems to remain stable with more memory modules than they would
have otherwise. Registered memory is often more expensive because of the lower volume and
the additional components, so it is usually found only in applications where the need for
scalability and stability outweighs the need for a low price (servers, for example). Although registered memory modules are usually also ECC modules, there are both registered non-ECC modules and non-registered ECC modules. Non-registered ECC memory is supported and used in motherboards that do not support the very large amounts of memory used by servers.
ECC memory
Error-correcting code memory (ECC memory) is a type of computer data storage that can
detect and correct the more common kinds of internal data corruption. ECC memory is used in
most computers where data corruption cannot be tolerated under any circumstances, such as for
scientific or financial computing.
ECC memory maintains a memory system immune to single-bit errors: the data read from each word is always the same as the data that was written to it, even if a single stored bit (or, in some cases, more) has been flipped to the wrong state. Some non-ECC memory with parity support allows errors to be detected but not corrected; otherwise errors are not detected at all.
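The single-bit correction that ECC memory performs can be illustrated with the classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits. Real ECC DIMMs use wider SECDED codes over 64-bit words, but the principle — a syndrome that pinpoints which bit flipped — is the same:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(code):
    """Detect and fix a single flipped bit; return the repaired codeword."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # re-check positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # re-check positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # re-check positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[4] ^= 1                     # flip one stored bit
repaired = hamming74_correct(corrupted)
print(repaired == word)               # True: the single-bit error was corrected
```

Flipping any one of the seven bits produces a unique non-zero syndrome, so the decoder can always locate and repair a single-bit error — exactly the guarantee the paragraph above describes.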
What is TPM?
The TPM, or Trusted Platform Module, is a security device that holds computer-generated keys for encryption. It is a hardware-based solution that helps prevent attacks by hackers looking to capture passwords and encryption keys to sensitive data.
TPM is a unique hardware device on the system board that can be used to handle various security
tasks on a personal computer system. The security features provided by the TPM are internally
supported by the following cryptographic capabilities of each TPM: hashing, random number
generation, asymmetric key generation, and asymmetric encryption/decryption. Each individual
TPM on each individual system has a unique signature initialized during the silicon
manufacturing process that further enhances its trust/security effectiveness. Each individual TPM
must have an Owner before it is useful as a security device. The process of taking ownership is
performed by the TPM customer and must involve physical presence at the particular system that
requires ownership. After this procedure is completed and the TPM has a unique owner, the trust
bond is complete. It is in this state that the TPM can effectively be utilized by TPM-aware
software for security purposes.
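Two of the cryptographic capabilities listed above — hashing and random number generation — can be demonstrated in software with Python's standard library. This is purely an illustration of the primitives, not an interface to an actual TPM, which performs these operations in dedicated hardware:

```python
import hashlib
import secrets

# Hashing: a fixed-length digest that changes completely if the input changes.
digest = hashlib.sha256(b"boot measurement data").hexdigest()
print(f"SHA-256 digest: {digest[:16]}...")

# Random number generation: cryptographically strong bytes, e.g. for a nonce.
nonce = secrets.token_bytes(20)
print(f"20-byte nonce:  {nonce.hex()[:16]}...")

# (Asymmetric key generation and encryption/decryption would need a
# third-party library such as `cryptography`, so they are omitted here.)
```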
What is UEFI?
UEFI is the successor to EFI; both are Extensible Firmware Interfaces (the U stands for Unified). The basic idea is to provide a data table with platform information plus boot-time and runtime service functions in a standard environment. This way operating systems can incorporate more generic, native calls that are consistent across all hardware platforms.
It is important to remember that the BIOS, or Basic Input Output System, hands over control to the operating system and executive software after bootstrapping has been performed to ensure basic system functionality. Historically, older operating systems like MS-DOS relied on the BIOS to carry out most input/output tasks; 32-bit operating systems like Linux and Windows began accessing the hardware directly instead.
As the standard BIOS took on more complexity (e.g. power management, hot swapping, thermal management), the limitation of its 16-bit, 1 MiB addressable space made it more and more difficult for applications and operating systems to rely on the BIOS consistently. Add in the fact that any card containing an option ROM also carries its own bootstrapping code, and the overlaps and vendor-specific coding made it more and more difficult for OS or executive code to behave universally across the different BIOS implementations in the industry.
The Extensible Firmware Interface (EFI) was created to replace the runtime interfaces of the legacy BIOS. The EFI platform was originally created for use with the Itanium processors produced by Intel. The platform was renamed UEFI upon the formation of a unified forum of PC industry stakeholders whose goal is to standardize the EFI platform across the industry and allow for the eventual replacement of all legacy BIOS platforms.
The new Dell Unified Extensible Firmware Interface (UEFI) versions were named 2.0 and 2.1. These versions operate in "long mode" on x64-based architectures. UEFI creates its own environment for using modern C conventions, so driver support is designed for standard implementation across all new UEFI-enabled systems. UEFI provides a clean interface between operating systems and platform firmware during the boot process, and acts as an architecture-independent mechanism for initializing add-in cards. Windows Server 2008 and Windows Vista SP1 introduce native UEFI 2.0 support on all 64-bit platforms. In Linux, both elilo and GRUB support EFI on x86-32, x86-64, and Itanium CPUs. BSD on x86 does not support UEFI at this time.
Unified Extensible Firmware Interface (UEFI) is an interface between PC operating systems and platform firmware. The UEFI Forum is a collaborative trade organization composed of key PC industry technological stakeholders with a direct interest in the EFI platform as an alternative to the legacy BIOS platform. The goal of these stakeholders is to develop and govern a standard specification that formally defines the parameters for UEFI. The UEFI Forum's long-term goal is to create a new and robust environment for seamlessly booting to an OS and running pre-boot applications, and eventually to replace the legacy BIOS interfaces with this new standard.
Product Overview
Enterprise organizations are adopting notebooks at a much greater rate than in the past, and notebook users are demanding more productivity from them. The major barrier to improved notebook productivity is that notebooks spend a large amount of time turned off and unconnected, given limited battery life, connectivity, and portability. The time it takes to boot to the operating system and log on to the enterprise network can be as long as 15 - 20 minutes. If you just need to check a meeting invite for the meeting location or quickly check your e-mail, the overhead of booting the computer and connecting to the network is sometimes just not worth it.
Dell Latitude ON solves this problem by providing an interface that boots in just a minute
or two. Dell Latitude ON provides wireless access to e-mail, calendars, contacts and web
browsing. Latitude ON is a chip on the system board which contains its own Linux based
operating system which can connect to either a WLAN (Wireless Local Area Network) or
WWAN (Wireless Wide Area Network) connection in just seconds. Since Latitude ON is a very basic operating system, there are limits to what can and cannot be done. One of the main limitations is that the external ports on the system (USB, VGA, and docking) do not function. There is also no way to store any information such as e-mail or web documents, since there is no storage available.
NOTE:
There are several different implementations of Dell Latitude ON. The Latitude E4300 and
E4200 systems have a separate card installed for Dell Latitude ON functionality. The
Latitude Z600 implements Dell Latitude ON as an integrated chip on the system board.
Therefore when replacing a Dell Latitude ON module, the module is replaced on the
Latitude E4300 and the E4200 but the system board is replaced on the Latitude Z600.
Dell Latitude ON key components:
Uses main LCD display, keyboard and touch pad of host system
Real-time instant access to e-mail, calendar, contacts, and the Internet. Users interact with these applications in a separate environment and do not have access to applications loaded in the main operating system or to data stored on the system hard drive.
Alert, review, respond using the full keyboard and display of the system.
NOTE:
Dell Latitude ON does not ship on all systems. Dell Latitude ON is an option on the
E4300 and E4200. The Z600 does not support the Dell Latitude ON for systems sold in
China. Verify Dell Latitude ON was shipped by checking DellServ. The module is listed separately for the E4300 and E4200. For the Z600, the system board BOM shows an SDIO board if Latitude ON is included.
There are two entries in the BIOS that must be configured for Dell Latitude ON to work correctly. The first entry
enables Latitude ON, the second setting enables Instant On Mode. Both options should be checked.
Instant On Mode allows Latitude ON to be instantly available by pressing the Latitude ON button when the system is
off or hibernated. Instant on mode uses a small amount of battery power to ensure instant availability. Without instant
on mode, Latitude ON takes up to a minute or longer to boot, but does not use any battery power when the system is
turned off. Enabling this option is recommended. However, if the customer is concerned about loss of battery power,
this option can be disabled. This option is disabled by default in the BIOS.
The two settings for Latitude ON can be found by booting to the BIOS (press <F2> while the Dell logo is on the
screen). Once the BIOS screen is shown, click System Configuration to expand it, then click Latitude ON.
NOTE:
If the Latitude ON entry does not exist in the BIOS, flash the system to the latest BIOS. You can find the latest
BIOS on support.dell.com.
Troubleshooting
When troubleshooting problems where Latitude ON does not boot or boots to Latitude ON Reader, always check this setting in the BIOS to ensure Latitude ON is enabled.
If the customer feels Latitude ON is slow to boot, ensure Instant On Mode is checked.
Here's how to read the notebook LED codes when a possible error occurs:
Storage LED    Power LED    Wireless LED    Fault Description
Blinking       Solid        Solid
Solid          Blinking     Solid
Blinking       Blinking     Blinking
Blinking       Blinking     Solid
Blinking       Blinking     Off
Blinking       Off          Blinking
Solid          Blinking     Blinking
Blinking       Solid        Blinking
Off            Blinking     Blinking
Off            Blinking     Off
USB Powershare
Greater systems reliability and availability, reducing corporate risk and real-time losses from downtime.
Lower hardware acquisition costs through increased utilization of the machines you already have.