
DISKS

Hardware Disks
There are four basic types of hardware disks: Serial Advanced Technology
Attachment (SATA), Small Computer System Interface (SCSI), Serial
Attached SCSI (SAS), and Solid State Drives (SSD).
SATA, SCSI, and SAS are mechanical Hard Disk Drives (HDDs). HDDs
consist of circular disks and a head that can read and write information to
the disk. Solid State Drives (SSDs) are based on semiconductors and
have no moving parts.

Hardware Disk Types


Serial Advanced Technology Attachment (SATA). SATA drives are
very common because of their low cost and large capacities. SATA is a
good choice for general-purpose needs and is common in the consumer
market. SATA drives are also frequently used for long-term archival needs
and in scenarios that do not require high-performance throughput.
Small Computer System Interface (SCSI). SCSI drives are almost
obsolete; most servers and storage devices on today's market do not offer
a SCSI storage option. The SCSI interface allowed up to 16 devices to be
connected to the same computer system, but it has been replaced by SAS,
which not only supports a much higher number of devices but also offers
considerably better performance.
Serial Attached SCSI (SAS). SAS drives combine good performance
and economical pricing. The best uses for SAS drives are high-performance
applications such as Microsoft SQL Server.
Solid State Drives (SSD). SSDs provide fast disk access, use less
power, and are less susceptible to mechanical failure than traditional
hard disks. SSDs are the highest-performing drives in use today, but they
are also significantly more expensive than HDDs. The best uses for SSDs
are high-performance, low-latency applications where drive performance
is critical, such as enterprise database and virtualization servers.

Note that even though we use the term disk when referring to
these different HDD technologies, the difference between them is not
actually based on the disk's mechanical characteristics, but rather on the
characteristics of the disk controller and its interface (as well as the
corresponding bus), via which the disk attaches to and communicates with
the computer system.

Monitoring Disk Performance

Disk Speed
Disks are categorized by capacity and speed. Disk speed is measured in
Input/Output Operations Per Second (IOPS) and varies considerably by
drive type.
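As a rough illustration of what an IOPS figure means, the Python sketch below counts how many small writes complete per second against a temporary file. This is a toy, single-threaded, buffered measurement (function name and parameters are ours), not a replacement for a dedicated storage benchmark:

```python
import os
import tempfile
import time

def rough_write_iops(block_size=4096, seconds=0.5):
    """Very rough IOPS estimate: count 4 KB writes completed
    in a fixed time window (buffered, so optimistic)."""
    ops = 0
    block = os.urandom(block_size)
    with tempfile.TemporaryFile() as f:
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            f.seek(0)
            f.write(block)
            f.flush()
            ops += 1
    return ops / seconds

print(f"~{rough_write_iops():.0f} write IOPS (buffered)")
```

Real tools issue unbuffered, concurrent I/O at controlled queue depths; this sketch only shows what the unit of measurement counts.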

Disk Performance Tools


There are three tools that can help you determine how your disks are
performing: Task Manager, Performance Monitor, and Resource
Monitor. These tools capture similar information but differ slightly.

Task Manager
Task Manager provides information about:

- Applications currently running on your system.
- Individual process information, such as how much memory and CPU they are using.
- Statistics about memory, processor, and network usage.

Resource Monitor
Resource Monitor provides detailed information about real-time
performance on Windows servers. You can use Resource Monitor to view
real-time data on CPU, memory, network, and disk performance. Resource
Monitor can help you identify and resolve conflicts and bottlenecks.

Performance Monitor
Windows Performance Monitor examines how programs you run affect
your computer's performance, both in real time and by collecting log data
for later analysis. Performance Monitor uses:

- Performance counters. Measurements of a system state or activity.
- Event trace data. Collected from trace providers, which are components of the operating system or of individual applications that report actions or events. Output from multiple trace providers can be combined into a trace session.
- Configuration information. Collected from key values in the Windows registry. Windows Performance Monitor can record the value of a registry key at a specified time or interval as part of a log file.

All of this information can be combined into a Data Collector Set.
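The data-collector idea can be sketched in a few lines: a collector samples a counter at a fixed interval and appends timestamped values to a log. This is a conceptual sketch only (any zero-argument callable stands in for a Windows performance counter; the names are ours, not a Performance Monitor API):

```python
import time

def collect(counter, samples=3, interval=0.1):
    """Sample a 'performance counter' (any zero-argument callable)
    at a fixed interval, the way a data collector set logs values."""
    log = []
    for _ in range(samples):
        log.append((time.monotonic(), counter()))
        time.sleep(interval)
    return log

# Example: treat this process's CPU time as a counter.
log = collect(time.process_time)
print(len(log))  # 3
```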

DISK STRUCTURE

Partition Table Formats


No matter what type of physical disk drive you select, the storage must be
initialized with a partition table.
The partition table defines the method an operating system uses to
organize and size partitions and volumes on a disk. For computers
running Windows operating systems, you can use either the Master Boot
Record (MBR) format or the globally unique identifier (GUID) Partition
Table (GPT) format.
This table compares the older MBR format with the newer GPT format.

Feature                                      MBR      GPT
Maximum number of partitions per disk        4        128
Maximum partition size                       2 TB     18 EB
Maximum volume size (NTFS)                   2 TB     256 TB
Data can be written across multiple disks    No       Yes

If your hard disk is larger than 2 TB and you want to be able to access all of it, you must use
the GPT partition table format. You can convert between the two formats; however, the
conversion will result in all data on the disk being lost.
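The limits in the table above can be captured in a small helper. This is a conceptual sketch (the function and thresholds are ours, using the simplified 2 TB and 4-partition MBR limits from the table):

```python
TB = 10**12

def partition_table_for(disk_bytes, partitions_needed=1):
    """Pick a partition table format under the limits above:
    MBR tops out at 4 partitions and 2 TB; GPT goes far beyond both."""
    if disk_bytes > 2 * TB or partitions_needed > 4:
        return "GPT"
    return "MBR"  # either format works here; MBR is the legacy default

print(partition_table_for(3 * TB))     # GPT
print(partition_table_for(1 * TB, 2))  # MBR
```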

File Systems

A file system is a required part of the operating system that determines
how files are named, stored, and organized on a volume. A file system
manages files and folders, and the information needed to locate and
access these items by local and remote users. When you configure your
disks in Windows Server 2012, you can choose
between FAT, FAT32, NTFS, and ReFS (next topic).

File Allocation Table (FAT)


FAT is the simplest of the file systems that Windows operating systems
support. FAT is characterized by a table that resides at the very top of the
volume. To increase resiliency, two copies of the file allocation table are
maintained in case one becomes damaged. There is no organization to the
FAT directory structure; files are simply given the first open location on the
drive.
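The "first open location" allocation policy can be modeled in a few lines. This is a toy model of a FAT-style table, not the on-disk format (real FAT uses reserved marker values for free and end-of-chain entries, plus directory entries):

```python
def fat_allocate(fat, clusters_needed):
    """FAT-style allocation: take the first free entries in table order,
    chaining each cluster to the next. 0 = free, -1 = end of chain
    (toy markers; real FAT reserves special values for these)."""
    free = [i for i, entry in enumerate(fat) if entry == 0]
    if len(free) < clusters_needed:
        raise OSError("disk full")
    chain = free[:clusters_needed]
    for cur, nxt in zip(chain, chain[1:]):
        fat[cur] = nxt          # each cluster points at the next one
    fat[chain[-1]] = -1         # mark the last cluster as end of chain
    return chain

fat = [0] * 8                   # an empty 8-cluster "disk"
print(fat_allocate(fat, 3))     # [0, 1, 2]
```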

New Technology File System (NTFS)


NTFS offers several improvements over FAT, such as providing better
support for metadata and using advanced data structures to improve
performance, reliability, and disk space utilization. NTFS also has
additional extensions such as security access control lists (ACLs),
auditing, file-system journaling, and encryption. By supporting ACLs and
encryption, NTFS also provides a significantly higher level of security than
FAT. NTFS is required for a number of Windows Server 2012 roles, role
services, and features such as Active Directory Domain Services (AD DS)
and Volume Shadow Copy Service (VSS) [1].
Shadow Copy (also known as Volume Snapshot Service, Volume Shadow Copy
Service, or VSS) is a technology included in Microsoft Windows that allows taking manual or
automatic backup copies or snapshots of computer files or volumes, even when they are in
use.

Resilient File System (ReFS)


ReFS is a new file system available in Windows Server 2012
that improves on the NTFS file system. It is designed to keep pace with
larger file sizes and future virtualization requirements, as well as to
provide better data integrity.

ReFS Advantages

- Metadata integrity with checksums.
- Expanded protection against data corruption.
- Maximized reliability, especially during a loss of power (while NTFS has been known to experience corruption in similar circumstances).
- Large volume, file, and directory sizes.
- Storage pooling and virtualization, which makes creating and managing file systems easier.
- Redundancy for fault tolerance.
- Disk scrubbing for protection against latent disk errors.
- Resiliency to corruptions, with recovery for maximum volume availability.
- Shared storage pools across machines for additional failure tolerance and load balancing.

Compatibility with NTFS


ReFS uses a subset of NTFS features, so it maintains backward
compatibility with NTFS. Therefore, programs that run on Windows Server
2012 can access files on ReFS just as they would on NTFS. However, a
ReFS-formatted drive is not recognized when placed in computers running
Windows Server operating systems released prior to Windows Server
2012. You can use ReFS drives with Windows 8.1, but not with Windows 8.

Securing Files and Folders


File and folder permissions specify which users, groups, and
computers can access and interact with files and folders on an NTFS or
ReFS volume. These permissions are combined to create the access
control list (ACL). An ACL is an ordered list of access control entries
(ACEs) that define the protections that apply to an object and its
properties. Each ACE identifies a security principal and specifies a set of
access rights allowed, denied, or audited for that security principal.
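A simplified model of how an ordered ACL is evaluated, with deny ACEs taking effect before allows, might look like the sketch below. It illustrates the idea only; real NTFS access checks additionally involve SIDs, inheritance, and canonical ACE ordering:

```python
from typing import NamedTuple

class ACE(NamedTuple):
    principal: str        # user or group the entry applies to
    rights: frozenset     # e.g. {"read", "write"}
    kind: str             # "allow" or "deny"

def access_check(acl, principals, requested):
    """Walk the ordered ACL: a matching deny on any requested right
    refuses access; otherwise allows accumulate until the request
    is fully granted (simplified model)."""
    granted = set()
    for ace in acl:
        if ace.principal not in principals:
            continue
        if ace.kind == "deny" and requested & ace.rights:
            return False
        if ace.kind == "allow":
            granted |= ace.rights
        if requested <= granted:
            return True
    return requested <= granted

acl = [
    ACE("Interns", frozenset({"write"}), "deny"),   # deny listed first
    ACE("Staff", frozenset({"read", "write"}), "allow"),
]
print(access_check(acl, {"Staff"}, {"read"}))              # True
print(access_check(acl, {"Staff", "Interns"}, {"write"}))  # False
```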

Standard vs. Advanced permissions

- Standard. Standard permissions are the most commonly used permissions. They can be viewed and accessed through the Properties of an object. For example, right-click a file or folder, select Properties, and then go to the Security tab.

- Advanced. Advanced permissions provide a finer degree of control for assigning access to files and folders. However, advanced permissions are more complex to manage than standard permissions. In this course we will cover the standard NTFS file and folder permissions.

Standard NTFS file and folder permissions


You can choose whether to allow or deny each of the permissions.

File Permission          Description
Full Control             Gives complete control of the file/folder and control of permissions.
Modify                   Gives read, write, and delete access.
Read and Execute         Allows a file to be read; programs can be started. Allows folder content to be seen; programs can be started.
List Folder Contents     Allows users to view the contents of a folder.
(Folders Only)
Read                     Gives read-only access.
Write                    Allows file content to be changed. Allows folder content to be changed.
Special Permissions      Allows custom permissions configuration, defined by advanced permissions.

Encrypting File System (EFS)


EFS is a feature of Windows that you can use to store information on your
hard disk in an encrypted format. Encryption is the strongest protection
that Windows provides to help you keep your data secure.

EFS Key Features

- Basic encryption implementation is straightforward; just select a check box in the file or folder's properties to turn it on.
- You have control over who can access the files.
- Files are encrypted when you close them, but are automatically ready to use when you open them.
- If you change your mind about encrypting a file, clear the check box in the file's properties.
- ReFS does not support EFS.
- There are other limitations to its use; for example, EFS cannot encrypt the registry or an Active Directory database.

Private Keys and Data Recovery Agents


When a user encrypts a file or a folder, a private key is stored in the
user's profile directory on the machine where the encrypted content
resides. For that user to decrypt an encrypted file, this private key must be
available to them.
So what happens when the user and/or the private key is not available?
For example, the user leaves the company or goes on vacation, or the
certificate is deleted? In these situations, you need to have a plan for
decrypting the files. You effectively have two options:
1. Back up the profile where the user's private key is located. This can be
difficult and time consuming if there are a lot of users.
2. Implement a data recovery agent.
A recovery agent is an individual who is authorized to decrypt all
EFS-encrypted files. The default recovery agent is the Domain Administrator.
In addition, you can delegate the recovery agent role to any user.

It is very important to back up the key that is part of the data recovery agent profile
information. The certificate can be used to restore a user's access if the private key is
forgotten or lost.
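The key-escrow idea behind a data recovery agent can be modeled in miniature: a single file encryption key (FEK) is wrapped separately for the user and for the DRA, so either party's key can recover it. The XOR "wrapping" below stands in for real public-key cryptography and is a toy only; the names are ours:

```python
def wrap_key(fek, key):
    """XOR 'wrapping' stands in for real public-key encryption (toy only)."""
    return bytes(a ^ b for a, b in zip(fek, key))

def protect_file(fek, user_key, dra_key):
    """EFS-style key escrow: one FEK, wrapped once per party
    who may decrypt -- the user and the data recovery agent."""
    return {"user": wrap_key(fek, user_key), "dra": wrap_key(fek, dra_key)}

def recover_fek(wrapped, party, key):
    return wrap_key(wrapped[party], key)  # XOR is its own inverse

fek, user_key, dra_key = b"filekey!", b"userkey!", b"dra-key!"
wrapped = protect_file(fek, user_key, dra_key)
# Even if the user's key is lost, the DRA can still recover the FEK:
print(recover_fek(wrapped, "dra", dra_key) == fek)  # True
```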

Recovery Agent Steps

There are many potential ways to implement both the process and
certificate elements of an EFS implementation. One such method would be
to have a Public Key Infrastructure (PKI), which would allow for more
robust management, issuance, and revocation of certificates. This would
also provide more manageability at scale when large numbers of people
and certificates are involved, potentially spread geographically. This isn't
essential, though; for small-scale scenarios you can use self-signed
certificates, in both domain-joined and non-domain-joined environments.
1. Designate a recovery agent. Decide who will be designated as
a recovery agent. This is someone who will be entrusted with all
encrypted data, so choose carefully. You could use the default domain
administrator account, but there is no redundancy in this method.
So, select a group of trusted individuals within your IT department.

2. Use Group Policy to set up a recovery agent. In domain-based
implementations you can use domain-based Group Policy to create an
additional data recovery agent. You will need to import either a
pre-generated key with an associated user, or a user account that already
has a File Recovery key associated with it. Don't forget to update the
domain with the new Group Policy after you change it.

3. Import the File Recovery key into the Data Recovery Agent's
(DRA) local certificate store. Every user has a local certificate store on
each machine they log on to; it can be accessed using the
Microsoft Management Console. To be able to decrypt files, the
DRA must have the File Recovery key in their local certificate store.
Typically, this is done by exporting the recovery key. It is also important
to back up this key and to tightly control access to it.

4. Test and back up. Test to ensure the recovery agent knows how to
decrypt files. This is extremely important and often overlooked; make
sure your processes work as you envision. Ensure a backup plan is in
place for all users' profile certificates.

In the absence of a full Public Key Infrastructure (PKI) such as Active
Directory Certificate Services (AD CS) to issue and manage certificates for
EFS, you can use the command-line tool cipher.exe.

The command cipher /r:<filename> will generate a new self-signed File Recovery key,
which can then be imported into Group Policy for use. By default, the key and certificate
generated will be associated with the user who is signed in when the command is run. It is
also possible to add users to keys and certificates using this tool. From the command line,
type cipher.exe /? to see a full list of available options.

EFS Best Practices (Users)

Teach users to export their certificates and private keys to removable media and store
the media securely when not in use.

Teach users to encrypt folders instead of individual files.

EFS Best Practices (Administrator)

Designate more than one recovery agent, and ensure the accounts are secured.

Implement a recovery agent archive program to ensure obsolete recovery keys are
stored.

Load balance your servers when there are many clients using EFS. EFS does
introduce some CPU overhead every time a user encrypts and decrypts a file.

VOLUMES
Volume Types

- System volume. The system volume contains the files required to start Windows. By default, its size is set to 100 MB on Windows 7 computers and 350 MB starting with Windows 8. This volume holds the Boot Configuration Data (BCD) files and the Windows Recovery Environment (Windows RE). By default, this volume is shown as the first partition on Disk 0, is labeled System Reserved, and does not have an assigned drive letter.

- Boot volume. The boot volume contains the Windows operating system files. In addition, if the boot volume is the only volume available for users, it will also store user-generated data.

- Data volume. A data volume is any volume in Windows that is not a boot volume or a system volume. A data volume stores data, but not operating system files or boot files.

UEFI Computers
On UEFI (Unified Extensible Firmware Interface) based computers, you
will likely also find a Microsoft Reserved Partition, which stores
operating-system-managed components that used to be kept in hidden
disk sectors on legacy computers with Basic Input/Output System
(BIOS) firmware (for example, the Logical Disk Manager database, used to
store disk metadata).

Extend and Shrink


Once a volume is created, you can extend or shrink it.

Extend a Volume

Sometimes after creating a disk volume you find out more space is
needed. For example, you create a 4 GB data drive for the Human
Resources department, but more people are hired and more disk space is
needed. If you have unallocated disk space, you can extend the existing
volume into the unused space.
Your extend options depend on the volume type, which, in turn, depends
on the disk type. Basic volumes can be extended only if the unallocated
space is contiguous. If the space is non-contiguous, the disk must be
dynamic. Dynamic volumes are often referred to as simple volumes.
When you extend a simple volume by using unallocated space on one
or more other disks, you convert the volume into a spanned
volume. Spanned volumes link unallocated disk space on multiple disks
together.
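The contiguity rule can be sketched as a small decision helper. This is a simplified model with made-up offsets and extent maps, not the Disk Management API:

```python
def extend_plan(volume_end, size_needed, free_extents):
    """Decide how a volume can grow: a basic volume needs contiguous
    unallocated space right after it; otherwise the disk must be dynamic
    and the volume becomes spanned. free_extents maps start offset to
    extent length (toy model)."""
    contiguous = free_extents.get(volume_end, 0)
    if contiguous >= size_needed:
        return "extend in place (basic or dynamic)"
    if sum(free_extents.values()) >= size_needed:
        return "convert to spanned volume (dynamic disk required)"
    return "not enough unallocated space"

# Volume ends at offset 100; 50 units of free space sit right behind it.
print(extend_plan(100, 40, {100: 50}))           # extend in place (basic or dynamic)
print(extend_plan(100, 40, {200: 30, 300: 30}))  # convert to spanned volume (dynamic disk required)
```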

Shrink a Volume
Shrinking a volume is the opposite of extending a volume. Shrinking is
used to deallocate unused volume space. For example, the HR
department is only using 10% of its allocated space, and you don't think
this will change, so you want to take the unused storage and allocate it to
a different volume.

Note that the terms simple and basic are frequently used interchangeably
when talking about volumes, regardless of the underlying disk type.

There are some limitations on shrinking volumes:

- Only NTFS volumes have the Shrink Volume option; ReFS does not support volume shrinking.
- You cannot shrink a volume if it has bad clusters.
- You cannot shrink a volume past an immovable file, such as a page file.
- Before shrinking a volume, always defragment the drive.
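The immovable-file limitation is why defragmenting first matters: the volume can only shrink down to just past the last immovable cluster. A toy model (the cluster map and labels are ours):

```python
def max_shrink(volume_size, cluster_map):
    """How far a volume can shrink: down to just past the last immovable
    cluster. cluster_map maps offset -> 'free', 'movable', or 'immovable'
    (toy model of the shrink query)."""
    immovable = [off for off, kind in cluster_map.items() if kind == "immovable"]
    floor = max(immovable) + 1 if immovable else 0
    return volume_size - floor

# A page file pinned near offset 70 limits shrinking a 100-cluster volume.
print(max_shrink(100, {10: "movable", 70: "immovable", 90: "free"}))  # 29
```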

Redundant Array of Independent Disks (RAID)

RAID
Redundant Array of Independent Disks (RAID) is a technology to provide
high reliability and (potentially) high performance storage systems. RAID
combines multiple disks into a single logical unit called a RAID array.
Depending on the configuration, a RAID array can withstand the failure of
one or more of the physical hard disks contained in the array, and/or
provide higher performance than is available by using a single disk.

Hardware RAID vs. Software RAID

- Hardware RAID is implemented by installing a RAID controller in the server, and then configuring RAID by using the RAID controller configuration tool. With this implementation, the RAID configuration is hidden from the operating system. The RAID arrays are exposed to the operating system as individual disks. The only configuration that you have to perform in the operating system is to create volumes on the disks.

- Software RAID is implemented by exposing all the disks that are available on the server to the operating system, and then configuring RAID from the operating system. Windows Server 2012 supports software RAID, and you can use Disk Management to configure several different levels of RAID. Given the significant changes and functionality that are now available in Windows Server 2012 with Storage Spaces, software RAID has become a secondary choice.

RAID Performance
RAID subsystems can provide potentially better performance than
individual disks by distributing disk reads and writes across multiple disks.
For example, when implementing disk striping, the server can read
information from all hard disks in the stripe set simultaneously. When
combined with multiple disk controllers, this can provide significant
improvements in storage throughput.

Although RAID can provide better tolerance for disk failure, you should not use
RAID to replace traditional backup. If all the disks were to fail, then you would
still have to resort to performing a restore.

As a matter of fact, dynamic disks, upon which software RAID in Windows
operating systems is based, are being superseded by Storage Spaces.
Storage Spaces will still utilize RAID scenarios that involve mirroring of
boot volumes, though. For Windows Server 2012 and later, you should
consider using Storage Spaces in lieu of software RAID in Disk
Management where possible.

Fault Tolerance
RAID enables fault tolerance by using additional disks to ensure that the
disk subsystem can continue to function even if one or more disks in the
subsystem fail.

Mirroring vs. Parity

- Disk mirroring. With disk mirroring, all of the information that is written to one disk is also written to another disk. If one of the disks fails, the other disk is still available.

- Parity information. Parity information is used in the event of a disk failure to calculate the information that was stored on a disk. If you use this option, the server or RAID controller calculates the parity information for each block of data that is written to the disks. The parity information is then stored on another disk or across multiple disks. If one of the disks in the RAID array fails, the server can use the data that is still available on the functional disks, along with the parity information, to recreate the data that was stored on the failed disk.
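Parity in a RAID 5-style stripe is simply the XOR of the data blocks, which is why any single lost block can be recomputed from the survivors. A minimal sketch:

```python
def parity_block(blocks):
    """RAID 5-style parity: the XOR of all data blocks in one stripe."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving_blocks, parity):
    """Recreate the block from a failed disk: XOR the survivors with parity."""
    return parity_block(surviving_blocks + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # one block per data disk
p = parity_block(stripe)
# Disk holding b"BBBB" fails; the remaining blocks plus parity recover it:
print(rebuild([stripe[0], stripe[2]], p) == b"BBBB")  # True
```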

RAID Levels
The most common fault-tolerant RAID levels are RAID 1 (also known
as mirroring), RAID 5 (also known as striped set with distributed
parity), and RAID 1+0 (also known as mirrored set in a striped set).

Level: RAID 0
Description: Striped set without parity or mirroring. Data is written sequentially to each disk.
Performance: High read and write performance.
Space utilization: All space on the disks is available.
Redundancy: A single disk failure results in the loss of all data.
Comments: Use only in situations where you require high performance and can tolerate data loss.

Level: RAID 1
Description: Mirrored set without parity or striping. Data is written to two disks simultaneously.
Performance: Good performance.
Space utilization: Can only use the amount of space that is available on the smallest disk.
Redundancy: Can tolerate a single disk failure.
Comments: Frequently used for system and boot volumes with hardware RAID.

Level: RAID 5
Description: Striped set with distributed parity. Data is written in blocks to each disk with parity spread across all disks.
Performance: Good read performance, poor write performance.
Space utilization: Uses the equivalent of one disk for parity.
Redundancy: Can tolerate a single disk failure.
Comments: Commonly used for data storage where write performance is not critical, but maximizing disk usage is important.

Level: RAID 1+0 (or 10)
Description: Mirrored set in a striped set. Several drives are mirrored to a second set of drives, and each mirror is striped.
Performance: Very good read and write performance.
Space utilization: Only half the disk space is available, due to mirroring.
Redundancy: Can tolerate the failure of two or more disks, as long as both disks are not part of the same mirror.
Comments: Frequently used in scenarios where performance and redundancy are critical, and the cost of the required additional disks is acceptable.
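The RAID 1+0 redundancy rule above (the array survives as long as no mirror pair loses both members) can be checked mechanically. A sketch with hypothetical disk names:

```python
def survives(mirror_pairs, failed):
    """RAID 1+0 keeps working as long as no mirror pair has lost
    both of its member disks."""
    return all(not set(pair) <= set(failed) for pair in mirror_pairs)

pairs = [("d0", "d1"), ("d2", "d3")]       # two mirrors, striped together
print(survives(pairs, {"d0", "d2"}))       # True  (one disk per mirror failed)
print(survives(pairs, {"d0", "d1"}))       # False (a whole mirror was lost)
```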

Network File System (NFS)


NFS is an open-standard-based file system protocol that allows access to a
file system over a network, making remote shares appear the same way
as local storage. Network-accessible NFS shares are referred to as exports.

NFS Components

- Server for NFS. This component allows a Windows-based server to share folders over NFS. The folders can be accessed by any compatible NFS client, regardless of which operating system the client is running.
- Client for NFS. This component allows a Windows-based client to access NFS exports on an NFS server, regardless of which platform the server runs.

NFS Scenarios

- VMware virtual machine storage. In this scenario, disk files for virtual machines running on VMware hosts reside on NFS exports. You can use Server for NFS to host the disk files on a Windows Server 2012 R2 file server.
- Multiple operating system environment. In this scenario, your organization uses a variety of operating systems, including Windows, Linux, and Mac. The Windows file server can use Server for NFS and the built-in Windows sharing capabilities to ensure all of the operating systems can access shared data.
- Merger or acquisition. In this scenario, two companies are merging. Each company has a different IT infrastructure. Users from one company use Windows 8.1 client computers and want to access data hosted on the other company's Linux and NFS-based file servers. You can deploy Client for NFS to the client computers to enable this functionality.

NFS Best Practices

- Use the latest version of NFS servers and clients. Currently, NFS version 4.1 is the latest version and is supported on Windows Server 2012 and later and on Windows 8 and later. By using the latest version of server and client operating systems, you can take advantage of the latest performance and security improvements, such as client/server negotiation and improved support for clustered servers.
- Enable all available security enhancements. Since NFS version 3.0, NFS has offered Kerberos security options to strengthen NFS communication. The following options should be used when possible:
  - Kerberos v5 authentication protocol. This is the recommended authentication protocol to maintain the highest authentication security.
  - Kerberos v5 authentication and integrity. This adds integrity checking by using checksums to ensure that data has not been altered.
  - Kerberos v5 authentication and privacy. This adds encryption to the authentication traffic.
- Do not allow anonymous access. While anonymous access is an option for NFS shares, you should not use it, because it reduces the security of your file sharing environment.

The main advantage of NFS is that it doesn't matter what operating system the
server or client is using. NFS is an open standard that allows sharing between
different platforms.

Implementing NFS

NFS Steps
Be sure to distinguish between the Server steps and the Client steps.

1. Install the NFS role service. The file server that will be hosting the data needs the Server for NFS role service. This is part of File and Storage Services and provides the ability to export NFS shares.

2. Configure the NFS role service. Select your NFS sharing profile, then specify access host information, authentication methods, and permissions. There are two NFS share profiles:

   - NFS Share - Quick. This is the fastest way to create an NFS share, but it does not have some of the customizable share options available with the Advanced profile. However, you can manually configure these advanced options after the share has been created.
   - NFS Share - Advanced. This is the most customizable way to create an NFS share, including the ability to set folder owners for access-denied assistance, configure default classification of data, and enable quotas. To create an Advanced profile, the File Server Resource Manager role service must be installed on the file server.

3. Install the NFS client. Install the Client for NFS on any computer that will need access to the NFS share. Most UNIX and Linux computers have a built-in NFS client.

4. Access the drives. This can be as simple as mounting the drive directly. You could also incorporate the share into your iSCSI storage and Storage Spaces implementations.

Server Message Block (SMB)


The SMB protocol is a network file sharing protocol that allows
applications on a computer to read from and write to files and to request
services from server programs on a computer network. Using the SMB
protocol, an application (or the user of an application) can access files or
other resources hosted by a remote server. This allows applications to
read, create, and update files on the remote server. SMB can
communicate with any server program that is set up to receive an SMB
client request.

SMB Scenarios

- File storage for virtualization (Hyper-V over SMB). Hyper-V can store virtual machine files, such as configuration files, virtual hard disk (VHD) files, and snapshots, in file shares over the SMB 3.x protocol. This can be used for both stand-alone file servers and clustered file servers that provide storage for Hyper-V clusters.
- Microsoft SQL Server over SMB. SQL Server can store user database files on SMB file shares. This is supported with SQL Server 2008 R2 for stand-alone SQL servers, and for both stand-alone and clustered SQL Server installations starting with SQL Server 2012.
- Traditional storage for end-user data. The SMB 3.x protocol provides enhancements to Information Worker (or client) workloads. These enhancements include reducing the application latencies experienced by branch office users when accessing data over wide area networks (WANs) and protecting data from eavesdropping attacks.

Eavesdropping attacks are sometimes known as man-in-the-middle
attacks. An eavesdropping attack occurs when a malicious person captures
network packets being sent and received by workstations connected to
the network.

SMB Features
The Windows 8.1 operating system and Windows Server 2012 R2 use SMB
3.02. Here are some of the most important features.

- SMB Transparent Failover for clustered file shares. The SMB protocol has the built-in ability to handle failure of a node in a cluster hosting SMB file shares, so that the client and the remaining cluster nodes can coordinate a transparent move that allows continued access to resources with only a minor I/O delay. There is no failure for applications.
- SMB Scale-Out. The SMB Scale-Out feature allows you to provide simultaneous access to the same share through multiple cluster nodes by using Cluster Shared Volumes (CSVs). As a result, you can load balance SMB traffic across multiple nodes of a cluster.
- SMB Direct (SMB over RDMA). Formerly seen only in high-performance computing scenarios, SMB Direct is now available in Windows Server 2012 and newer operating systems. SMB Direct allows Remote Direct Memory Access enabled (RDMA-enabled) network adapters to perform file transfers by using adapter capabilities, with very limited system CPU overhead.
- SMB Multichannel. SMB Multichannel is enabled automatically. It allows SMB to automatically detect and configure multiple paths between individual clients and the server. This functionality automatically enables bandwidth aggregation and fault tolerance. For example, if a client and the server have multiple network adapters connected to separate networks, SMB will be able to utilize both network paths, combining the effective bandwidth and facilitating failover if one of them becomes unavailable.
- SMB Encryption. SMB Encryption allows for encryption without the need for Internet Protocol security (IPsec). You can configure it per share or at the server level. Note that once encryption is enabled, older SMB clients will not be able to connect to encrypted shares or servers.
- VSS for SMB File Shares. Volume Shadow Copy Service (VSS) is enhanced to allow snapshots at the remote share level. Remote file shares act as a provider and integrate with a backup infrastructure.
- SQL Server over SMB. You can store both stand-alone and clustered Microsoft SQL Server databases on SMB 3.x shares, which can allow infrastructure consolidation.

Different versions of Windows client and server support different versions
of SMB. So if you have a mixed client base running Windows 10, Windows
8, and Windows 7, you may have to run multiple versions of SMB shares to
support those different client versions. For example, Windows Server 2008
R2 and Windows 7 support up to SMB 2.0, while Windows Server 2012 and
Windows 8 support SMB 3.0.
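The bandwidth-aggregation and failover behavior of SMB Multichannel, described above, can be sketched conceptually. The NIC names and speeds below are made up; this models the idea, not the SMB implementation:

```python
def multichannel(paths, failed=frozenset()):
    """SMB Multichannel in miniature: aggregate bandwidth over every
    usable path and keep going when one fails.
    paths maps NIC name -> bandwidth in Gbit/s (toy model)."""
    usable = {nic: gbps for nic, gbps in paths.items() if nic not in failed}
    if not usable:
        raise ConnectionError("no remaining path to the server")
    return sum(usable.values())

nics = {"nic1": 10, "nic2": 10}
print(multichannel(nics))            # 20 (bandwidth aggregation)
print(multichannel(nics, {"nic1"}))  # 10 (transparent failover)
```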

Implementing SMB

1.

Select an SMB profile. If you have the File and Storage Service server role
installed, then you are ready to use SMB. You can create the SMB share in Server
Manager or with Windows PowerShell. There are three SMB profiles to choose from:

SMB Share Quick. This is the fastest method of sharing a folder on


a network. With this method, you can select a volume, or enter a custom path for the
shared folder location.

SMB Share Advanced. This profile offers the same configuration
options as the quick profile, plus additional options such as folder owners, default
data classification, and quotas. To create an advanced profile, the File Server
Resource Manager role service must be installed.

SMB Share Applications. This is a specialized profile that has
appropriate settings for Hyper-V, databases, and other server applications.

2. Select share settings. The Quick and Advanced profiles have some additional
configuration options.

Access-based enumeration. Access-based enumeration displays only the
files and folders a user has permissions to access. If a user does not have Read (or
equivalent) permissions for a folder, Windows hides the folder from the user's view.

Allow caching of share. This makes the contents of the share available to
offline users.

Encrypt data access. When enabled, remote file access to the share will be
encrypted. This secures the data against unauthorized viewing while the data is transferred
to and from the server.
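These settings can also be applied to an existing share with PowerShell. A sketch, where "Data" is a placeholder share name:

```powershell
# Enable access-based enumeration and restrict offline caching
# to files users explicitly mark for offline use.
Set-SmbShare -Name "Data" -FolderEnumerationMode AccessBased -CachingMode Manual
```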

3. Configure the permissions. The last task to perform may be the most
important. You need to set permissions that determine who can access the share
and what privileges they have when they access it. For example, will everyone
have full control over the share?
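The whole procedure can be sketched as a single PowerShell command; the path and the account names below are illustrative placeholders, not values from your environment:

```powershell
# Create a share and grant share permissions in one step.
New-SmbShare -Name "Projects" -Path "D:\Shares\Projects" `
    -FullAccess "CONTOSO\FileAdmins" `
    -ChangeAccess "CONTOSO\ProjectUsers" `
    -ReadAccess "Everyone"
```

Remember that NTFS permissions on the folder itself still apply; the effective access is the more restrictive of the two.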

BitLocker

BitLocker Features

BitLocker Drive Encryption provides full volume encryption and startup
environment protection.

Protect company data. BitLocker encrypts all data stored on the
Windows operating system volume (and configured data volumes). This includes
user files; Windows operating system, hibernation, and paging files; applications;
and data used by applications. Encryption minimizes the risk when computers
are lost or stolen.

Specify what data to protect. Whether it is a user's laptop or a
database server, BitLocker can encrypt the entire volume or only the used parts
of the volume. BitLocker can be used in combination with file system permissions
to further define who can access the data.

Secure Hyper-V environments. When you use BitLocker on
Hyper-V host servers, all of the virtualization data, including virtual hard disks,
configuration files, and snapshots, is encrypted.

Encrypt the volume containing the Active Directory
database. One situation that administrators have to consider is a malicious
person walking out of a data center or server room with the hard drive of a
domain controller. When you encrypt the volume containing the Active Directory
database, the risk of somebody gaining access to company information is
substantially reduced.

Secure the boot process. BitLocker protects the integrity of the
Windows startup process. BitLocker verifies that the files required to boot the
operating system have not been tampered with or modified. If the verification
finds files that have been tampered with, as a rootkit or a boot sector virus
might do, then Windows does not start.

Removable Drives. BitLocker To Go is used on removable hard
drives. In situations where security is not a high priority, you can configure
BitLocker To Go so that a BitLocker-protected removable hard drive is
automatically unlocked and ready for use when connected to a computer. In
contrast, in highly secure environments, you can configure BitLocker To Go so
that users are prompted to manually unlock the removable hard drives when
connected. The user then enters a password.

Trusted Platform Module (TPM)

TPM is a dedicated microprocessor that handles cryptographic operations.
The TPM is a hardware component installed in many newer computers.
The TPM chip works with BitLocker to help protect user data and to ensure
that a computer has not been tampered with while the system was offline.
If your computer hardware supports it, you should enable the TPM.
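You can check for a TPM from PowerShell. A sketch, assuming Windows 8 / Windows Server 2012 or later and an elevated prompt:

```powershell
# Report whether a TPM is present and initialized for use.
Get-Tpm | Select-Object TpmPresent, TpmReady
```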

TPM Scenarios

Encrypting hard drives. TPMs play an important role when
implementing BitLocker. BitLocker uses the computer's TPM to protect the
encryption key. As a result, users can access the encrypted drive as long as it is
connected to the system board that hosts the TPM and system boot integrity is
intact. In general, TPM-based protectors can only be associated with an operating
system volume.

Two-factor authentication. By leveraging a TPM with BitLocker, two-factor
authentication can be achieved at startup. For example, you might require a user to
provide a startup key or a PIN, in addition to the verification provided by the TPM. A PIN
consists of four to twenty digits or, if you allow enhanced PINs, four to twenty
letters, symbols, spaces, or numbers.

Performing platform verification. Computers with a TPM can use
integrity status reporting, which checks the integrity of the computer. If the
computer integrity comes back as clean and the user can authenticate, then the
user can gain access to the protected volume. For example, the computer checks
to ensure that the BIOS has not been tampered with, and then the user unlocks
the drive with a startup key or a PIN.
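The two-factor scenario above can be sketched with the BitLocker cmdlets. This assumes the Group Policy setting that allows a startup PIN is enabled; the mount point and PIN value are illustrative placeholders:

```powershell
# Require a PIN in addition to the TPM at startup.
$pin = ConvertTo-SecureString "135792" -AsPlainText -Force
Add-BitLockerKeyProtector -MountPoint "C:" -TpmAndPinProtector -Pin $pin
```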

BitLocker Steps

1. Enable the BitLocker Drive Encryption feature. The management tools are
optional, but usually desired.

2. Enable BitLocker on the drive. Use the BitLocker Control Panel or
PowerShell to enable BitLocker on the drive.

3. Decide how you want to protect the drive. For example, for the operating
system drive, you can configure additional authentication at startup in the form of a
startup key or PIN.

4. Decide where you want to store the recovery key. For example, on a USB
drive with a backup in Active Directory.

5. Turn it on. Turn on BitLocker for the volume and encrypt the drive.

The BitLocker recovery key is only used when the primary method to unlock the
drive cannot be used. For example, a user who knows the startup key or PIN
leaves the company or forgets their password, or an encrypted drive is moved to
another computer.
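Steps 2 through 5 above can be sketched in Windows PowerShell. This assumes a ready TPM and, for the Active Directory backup, that domain policy permits storing BitLocker recovery information; the mount point is illustrative:

```powershell
# Add a recovery password protector to the OS volume.
Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector

# Back the recovery password up to Active Directory.
$rp = (Get-BitLockerVolume -MountPoint "C:").KeyProtector |
    Where-Object KeyProtectorType -eq "RecoveryPassword"
Backup-BitLockerKeyProtector -MountPoint "C:" -KeyProtectorId $rp.KeyProtectorId

# Turn on BitLocker using the TPM, encrypting only used space.
Enable-BitLocker -MountPoint "C:" -TpmProtector -UsedSpaceOnly
```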

BitLocker Cmdlets
Instead of using the BitLocker Drive Encryption applet, as you did in the
previous topic, you can use Windows PowerShell to manage BitLocker.
Windows PowerShell allows you to automate BitLocker operations.
Additionally, it provides support for protectors not exposed through the
applet, such as Active Directory Domain Services authentication.

Take a moment to look at the "BitLocker Cmdlets in Windows
PowerShell" page. This is a good starting point for learning about the
BitLocker commands that are available. Review the cmdlets and answer
the following questions.

You can use the Get-Member cmdlet to view the cmdlet properties that
are available.
Get-BitLockerVolume | Get-Member
Once you know what properties are available, you can use the Select-Object cmdlet to view those specific property values.
Get-BitlockerVolume | Select-Object ComputerName,
ProtectionStatus, VolumeStatus, Capacity
Note: The pipe (|) symbol is used to pass information from one cmdlet to
another. This is referred to as a pipeline.

BitLocker Group Policy

BitLocker can be administered through Group Policy. Group Policy lets you
create configurations that can be applied to many computers at once,
rather than individually. Although this is an introductory course, it is
helpful at this point to see what BitLocker settings are available through
Group Policy.
By default, Group Policy settings on domain member computers are
refreshed every 90 minutes. You can run gpupdate /force at any time to
manually force an update of these settings.

BitLocker Recovery Best Practices

Recovering data from drives encrypted with BitLocker depends on the
protector and recovery mechanisms you have configured, but here are
some things to think about.

Make multiple copies of your recovery key files and properly secure them.

Create a naming convention for your files so you know which key goes with
which drive.

Practice unlocking your BitLocker drives and create a set of steps to use.

Use Group Policy to make sure that the entire organization is doing things the
same way.

Back up and store your recovery keys in Active Directory.

Educate your users on why BitLocker is important.
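As part of these practices, you can enumerate the recovery passwords currently configured so they can be copied into your key-escrow records. A sketch, assuming BitLocker is enabled on at least one volume:

```powershell
# List recovery password protectors across all BitLocker volumes.
Get-BitLockerVolume |
    Select-Object -ExpandProperty KeyProtector |
    Where-Object KeyProtectorType -eq "RecoveryPassword" |
    Select-Object KeyProtectorId, RecoveryPassword
```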

BitLocker vs. EFS

Administrators sometimes struggle to decide between BitLocker and EFS.
Although BitLocker and EFS might appear to be similar, both have specific
functionality and requirements, and as a result have different uses. The
products can also be used together to achieve higher levels of data and
system protection.
Study this table to see how EFS and BitLocker are different.

BitLocker                                     EFS

Encrypts entire drives and volumes            Encrypts individual files and folders
Encrypts the Active Directory database        Does not encrypt the Active Directory database
Implemented for all users and groups          Implemented by individuals
Enabled by the Administrator                  Enabled by the user
Requires TPM for full functionality           Does not require special hardware
Does not require user certificates            Requires user certificates
Support for ReFS                              No support for ReFS
Windows PowerShell cmdlets available          No dedicated PowerShell cmdlets available
