
CISSP

CISSP Essentials Security School


SearchSecurity.com's CISSP Essentials Security School offers free training for the CISSP certification exam. Benefit from a series of 10 training lessons that explore the fundamental concepts, technologies and practices of information systems security. Each lesson corresponds to a subject domain in the exam's "Common Body of Knowledge" (CBK), the essential elements each CISSP-certified practitioner must know. Each of the 10 lessons includes a 45-minute video presentation, a domain spotlight article that provides an insider's guide to each domain, and an exclusive quiz offering prep questions similar to those on the real CISSP exam.

CISSP Essentials Security School not only provides CISSP certification education with a thorough overview of the topics covered in the exam, but it also doubles as a comprehensive security resource that enables proactive information security professionals at all levels to keep their skills sharp and gain a greater understanding of how all the pieces in the information security puzzle fit together.

The 10 lessons in CISSP Essentials Security School are broken down into three domain groups. The first three domains focus on securing data and reveal the essential elements of building an organizational security program, including the theories, technologies and methodologies to protect every company's primary information asset: its data. Domains 4-6 focus on securing the infrastructure as they reveal the nuts and bolts of how best to apply security to everyday computer and business operations. Fundamental concepts explored in the sessions include how to effectively design security architectures, implement secure networks, and build security into applications and systems. Finally, domains 7-10 cover the business of security, an area that is ignored far too often in some of today's "status quo" enterprises. Security is often thought of exclusively in terms of technology, but corporate security is much more. It involves everything from governance, business management and regulatory compliance, to an understanding of physical security, disaster recovery and the law.

DOMAIN 1 - SECURITY MANAGEMENT PRACTICES

While viruses, worms and hacking grab all the news headlines, sound security management practices are the foundation of any organization's security success. CISSP Domain 1 explores:

- Security management responsibilities
- The core components of security management: risk management, security policies and security education
- Administrative, technical and physical controls
- Risk management and risk analysis
- Data classification
- Security roles and personnel security issues

Get more information on Domain 1: Security management best practices

DOMAIN 2 - ACCESS CONTROL

A cornerstone of information security is controlling how resources are accessed so they can be protected from unauthorized modification or disclosure. The controls that enforce access control can be hardware or software tools, which are technical, physical or administrative in nature. CISSP Essentials domain 2 tackles:

- Identification methods and technologies
- Biometrics
- Authentication models and tools
- Access control types: discretionary, mandatory and nondiscretionary
- Accountability, monitoring and auditing practices
- Emanation security and technologies
- Possible threats to access control practices and technologies

Get more information on Domain 2: CISSP Access Control

DOMAIN 3 - CRYPTOGRAPHY

Cryptography is one of the essential elements in the protection of electronic data. Most e-commerce applications rely on some form of encryption to protect the confidentiality and integrity of sensitive information as it transits the Internet. Encryption is also an essential component in protecting stored data from unauthorized access. CISSP Essentials domain 3 covers:

- Cryptographic components and their relationships
- Government involvement in cryptography
- Symmetric and asymmetric key cryptosystems
- PKI concepts and mechanisms
- Hashing algorithms
- Types of attacks on cryptosystems

Get more information on Domain 3: Cryptography strategies

DOMAIN 4 - SECURITY MODELS AND ARCHITECTURE

Two fundamental concepts in computer and information security are the security model, which outlines how security is to be implemented, and the architecture of a security system, which is the framework and structure of a system. CISSP Essentials domain 4 offers an in-depth review of:

- Computer architectures, from the core operating system kernel to the applications to the network
- Trusted computing base and security mechanisms
- Components within the operating system
- Different security models used in software development
- Security criteria and ratings
- Certification and accreditation processes

Get more information on Domain 4: Security models

DOMAIN 5 - TELECOMMUNICATIONS AND NETWORKING

This session prepares students for the CISSP exam by focusing on the "glue" of network security: how networks work, how data is transmitted from one device to another, how protocols transmit information, and how applications understand, interpret and translate data. Topics featured in this session include:

- OSI model
- TCP/IP and protocols
- LAN, MAN and WAN technologies
- Cabling and data transmission types
- Network devices and services
- Intranets and extranets
- Telecommunication protocols and devices
- Remote access methodologies and technologies
- Resource availability
- Wireless technologies

Get more information on Domain 5: Telecommunications and networking security

DOMAIN 6 - APPLICATIONS AND SYSTEMS DEVELOPMENT

Applications and computer systems are usually developed for functionality first, not security. But it's always more effective to build security into every system from the outset rather than "bolt" it on afterward. The exact reasons why are revealed in this CISSP domain through topics focused on:

- Different types of software controls and implementation
- Database concepts and security issues
- Data warehousing and data mining
- Software life cycle development processes
- Change control concepts
- Object-oriented programming components
- Expert systems and artificial intelligence

Get more information on Domain 6: Application and system security

DOMAIN 7 - BUSINESS CONTINUITY

One of the fundamental objectives of security is "availability": the ability to access computer data and resources whenever necessary. This session focuses on one of the often overlooked but critical aspects of availability: business continuity planning and disaster recovery. Topics in this CISSP certification prep section focus on:

- Business impact analysis
- Operational and financial impact analysis
- Contingency planning requirements
- Selecting, developing and implementing disaster and contingency plans
- Backup and offsite facilities

Get more information on Domain 7: Business continuity best practices

DOMAIN 8 - LAWS, INVESTIGATIONS AND ETHICS

Fraud, theft and embezzlement have always been an unfortunate fact of life, but the computer age has brought new opportunities for a different and more malicious set of thieves and miscreants. While many security professionals focus on "preventing" cyberattacks, the CISSP CBK teaches that it's equally important to understand how to investigate a computer crime and gather evidence; that's exactly what this session addresses. Additional topics highlighted are the information security regulations, laws and ethics that guide the practice:

- Ethics and best practices for security professionals
- Computer crimes and computer law
- Computer crime investigation processes and evidence collection
- Incident-handling procedures
- Different types of evidence

Get more information on Domain 8: Investigations and security

DOMAIN 9 - PHYSICAL SECURITY

Physical security has taken on added importance in the continuing wake of September 11, 2001. While most IT professionals are focused on logical systems (computers, networks, systems, devices), a comprehensive security program must address critical physical risks, too. The convergence of physical and logical systems makes this practice even more important. CISSP Essentials domain 9 covers:

- Administrative, technical and physical controls pertaining to physical security
- Facility location, construction and management
- Physical security risks, threats and countermeasures
- Fire prevention, detection and suppression
- Authenticating individuals and intrusion detection

Get more information on Domain 9: Securing the physical space

DOMAIN 10 - OPERATIONS SECURITY

Operations security pertains to everything needed to keep a network, computer system and environment up and running in a secure and protected manner. Since networks are "evolutionary" and always changing, it's essential that security pros understand the fundamental procedures for managing security continuity and consistency in an operational environment. CISSP Essentials domain 10 reveals essential answers centered on key operations security topics:

- Administrative and management responsibilities
- Product evaluation and operational assurance
- Change and configuration management
- Trusted recovery states
- E-mail security

Get more information on Domain 10: Enterprise operations security

Spotlight article: Domain 1, Security Management Practices


Security management embodies the administrative and procedural activities designed to secure corporate assets and information companywide. Security management facilitates the enterprise security vision by formalizing the infrastructure, defining the activities, and applying the tools and techniques necessary to control, monitor and coordinate security efforts across an organization. Fundamentally, information security assurance is a business issue that must be addressed in the context of the enterprise business framework. This article provides an overview of the challenges that constrain responsible security management and offers strategies as well as specific tools and techniques for evaluating, controlling, and implementing security across an enterprise. The following topics are included:

- Fundamental principles of information security
- Foundation security terminology
- Security roles and responsibilities
- Security policies, procedures, standards and guidelines
- Security risk management

Fundamental principles of information security

Information assurance is based on three core principles:

- Confidentiality: prevent unauthorized disclosure of sensitive information for data at rest, in transit or during transformation.
- Integrity: prevent unauthorized modification, replacement, corruption or destruction of systems or information.
- Availability: prevent disruption of service and productivity, addressing threats that could render systems inaccessible.

This section will touch briefly on examples of typical security vulnerabilities related to each of these principles (e.g., denial-of-service attacks related to availability) and on the challenges of mitigating them through security awareness, timely security patching, system hardening and remote-access control, encryption, network monitoring and intrusion response, and developer attention to fault tolerance and coding quality. Minimizing organizational damage is stressed, for instance, by swift response to intrusions and recovery from incidents using intrusion-response teams and efficient backup and recovery methods.

Foundation security terminology

The following foundation terminology will be introduced, defined and related: vulnerability, threat, risk, exposure and countermeasure.

Security roles and responsibilities

This section provides an overview of security responsibility as it relates to enterprise roles. The importance of layering security responsibilities across enterprise roles is stressed, and those roles typically connected to security are explored (e.g., data owner, custodian, user, auditor, senior manager and security professional). The challenges surrounding security awareness and training, hiring and termination practices, and operational security skill and knowledge levels (in support of these and other roles) are touched on. Deeper discussion is provided regarding the primary management roles (executive, administrative and operational). Government mandates, such as HIPAA and Sarbanes-Oxley, make it abundantly clear that management executives are ultimately responsible for the protection of all organizational assets, including private and proprietary information. Failure can result in stiff corporate -- and even personal -- penalties. Therefore, greater emphasis is placed on the exploration of executive management responsibility, which covers formalizing the security program and its leadership, and ensuring that, above all else, management understands, respects and upholds its legal and ethical obligations to its employee workforce, owners or stockholders. Thus, a top-down, rather than bottom-up, approach is stressed, as is interlocked layering of security efforts across the enterprise to provide appropriate security oversight and redundancy; the challenges posed by competing strategic, tactical and operational goals are also covered. On the tactical side, administrative and operational security responsibilities include translating executive policies into actionable processes and procedures, the adoption of standards and guidelines that support the security program, the development of procedures and processes, and the vigilant monitoring and enforcement of these measures to ensure compliance with executive management policy. Throughout this section, the need for due care and diligence, separation of duties and other generally accepted information security practices is emphasized.

Security policies, procedures, standards and guidelines

Security policies are the tangible manifestation of executive management's security vision. Policies provide explicit direction for tactical and administrative efforts. The anatomy of a quality enterprise security policy is comprehensively covered, including discussion of the mission statement, organizational security policy elements, issue-specific policy elements that target areas of concern, and system-specific policy elements. The unique characteristics that make up different policy types, such as regulatory, advisory or informative, are also contrasted, as are the areas of control, including accountability measures, physical and environmental tactics, administrative and access controls, cryptography, business continuity planning, and computer operations and incident handling. Security policy defines the objectives; standards, baselines and guidelines provide the methods that will be used to accomplish them. From these methods, procedures are derived that transform policies into actionable tasks, providing step-by-step instructions to ensure that the organization remains in compliance with security policy.

Security risk management

One of the most important aspects of security management is learning how to judge and justify security investment. Security risk analysis must consider a broad range of enterprise impacts, including the cost of physical environment damage, human error, equipment malfunction, hacking (from inside and outside the organization), misuse or loss of data, and application error, among others. Applicable risk management techniques are discussed, and clear guidance is provided on how to approach the challenge of balancing investments in security against other organizational requirements, whose importance may differ greatly depending on whether the organization operates in the public, military or private sector. Included are overviews of concepts such as data classification, which describes the sensitivity of data (including classification levels as they are applied to private versus military organizations), risk prioritization, cost/benefit comparisons and corporate security risk mitigation strategies. The relationship of threat agents to vulnerabilities, and the types of risks they can induce, are also presented. Risk analysis depends heavily on asset and information valuation, which can vary widely among individuals in an organization; therefore, multi-disciplinary involvement is recommended. Either a quantitative (fact-based) or qualitative (perception-based) approach can be used, applied by manual or automated means, and the advantages of each are contrasted. A systematic, quantitative approach is described in detail, which includes determining what enterprise requirements must be fulfilled, approaches to input gathering, determining loss potential (immediate or delayed), assigning cost/benefit quotients, adjusting for the cost of applying countermeasures, identifying potential threats (including those resulting from non-malicious stimulus), estimating threat frequency, and selecting the optimal countermeasures that will transfer or reduce risks. Step-by-step instructions are provided on how to calculate exposure factors, annualized rate of occurrence, single and annualized loss expectancy, and total versus residual risk. The range of options for mitigating risk is explored, as are the functionality and effectiveness of common solutions. The alternative qualitative approaches discussed, which include the Delphi technique for group decision-making, storyboarding, brainstorming and surveys, give the reader a well-rounded overview of risk analysis options.
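The quantitative calculations mentioned above lend themselves to a worked example. Below is a minimal sketch in Python; the asset value, exposure factor, occurrence rates and countermeasure cost are all hypothetical numbers chosen for illustration.

```python
# A minimal sketch of the quantitative risk formulas described above.
# All dollar values, percentages and rates below are hypothetical.

def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: expected loss from one occurrence of the threat.
    exposure_factor is the fraction of the asset's value lost (0.0-1.0)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE: expected yearly loss; ARO is the annualized rate of occurrence."""
    return sle * aro

# Example: a $200,000 data center, where a flood would destroy 25% of it,
# expected once every 10 years (ARO = 0.1).
sle = single_loss_expectancy(200_000, 0.25)          # $50,000
ale_before = annualized_loss_expectancy(sle, 0.1)    # $5,000 per year

# A countermeasure costing $1,500/year (say, flood barriers) that cuts
# the ARO to once every 50 years (0.02):
ale_after = annualized_loss_expectancy(sle, 0.02)    # $1,000 per year
net_value = ale_before - ale_after - 1_500           # $2,500/year net benefit

print(f"SLE=${sle:,.0f}  ALE before=${ale_before:,.0f}  "
      f"ALE after=${ale_after:,.0f}  net value=${net_value:,.0f}/yr")
```

The same arithmetic, with real asset valuations and threat frequencies, is what justifies (or rejects) a proposed security investment.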

CISSP Essentials training: Domain 2, Access Control


Access controls enable the protection of security assets by restricting access to systems and data by users, applications and other systems. Without a doubt, access controls are the cornerstone of any enterprise information security program. In this CISSP Essentials Security School lesson, Domain 2, Access Control, expert CISSP exam trainer Shon Harris offers a video presentation detailing how access controls support the core security principles of confidentiality, integrity and availability by inducing subjects to positively identify themselves, verify they possess appropriate credentials, and demonstrate the necessary rights and privileges to obtain access to the target resource and its information. Key focus areas include access control principles; administration and practices; models and technologies; types, methods and techniques; and threat monitoring. Prepare for the video by reading the Domain 2 spotlight article, which provides an in-depth look at the challenges and principles behind access controls, their diverse variety, what threats they can mitigate and the challenges of selecting, implementing and administering them.

Host: Welcome to SearchSecurity CISSP Essentials, Mastering the Common Body of Knowledge. This is the second in a series of ten classes exploring the fundamental concepts, technologies, and practices of information systems security corresponding to the CISSP's Common Body of Knowledge. In our last class, we explored security management practices. Today's class will examine topics covered in the second domain of the CBK, access control. The cornerstone of information security is controlling how resources are accessed, so they can be protected from unauthorized modification or disclosure. The controls that enforce access control can be hardware or software tools, which are technical, physical, or administrative in nature. In this class, lecturer Shon Harris will cover identification methods and technologies, biometrics, authentication models and tools, and more. Shon Harris is a CISSP, MCSE, and president of Logical Security, a firm specializing in security education and training. Logical Security provides training for corporations, individuals, government agencies, and many other organizations. You can visit Logical Security at www.logicalsecurity.com. Shon is also a security consultant, a former engineer in the Air Force's Information Warfare unit, and an established author. She has authored two best-selling CISSP books, including CISSP All-in-One Exam Guide, and was a contributing author to the book Hacker's Challenge. Shon is currently finishing her newest book, Gray Hat Hacking: The Ethical Hacker's Handbook. Thank you for joining us today, Shon.

Shon Harris: Thank you for having me.

Host: Before we get started, I'd like to point out several resources that supplement today's presentation. On your screen, the first link points to a library of our CISSP Essentials classes, where you can attend previous classes and register to attend future classes as they become available. The second link on your screen allows you to test what you've learned with a helpful practice quiz on today's material. And finally, you'll find a link to the Class 2 Spotlight, more detailed information on this domain. And now, we're ready to get started. It's all yours, Shon.

Shon Harris: Thank you. Thank you for joining us today; we are going to go over the Access Control domain. This is a very large domain for the CISSP exam. Students don't usually find it as difficult as the other domains, but it does have a lot of material in it. In this domain we talk about different access control types, technologies and methods for authentication and authorization, we'll look quickly at some of the models that are integrated into applications and operating systems that control access, and how subjects and objects communicate within the software itself. We also need to understand how to properly administer access to the company's assets. And then we'll quickly look at intrusion detection systems. Now, in the last class that we had, I talked about different types of controls, and I said that this is a theme throughout all of the domains in the Common Body of Knowledge: it's important for you to not only understand the different types of controls -- physical, technical and administrative -- but you'll also need to know examples of each kind, and how they apply within the individual domain.

Since we're talking about access control right now, we have listed some of the controls that a company can put in place to control either physical or logical access. Physical controls, of course, can be that you actually have locks, you have security guards, you have fences, you have sensitive areas that are blocked off that maybe need some type of swipe-card access. Technical controls would be what you would think of: logical access controls that are built into applications, operating systems, biometrics, encryption. And administrative controls also come into play in access control, although a lot of people don't think about them. You need to have a security program that outlines the role of security within your environment, but also who is allowed to access what assets, and the ramifications if these expectations are not met. Now even though we have three categories of controls, we also have different characteristics that individual controls can provide. The individual controls -- and when we say controls, it's the same thing as a countermeasure or a safeguard, a mechanism that is put in place to provide some type of security service -- these different controls can provide different types of services. The control can be preventative, meaning that you're trying to make sure something bad does not take place. Preventative controls could be developing and implementing a security program, or encrypting data; you encrypt data to try to prevent anybody from reviewing your confidential information. Something that provides a detective service is something that you would look at after something bad happened. So maybe a system goes down; you're going to look at the log to try to piece back together what took place, and try to figure out how to fix the problem. Intrusion detection systems are detective controls, because they're looking after the fact, maybe after an attack took place. Corrective means that something bad has already happened, and you have controls that can fix the problem and get the environment or the computer back to a working state. An example of a corrective control could be antivirus software. Once a file actually gets infected, your antivirus software, if it's configured to do this, will try to strip the virus out of the infected file; it's going to try to correct the situation. There are other types of corrective controls within your operating systems and software. These entities will save state information. State information is really how the variables are populated at a certain snapshot in time. Applications and operating systems will save state information, so if there's some type of a glitch, maybe a power glitch, or maybe an operating system glitch, it will try to correct the situation, bring you back to that state, and save your data. Now, deterrent. Some people have a problem, or don't really understand the difference between deterrent and preventative. Deterrent is that you're trying to tell the bad guy, "We're protecting ourselves, we're serious about security, so you need to move on to an easier target." Preventative means you're trying to prevent something bad from taking place. Deterrent means that you have something that's actually visible in some way, to tell the possible intruder that they're going to have to go through some work to actually carry out the damage they want to. And an example is when people actually put "Beware of Dog" signs up.
Some people may not even have a dog, but they're just trying to tell the bad guy, "Go away, because we have some type of security mechanism."

And if you have an access control that provides recovery, there are different kinds of mechanisms that can provide different types of recovery; for example, if your data actually got corrupted, you need to recover the data from some kind of backup. And compensating just means there are alternate controls that provide similar types of service that you can choose from. Now you need to be familiar with the combinations, like administrative detective; you need to understand physical detective, technical detective, and you also need to know administrative preventative, technical preventative, physical preventative. And what I mean by this is that you need to understand, if something is detective and administrative, what types of tasks it's trying to accomplish, and you need to know examples of each kind. Here we have some examples of detective administrative. With job rotation, a lot of people don't understand how it's detective in nature, but it is; it's a very powerful control that companies can put in place. The security purpose of using job rotation is that if you have one person in a position, and they are the only one who carries out that job, they're the only one who really knows how that job is supposed to be carried out and what they're doing; they could be carrying out fraudulent activity and nobody else would know. So the company should rotate individuals in and out of different positions, to possibly uncover fraudulent activity taking place. And this is definitely a control that's used in financial institutions. Now, this is different than separation of duties. Separation of duties would be an administrative preventative control. Separation of duties means that you want to make sure that one entity cannot carry out a critical task by itself, so you split that task up, so that two entities each have to carry out their piece of the task before the task can be completed; that's preventative because you're trying to prevent fraudulent activity from taking place. Job rotation is detective, because you rotate somebody into a new position, and they may uncover some of the fraudulent activity that could've been happening. We also have examples of detective technical -- again, that's after the fact: intrusion detection systems, reviewing logs, forensics -- and detective physical, physical controls that can be used to understand what took place, whether you now need to go after the bad guy, get the environment back up to a working state, or start collecting evidence for prosecution.
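The category-and-function pairings Shon says you must memorize are easy to codify as a study aid. The sketch below is just that -- a memorization aid, not an authoritative taxonomy; the pairings and examples are drawn from the discussion above.

```python
# A study aid: the control matrix described above, pairing each control
# category with examples of each control function. Examples are
# illustrative, taken from the surrounding discussion.

CONTROL_MATRIX = {
    ("administrative", "preventive"): ["separation of duties", "security policy", "hiring practices"],
    ("administrative", "detective"):  ["job rotation", "management review of logs"],
    ("technical",      "preventive"): ["encryption", "firewalls", "authentication"],
    ("technical",      "detective"):  ["intrusion detection systems", "log review", "forensics"],
    ("technical",      "corrective"): ["antivirus stripping a virus from an infected file"],
    ("physical",       "preventive"): ["locks", "fences", "security guards"],
    ("physical",       "detective"):  ["motion sensors", "CCTV review after an incident"],
    ("physical",       "deterrent"):  ['"Beware of Dog" signs', "visible cameras"],
}

def examples(category, function):
    """Look up examples for a (category, function) pairing."""
    return CONTROL_MATRIX.get((category, function), ["(fill in while studying)"])

print(examples("administrative", "detective"))  # ['job rotation', 'management review of logs']
```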
Now, there are different authentication mechanisms that we use today, and they all have one or more of these characteristics: it's something that you know, something that you have, or something that you are. Something that you know would be like a PIN, a password, a passphrase. Something that you have would be a memory card, a smart card, a swipe card. And something that you are is a physical attribute, so something that you are is talking about biometric systems, and we're going to look at different types of biometric systems in a minute. Now, most of our authentication systems today just use one of these, and combine it with a username or a user account number, and that's considered one-factor. Two-factor means that you have two steps necessary to actually carry out the authentication piece. Two-factor authentication provides more protection; it's also referred to as strong authentication. So there are different types of mechanisms to authenticate individuals or subjects before they can actually access subjects or objects within your environment. These are the ones that you need to know for the CISSP exam. Biometrics is a type of technology that will review some type of physical attribute, and in biometrics, what happens is that an account is set up for an individual, and there's an enrollment period to go through. So for example, let's say the administrator sets up an account for me, and the biometric system that we're using is fingerprint-based, so I will put my finger into a reader. Now this reader is going to read specific vectors on my finger, and extract that information. That information from my finger is held in a profile or reference file, and then it's put into a back-end database, just as our passwords and such would be kept in a back-end database. When I need to authenticate, I will put in some type of username or user account number, and then I will put my finger into the reader. The reader will read the same vectors on my finger and compare them to the profile that is kept in the database. If there's a match, then I'm authenticated. Token devices: there are several different types of token devices. Within the CISSP exam, you need to understand the difference between synchronous and asynchronous token devices. These devices actually create one-time passwords, and one-time passwords provide more protection than a static password, because you only use each one once. If the bad guy sniffs your traffic and uncovers that password, it's only good for a short window of time. So with token devices, synchronous is time- or event-based, and asynchronous is based on challenge-response. And for the exam, you need to understand the differences between them, how they work, and the security aspects of both of them. The other authentication mechanisms are memory cards and smart cards, and cryptographic keys. Cryptographic keys means that you're actually using your private key to prove you are who you say you are. It's not a public key, and when you go into cryptography you truly understand the difference between a private key and a public key, and the different security characteristics each of them provides. But today we use cryptography for a lot of different reasons; if you're using cryptography for authentication, you're providing your private key. Now, within biometrics, we have different types of errors that we need to be aware of. A type one error means that somebody who should be authenticated and authorized to access assets within the environment is rejected. So if we experience a type one error, that means that someone who should've been authenticated was not. So in the scenario I went through: I went through an enrollment period, the system has a profile on me, I've gone through the steps of being authenticated, and it shuts me out, it doesn't let me in. And this can happen with biometrics. If it's a fingerprint reader, then of course if you have a cut, or you have dirt; with voiceprint, if you have a cold, or if there's some type of problem with your voice. So because biometrics looks at such sensitive information, there is a chance for more type one errors. Now, type two errors are actually more dangerous than type one errors: it means you're allowing the imposter, you're allowing somebody to authenticate who should not be able to authenticate. So let's say Bob has never gone through an enrollment period, and should not be allowed to access our company assets, but he goes through the process of authentication and the biometric system lets him in; that's a type two error. Now there's a metric, the Crossover Error Rate, and the CER value is used to determine the accuracy of different biometric systems. The definition of the CER value is the point where type one errors equal type two errors. That means you have just as many type one errors as you have type two errors.
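The one-time passwords generated by the synchronous, time-based token devices mentioned above can be illustrated with a short sketch. Below is a minimal time-based OTP in the style of RFC 6238, using only Python's standard library; the shared secret is a made-up example, and a real token would hold it in tamper-resistant hardware.

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238 style): the token device
    and the server derive the same short-lived code from a shared
    secret and the current 30-second time window."""
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

shared_secret = b"12345678901234567890"               # provisioned at enrollment
print(totp(shared_secret))                            # a 6-digit code, valid ~30 seconds
```

Sniffing one of these codes buys an attacker only the remainder of the current window, which is exactly the property Shon describes.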
And the reason that we use this metric is that when you get a biometric system, you calibrate it to meet your needs and your environment. The more sensitive you actually make your biometric system, the more you will have a reduction in type two errors -- you're trying to keep the bad guy out -- but you're going to have an increase in type one errors, meaning that the people who are supposed to be authenticated are going to be kept out. So companies have to strike a balance between type one errors and type two errors, because if you have too many type one errors, people who are supposed to be authenticated are not getting in. So you will calibrate this device to meet the necessary sensitivity level for your environment. And let's say you and I are customers looking at different biometric devices. How do we know that one is more accurate than the other? We don't know that; we as customers don't know that. We can go by the vendor's hype on how great their product is, but what's better is to have biometric systems that are tested by a third party, which comes up with an actual rating, a CER rating. So a CER rating indicates how accurate the biometric system is: the lower the CER rating, the more accurate the biometric system. Now, there are several different types of biometric systems used today; some of them, of course, we use more often than others. Biometrics has been around for a long time, and it really wasn't that popular until after 9/11, mainly because society has kind of a pushback to biometrics: it seems to get too much into our space, it's too intrusive. We're used to having to provide a PIN or password. So for the CISSP exam, you need to know the difference between all of the biometric system types. For example, the retina scan will look at the blood vessel patterns behind the eye, and the iris scan will look at the color patterns around the pupil. Signature dynamics is something different than you actually just signing a check: signature dynamics collects a lot of electrical information on how you sign your name, how you hold the pen, the pressure that you use, how fast you sign your name, and the static result, your actual signature. So it's easy to forge somebody's signature, but it's very difficult to write their name exactly as they write it. You also need to know the difference between hand topology and hand geometry. Hand topology is a side view of the hand, where geometry is the top view of the hand. So with a topology reader, you would put your hand in, and there's a camera or a reader off to the side that will look at the thickness of your hand, your knuckles and such, and geometry has the reader on top that looks at the whole view of your hand.
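The calibration trade-off described above -- raising the threshold cuts type two errors but raises type one errors -- and the crossover point can be seen in a tiny simulation. The match scores below are fabricated for illustration; real systems derive score distributions from enrollment and verification attempts.

```python
# A toy illustration of calibrating a biometric matching threshold.

genuine  = [0.91, 0.85, 0.78, 0.65, 0.70, 0.95, 0.82, 0.74]  # legitimate users
impostor = [0.35, 0.52, 0.72, 0.44, 0.68, 0.66, 0.57, 0.49]  # should be rejected

def error_rates(threshold):
    frr = sum(s < threshold for s in genuine) / len(genuine)     # Type I: false rejection
    far = sum(s >= threshold for s in impostor) / len(impostor)  # Type II: false acceptance
    return frr, far

# Sweep thresholds; the crossover error rate (CER) is where FRR == FAR.
for t in [x / 100 for x in range(50, 86, 5)]:
    frr, far = error_rates(t)
    marker = "  <- crossover (CER)" if abs(frr - far) < 0.01 else ""
    print(f"threshold={t:.2f}  FRR={frr:.3f}  FAR={far:.3f}{marker}")
```

With these made-up numbers the crossover lands at a threshold of 0.70, where both rates are 0.125; a lower CER would indicate a more accurate device.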
Now, I said that memory cards and smart cards are used for authentication. Memory cards are very simplistic: they just have a strip on the back of the card that holds data. That's all they do, they just hold data; a memory card doesn't have any intelligence, it can't do anything with the data. A smart card is different: it actually has a microprocessor, it has integrated circuits, and you have to authenticate to your smart card to actually unlock it, because your smart card is like a little tiny computer; it can actually do stuff, it can process data. And when you look at a smart card, you see that there's a gold seal. That gold seal is the input-output channel through which your card communicates with the reader. So you put your card into a reader, and not only is it going to communicate with the reader through that gold seal, it's also going to get its power from the reader. So you put your card in, and you put in your PIN. If you put it in correctly, you unlock the smart card. And the smart card could do a lot of things; it depends on how it's coded. It could hold your private key if you're working within a PKI and need to authenticate, it could respond to a challenge-response, it can create a one-time password, it could hold a lot of your work history information. And smart cards are really catching on more today. They've been popular in many different countries outside of the United States, and the United States is now catching up: the military has changed all of its IDs over to smart cards, and a lot of credit cards are using smart cards. They provide more protection because you have to authenticate to them, and it's also harder to compromise a smart card. In fact, with some smart cards, if I put my PIN in incorrectly, let's say four times, the card can actually lock itself, and I have to call the vendor to get an access code to override that. Some smart cards will actually physically burn out their own integrated circuits: if I try a brute-force attack -- authenticate, authenticate, authenticate -- after I reach a certain threshold, the card will just go ahead and burn its own circuits, so that the physical card cannot be used anymore.
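That retry-counter behavior is simple to model. A toy sketch follows; the PIN, the four-attempt threshold and the lockout behavior are illustrative, and real cards enforce all of this inside tamper-resistant hardware.

```python
# A toy model of the smart card retry counter described above.

class SmartCard:
    MAX_ATTEMPTS = 4                 # threshold is card-specific in practice

    def __init__(self, pin):
        self._pin = pin
        self._failures = 0
        self.locked = False

    def verify_pin(self, attempt):
        if self.locked:
            raise RuntimeError("card locked - contact the issuer for an unlock code")
        if attempt == self._pin:
            self._failures = 0
            return True              # unlocked; keys and card functions now usable
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.locked = True       # brute-force threshold reached
        return False

card = SmartCard(pin="4921")
for guess in ("0000", "1111", "2222", "3333"):
    card.verify_pin(guess)
print(card.locked)                   # True - further attempts are refused
```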
Now in this domain, we also look at single sign-on technologies. The goal of single sign-on is that users only have to type in one credential set, and they'll be able to access all of the resources within the environment they need to carry out their tasks. This is kind of a big thing in some markets today; there are a lot of companies trying to accomplish single sign-on through their products. And that helps the user, because today, in most environments, users have to have several different usernames and passwords, or whatever credential sets, to be able to access different types of servers and different types of resources. That also adds to the burden of administration, when we've got all of these different types of systems and have to keep all of these various user credential sets. So with single sign-on, you just log in once, and you access the resources you need -- but it's kind of a utopia that a lot of companies are going after right now. The difficulty is that you're trying to get a lot of diverse technologies to be able to understand and treat one credential set the same; if you have five different flavors of UNIX in your environment, different Windows versions, and legacy systems, it's not easy to accomplish. So even though there are a lot of different technologies for single sign-on, these are the ones that will be covered in the CISSP exam that you need to know about. Now, Kerberos is an authentication protocol that's been around for quite some time. It's been integrated and used in UNIX environments, and it's catching on more and more today, mainly because Microsoft has integrated it into its Windows 2000 family. In fact, Windows 2000 will try to authenticate you through Kerberos first, and if your system is not configured to be able to do that, then it'll drop down to another authentication method. Now, Kerberos is based on symmetric key cryptography, which is important. And there are different components within Kerberos: we have the KDC, the Key Distribution Center, and principals and realms.
Now, a quick overview: within a network that is using Kerberos, users and services cannot communicate directly. I cannot communicate directly with a print server; I have to go through steps to properly authenticate myself. Remote services cannot communicate directly with each other; they have to be authenticated. So the KDC holds all of the symmetric keys for all of the principals. And just like with other technologies, each technology can come up with its own terms. If you're not familiar with principals or realms: a KDC is responsible for all the services and users within an environment. All those services and users are referred to as principals, and the environment is referred to as a realm. If you're familiar with the term domain within Microsoft, where a domain controller is responsible for one domain, it's the same type of thing: a KDC is responsible for one realm, and a realm is made up of principals. Now, the KDC is made up of two main services, the authentication service and the ticket granting service, and I'll just quickly walk through the steps that happen within Kerberos. Let's say I come in to work, and I need to authenticate. So I'm going to send my username over to the KDC, actually to the authentication service on the KDC. I send over my username, but I do not send over my password, and that's a good thing: since the password is not going over the network, somebody can't grab it and try to use it. So I send over my username, the KDC will look up to see if it knows Shon Harris -- it does know Shon Harris -- and it's going to send over an initial ticket that's encrypted. The initial ticket gets to my computer, and my password is converted into a secret key. That secret key is used to decrypt the initial ticket. If I've typed in the correct password, I can decrypt this initial ticket, and if that takes place properly, I am now authenticated to my local system. If I need to communicate with something outside of myself -- here we have a file server -- I have to send the initial ticket over to the ticket granting service. And basically that initial ticket says to the ticket granting service, "Look, I've authenticated; I need to talk to the file server; create another ticket for me." The ticket granting service will create a second ticket, a service ticket, and it'll have two session keys. It comes over to me, I pull out one of the session keys, and I send the other over to the file server, and we both have session keys now. That was a really quick overview; these steps actually get a little deeper, a little more complex, and you need to understand them for the exam, along with some of the downfalls of Kerberos itself. Now, a large portion of this domain goes into different access control models. And these models -- most people, I find, aren't really familiar with where these models come into play -- are the core of an application or an operating system. When a vendor actually builds an operating system, they have to make a decision, before they even write a piece of code, about what type of access control is going to be used within their product. Discretionary access control means that data owners can choose who can access their files and their data. And this is the model and environment that we're most used to, because all of Windows, Macintosh, and most flavors of UNIX and Linux work on DAC. It just means that you can determine who can access what files and directories within your system. And you know you're working on a DAC system if, in Windows, you do a right-click, look at Security Properties, and you can see who has read access, who has full control, and you can choose who has these levels of access.
In a Linux environment, if you can use the chmod command and change the actual attributes on files, you're working in a DAC model, because it allows you to make those decisions.
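Here is a minimal sketch of that discretionary decision on a Unix-like system, using only Python's standard library; the file is a throwaway temp file, and the mode bits mirror chmod 600 and chmod 644.

```python
# DAC in practice: the file's owner decides who may read it.
import os, stat, tempfile

fd, path = tempfile.mkstemp()   # a file this process owns
os.close(fd)

# Owner read/write only - equivalent to `chmod 600 file`.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

# The owner later grants read access to group and others (`chmod 644`).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

print(oct(os.stat(path).st_mode & 0o777))  # 0o644 - set at the owner's discretion
os.remove(path)
```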

Now, that's different than a MAC model. In Mandatory Access Control, the system makes the decisions; the data owners and the users do not. In the MAC model, the operating system will make decisions based on the clearance of the user and the classification of the object. So let's say I have top-secret clearance and I'm trying to access a file classified secret: my clearance dominates the classification, and the operating system will allow me to access that file. So Mandatory Access Control is much more strict; it doesn't allow users to make any decisions or configuration changes. And MAC systems are used in more of the government and military environments, where secrets really have to be kept secret. Role-based just means that you're setting up roles or groups, you assign rights or permissions to those roles or groups, and then you put users in them, and they inherit those rights. And even though I'm going very quickly over these, you need to know a lot of this material to much more depth than I'm actually covering for the exam: when you use these types of models, in what environments, the characteristics of the different models, the mechanisms used in the models, the restrictions, all of that. Most of us are not familiar with Mandatory Access Control systems unless we work in that type of environment. The most recent operating system that came out that's MAC, Mandatory Access Control, is SELinux, Security-Enhanced Linux, which was developed by the NSA and Secure Computing. Now, we also need to be concerned about how we're controlling access. So far we've looked at some controls that we could put in place, the different types of controls, what characteristics those controls provide -- preventative, corrective, deterrent -- and we looked at authentication mechanisms that provide either something that you know, something that you have, or something that you are. And we looked at single sign-on technologies. Now we need to look at how we properly administer the control, especially remotely. In today's environment, we have a lot of remote users that need to access our corporate assets, so how do we deal with that? The three main protocols we need to know about are RADIUS, TACACS, and Diameter. Now, RADIUS has been around for a long time. It's an open protocol, and anytime anything is referred to as open, it means it's available for vendors to manipulate the code to work in their product. So different vendors have taken the RADIUS protocol and manipulated it enough to work seamlessly in their products, and we have different flavors of RADIUS. Each one of the protocols that we're talking about is referred to as a triple-A protocol, which means it carries out authentication, authorization, and auditing. And auditing also includes something that most people don't think about: auditing is the way for ISPs to keep track of the amount of bandwidth being used, so they can charge the corporation properly. So it's not just auditing to keep track of what happened; it's also used in billing activities. So with RADIUS, just to walk through a quick scenario, let's say I want to access the Internet. I will connect to my ISP, and my ISP will go through a handshake of how authentication will take place, but what I'm communicating with is an access server. This access server is actually the RADIUS client -- I am not a RADIUS client, the access server is a RADIUS client. The RADIUS client communicates with a RADIUS server. The RADIUS server is the component that has all of the usernames and passwords and the different connection parameters.

So I will send my credentials over to the RADIUS client, which is the access server. The RADIUS client will send that information over to the RADIUS server, and the RADIUS server will determine if I've entered the right credentials or not, and send an accept or decline back to the RADIUS client. Now, the RADIUS server won't just send an accept or decline; it can also indicate the type of connection that needs to be set up, how much bandwidth I'm allowed, whether I need a VPN set up, and so on. And that's just to get on the Internet. A lot of times when you need to communicate with your corporate environment, you're going to have to go through a second set of authentication, and a lot of companies do use RADIUS in that mode also. Now, TACACS is not an open protocol; TACACS is a proprietary protocol, developed and owned by Cisco. So they're not going to allow you to have the code for free and manipulate it as you see fit; it works with their products. TACACS has gone through generations: TACACS, extended TACACS, and now TACACS+. Even though TACACS and RADIUS basically do the same thing, there are some differences. The authentication, authorization, and auditing pieces of TACACS+ are separate. In RADIUS they're all clumped together -- you get them all, you don't have a choice. In TACACS+, the administrator who is configuring and setting this up can choose which services he or she actually wants. TACACS+ also allows the administrator to come up with more detailed profiles than RADIUS, meaning that if you have Sally authenticating remotely into the corporate world, she can have her own profile on what she can and can't access, which can be different than Mark's. That's something different. Also, TACACS+ communicates over TCP, which is a reliable transport protocol, where RADIUS goes over UDP. And RADIUS does not protect the communication between the RADIUS client and the RADIUS server as well as TACACS+ does: between the RADIUS client and the RADIUS server, just the user password is encrypted, while with TACACS+, all of the communication going back and forth between the client and the server is encrypted. Now, Diameter is a new protocol that a lot of people don't know about; the goal is that it's supposed to possibly replace RADIUS. The reason that Diameter was created is that we have a lot of devices today that need to be able to authenticate differently than the traditional methods. The traditional methods of remote authentication at one time happened over SLIP connections; now it's happening over PPP connections, using traditional ways of authenticating: PAP, CHAP, or EAP. But we have wireless devices, we have smartphones, we have a lot of devices that can't, or don't, communicate over these types of connections, that don't have the resources for a full TCP/IP stack, and that may need to authenticate in different ways. So Diameter allows for different types of authentication; it really opens the door to the types of devices that can authenticate to your environment, and how that authentication can take place. Now, I didn't come up with this, they came up with this: whoever created Diameter said "Diameter is twice the radius." If you get it -- radius and diameter -- they're saying Diameter is twice RADIUS. So I guess if you come up with a new protocol, you can come up with your own goofy name.
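The triple-A pattern these protocols share can be reduced to a skeleton. The sketch below is generic and in-memory: the usernames, policy fields and log records are all hypothetical, and real AAA servers speak RADIUS, TACACS+ or Diameter over the network rather than calling a local function.

```python
# The triple-A pattern shared by RADIUS, TACACS+ and Diameter, reduced
# to its skeleton: authenticate, authorize, account.
import time

USERS = {"sally": "s3cret"}                                             # authentication data
POLICY = {"sally": {"max_bandwidth_kbps": 512, "vpn_required": True}}   # authorization profiles
AUDIT_LOG = []                                                          # accounting/auditing records

def aaa_login(username, password):
    authenticated = USERS.get(username) == password            # 1. authentication
    profile = POLICY.get(username) if authenticated else None  # 2. authorization
    AUDIT_LOG.append({"user": username, "ok": authenticated,   # 3. accounting - also
                      "time": time.time()})                    #    the basis for billing
    return profile

print(aaa_login("sally", "s3cret"))   # {'max_bandwidth_kbps': 512, 'vpn_required': True}
print(aaa_login("mark", "guess"))     # None - declined, but still logged
```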
Now, in this domain we also go through intrusion detection systems. The two basic approaches to IDS are network-based and host-based. Network-based means that you have sensors in different locations within your environment, and you have to be aware that wherever the traffic is that you're trying to monitor, that's where you need to have a sensor.
A sensor is either a dedicated device, or it's a computer running IDS software with its network interface card put in promiscuous mode. Promiscuous mode just means it can look at all the traffic going back and forth. So another thing is, where do you actually place that sensor? Do you want it in your DMZ? A lot of companies put an IDS sensor in their DMZ. Companies have to make a decision about whether they're going to put a sensor in front of their firewall, facing an untrusted environment. If you put a sensor outside of your environment, in front of your firewall, you're going to get an amazing amount of traffic, so a lot of companies don't do that, because there's so much junk, so much traffic. Some companies that require a higher level of protection will put a sensor outside the firewall to find out who's knocking, and start to gather statistics to try to predict the types of attacks that may take place. Because there are certain things -- certain pings, sweeps, probes, and activities -- from which you can determine, "Okay, this is what the person's after; this is the type of attack we need to be prepared for." So network-based is different than host-based. Host-based just means that you've got some type of software on an individual system, and that's its world: it doesn't understand network traffic, it doesn't care about network traffic. It just cares about what's going on within that single computer. A host-based IDS would be looking at user activity, what's trying to access system and configuration files, what types of access are coming in from the outside, and trying to determine any type of malicious activity. We can also split IDSes up into signature-based and behavioral-based. Now, signature-based is basically the same type of idea as antivirus software. Antivirus software has signatures, which are just patterns it tries to map to an actual virus. In an IDS, we have signatures, which are patterns the IDS uses to determine if a certain attack is going on. For example, there's an attack called the LAND attack. In the LAND attack, the source and destination IP address and port are the same -- but in a legitimate packet, the source and destination IP addresses have to be different: I send you a packet with my source IP address and your destination IP address. Well, if I'm a bad guy, I can manipulate that header and set both the source and destination address to be yours, and if your system is vulnerable to a LAND attack, that just means your TCP/IP stack has no idea how to interpret that. That's an example of a signature: if an IDS has that signature and it finds a packet with the same source and destination address, it will alert that this type of attack is going on. There are other types of attacks that can be identified this way: the Xmas attack, where all the flags within the headers are turned on, and a lot of different types of fragment attacks, where the offsets within the headers are incorrect.
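The LAND signature is about the simplest one there is, and it fits in a few lines. In the sketch below, packets are modeled as plain dictionaries with made-up addresses; a real sensor would parse live frames captured in promiscuous mode.

```python
# The LAND signature reduced to code: flag any packet whose source and
# destination endpoints are identical.

def is_land_attack(pkt):
    return pkt["src_ip"] == pkt["dst_ip"] and pkt["src_port"] == pkt["dst_port"]

packets = [
    {"src_ip": "10.0.0.5", "src_port": 4242, "dst_ip": "10.0.0.9", "dst_port": 80},
    {"src_ip": "10.0.0.9", "src_port": 139,  "dst_ip": "10.0.0.9", "dst_port": 139},  # spoofed header
]

for pkt in packets:
    if is_land_attack(pkt):
        print("ALERT: LAND attack signature matched:", pkt)
```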
Signature-based is different than behavioral-based. A behavioral-based system will learn what is considered normal in your environment. And really, IDSes started as behavioral-based, host-based systems. They were developed and used within the military; the military was not as concerned about attacks as about users misusing their resources. But we've extended behavioral-based systems to look for attacks in network-based systems. So with behavioral-based, what'll happen is, you'll install this IDS in your environment, and it goes through a learning mode: it learns what the normal activities are for your environment -- the user activities, the types of protocols that are used, how much UDP traffic is used compared to TCP traffic. And after this learning mode, which is usually a couple of weeks, you have a profile. The rest of your traffic is compared against this profile, and anything that does not match the profile is considered an attack. Now, one benefit of behavioral-based is that it can detect new attacks, because a new attack is just something out of the norm; signature-based can only detect attacks that have already been identified and had signatures written for them. Now, the two types of behavioral-based IDS are statistical and anomaly-based. Statistical means that you have a certain threshold set, and you as a network administrator would do this configuration. In our example here, you may have 10 FTP requests over a ten-minute period, and that's fine; there could be that many requests within your whole environment, that's not a problem. But if you have 50 FTP requests within 10 minutes, that is out of the norm. That means something is going on -- somebody's trying to break into some type of FTP server that you have.
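A statistical threshold like that is essentially a sliding-window counter. A minimal sketch follows; the window size, limit and fabricated event stream are illustrative only.

```python
# A statistical threshold in miniature: alert if more than 10 FTP
# requests arrive within any 10-minute window. Timestamps are seconds.
from collections import deque

WINDOW, LIMIT = 600, 10          # 10 minutes, 10 requests

def make_detector():
    recent = deque()
    def observe(timestamp):
        recent.append(timestamp)
        while recent and recent[0] < timestamp - WINDOW:
            recent.popleft()     # drop events older than the window
        return len(recent) > LIMIT
    return observe

observe = make_detector()
for i, t in enumerate(range(0, 50 * 12, 12)):   # 50 requests, one every 12 seconds
    if observe(t):
        print(f"ALERT: more than {LIMIT} FTP requests in {WINDOW}s at t={t}s (request #{i + 1})")
        break
```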
So that's just one example; you'd set several different statistical thresholds to be able to determine if an attack is going on. And anomaly-based just means, again, that something does not match the historical values that were set when your IDS learned what's normal; if something is out of the norm, it's considered an anomaly. Now, what's important is that when you put your IDS in this learning mode, you make sure that your environment is pristine, because companies have put their IDS in learning mode while they were under attack. If that happens, the IDS thinks the attack traffic is normal, and will incorporate it into the profile. Now, this was pretty quick; I'm going through a very large domain, and there are a lot of things I didn't cover that are in the Access Control domain. But the crux of this domain is understanding the difference between identification, authentication, and authorization; the technologies and methods for each one of those; how they interact; and the mechanisms that can provide them -- mainly authentication is looked at. And we looked at cryptographic keys, memory cards, and smart cards. We didn't go into passwords and passphrases, or virtual passwords, but those are components you use for authentication. For single sign-on, you actually need to know specific characteristics about SESAME, and thin clients, and a lot more about Kerberos than I covered. The different models are hit pretty hard -- DAC, MAC, role-based, rule-based -- and we also go into intrusion detection systems in this domain. We look at different types of attacks that can happen against the different methods and technologies that are addressed in this domain. So, this is just a quick one-hour look at this domain; it's a very large domain, and it's very important for corporations to understand how to properly control access to the assets they need to protect.

Host: Thank you, Shon. This concludes class two of CISSP Essentials, Mastering the Common Body of Knowledge: Access Control. Be sure to visit www.searchsecurity.com/CISSPEssentials for additional class materials based on today's lesson, and to register for our next class, on cryptography. Thanks again to our sponsor, and thank you for joining us; have a great rest of the day.

Spotlight article: Domain 2, Access Control


Access controls enable the protection of security assets by restricting access to systems and data by users, applications and other systems. In the upcoming CISSP Essentials Security School video presentation, Domain 2, Access Control, featuring expert CISSP trainer Shon Harris, learn how access controls support the core security principles of confidentiality, integrity and availability by requiring subjects to positively identify themselves, prove they possess the appropriate credentials, and hold the necessary rights and privileges to obtain access to the target resource and its information.

Access control principles
The four key access control principles are as follows:
Identification: the process of a subject providing the first piece of a credential set.
Authentication: the act of verifying the identity of a subject requesting the use of a system, application, data, resource or network.
Authorization: the act of granting an authenticated subject access to an object.
Accountability: the obligations held by an identified individual who is responsible for the safeguarding of specific assets or for their supporting activities.
Credentials used in identification are discussed (e.g., user name, personal identification numbers, smart cards, digital signatures, etc.), as are authentication methods such as passwords and phrases, cryptographic keys and tokens. Once a subject is identified and authenticated, access control matrixes are typically used to determine whether the subject is authorized -- equipped with the appropriate rights or privileges -- for access to the target object. By using all three of these security controls, accountability for the use of the resource can be traced and therefore assured.

Access control administration and practices
Access control administration and practices depend greatly on the structure of the organization -- its technology infrastructure and workforce behavior. The concept of the trusted security domain is presented and a variety of approaches are explored. Centralized and decentralized (distributed) administration approaches are contrasted, and the increased challenges of the latter approach -- and of hybrid approaches -- are covered in brief. A list of recommended access control practices is included.

Access control models and technologies
Three types of access control models that dictate how subjects access objects are discussed. Security labeling is explored as part of the highly restrictive mandatory access control model, in stark contrast to the discretionary model, which allows the creator/owner of an object to grant access as she sees fit. The benefits of the role-based access control model, which provides access to resources based on profiles connected to a user's role in an organization, are presented. The (often confused) concept of lattice-based (sensitivity-level-based) access strategies is also included. The range of available technologies is explored as well: role-based (subject-oriented), rule-based (object-action-oriented), restricted interfaces (user-option-oriented), content-dependent controls, capability tables (subject-oriented), access control lists (object-oriented) and the combination of the latter two, access control matrixes.
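To make the identification-authentication-authorization-accountability sequence concrete, here is a minimal Python sketch. The user name, password, object name and rights are illustrative assumptions, and a real system would use a salted, slow password hash rather than plain SHA-256.

import hashlib, time

# Toy credential store, authorization matrix and audit trail (all illustrative).
USERS = {"jsmith": hashlib.sha256(b"Str0ng-Pass!").hexdigest()}
RIGHTS = {"jsmith": {"payroll.db": {"read"}}}
AUDIT_LOG = []

def request_access(user_id, password, obj, operation):
    identified = user_id in USERS                                  # identification
    authenticated = identified and (
        hashlib.sha256(password.encode()).hexdigest() == USERS[user_id]
    )                                                              # authentication
    authorized = authenticated and operation in RIGHTS.get(user_id, {}).get(obj, set())
    AUDIT_LOG.append((time.time(), user_id, obj, operation, authorized))  # accountability
    return authorized

print(request_access("jsmith", "Str0ng-Pass!", "payroll.db", "read"))   # True
print(request_access("jsmith", "Str0ng-Pass!", "payroll.db", "write"))  # False

The point is the ordering: each step gates the next, and every decision lands in the audit trail so accountability can be reconstructed later.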

Access control methods, types and techniques
The bulk of this domain focuses on the variety of access controls available -- their strengths and weaknesses, when to use them and the methods used to implement them. A "defense-in-depth" approach is taken, describing the various administrative, physical and technical controls that can be applied to the vulnerable technology layers of an information infrastructure. Administrative controls covered include policies and procedures, personnel controls (including separation and rotation of duties), supervisory structures, and security awareness training and testing. Physical controls cover topics such as network segregation, TEMPEST shielding, white noise masking, perimeter security, computer controls, work area separation, data backups and cabling. Technical (logical) controls topics include system access, network architecture, network access, encryption protocols, control zone definition and auditing. The specific controls useful to these areas are categorized according to the six types of access controls: preventive, detective, corrective, deterrent, recovery and compensating. Finally, emphasis is placed on the importance of protecting audit data and logging information.

A variety of access control methods are explored. Strong access control methods, such as biometrics (which include electronic imaging of physical attributes, as in fingerprint, hand, retinal and iris scans) and behavioral-based signatures (such as keyboard dynamics and voice print), are contrasted by their level of effectiveness and their current level of social acceptability. Authentication through password management is covered in detail, including the characteristics of strong passwords, cognitive passwords, responsible password management and policy, and restricting login attempts. Technologies useful in automating password administration -- such as password checkers, password generators and automated programs that manage password aging or limit logins -- are covered. Rigorous password methods, such as one-time passwords and token devices (both synchronous and asynchronous), are detailed, along with cryptographic keys (a.k.a. digital signatures), smart cards and memory cards.

Authorization is particularly challenging because of the variety of methods that are used simultaneously. Users can be restricted by physical access to resources required for access to desired information (as in restricting building access), by membership in access control groups whose rights of access are limited, by the access control lists applied to the target object itself, by time of day and by transaction type. This section provides strategies that can help reduce conflicts between these methods, including defaulting to no access, restricting access on a need-to-know basis and using single sign-on methods that manage permissions logically by reference. Single sign-on can be an effective and efficient means of controlling access within organizations. Approaches covered include scripting and the SESAME and Kerberos single sign-on systems, the latter currently used by the vast majority of organizations. Kerberos is covered in depth. Essentially a traffic cop for the transfer of messages between users and systems, it positively identifies a message sender and recipient, and dispenses cryptographic keys that uniquely bind a message to the transaction between them.
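As a small illustration of the "password checker" idea mentioned above, here is a minimal sketch. The specific policy (12-character minimum, mixed character classes) is an assumption for the example, not a requirement from the lesson.

import re

def check_password(candidate: str) -> list:
    # Return a list of policy violations; an empty list means the password passes.
    problems = []
    if len(candidate) < 12:
        problems.append("shorter than 12 characters")
    if not re.search(r"[a-z]", candidate):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", candidate):
        problems.append("no uppercase letter")
    if not re.search(r"\d", candidate):
        problems.append("no digit")
    if not re.search(r"[^\w\s]", candidate):
        problems.append("no special character")
    return problems

print(check_password("password"))        # several violations
print(check_password("c0rrect-H0rse!"))  # [] -- passes this policy

A production checker would also test candidates against dictionaries of common and previously breached passwords, which is where dictionary attacks draw their guesses.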

Access control threat monitoring
Ultimately, the purpose of access control is to protect assets from unauthorized use. Threats specific to access control include dictionary and brute-force attacks, which use software to guess valid passwords; war dialing, which scans phone lines for answering modems; and spoofing at login, which tricks users into entering their credentials into a fake login screen. Countermeasures for each are explored.

However, by far the most effective means of protecting against unauthorized access is operational control and monitoring. Strategies presented include implementing intrusion-detection technology (knowledge-, signature- or behavior-based, or statistical), embedding IDS network sensors, monitoring network traffic for aberrations, employing network sniffers, and using honeypots to lure intruders to decoy sites and systems and away from valuable assets.
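The lesson's FTP example -- 10 requests in 10 minutes is normal, 50 is not -- translates naturally into a sliding-window threshold sensor. The sketch below is a minimal illustration of that statistical approach; the window length, threshold and class name are assumptions for the example.

from collections import deque

WINDOW_SECONDS = 600   # the 10-minute window from the lesson
THRESHOLD = 50         # administrator-set threshold from the lesson

class FtpThresholdSensor:
    def __init__(self):
        self.events = deque()  # timestamps of recent FTP requests

    def observe(self, timestamp):
        self.events.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while self.events and timestamp - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        if len(self.events) > THRESHOLD:
            return "ALERT: FTP request rate exceeds profile"
        return "ok"

sensor = FtpThresholdSensor()
for i in range(60):                # 60 requests arriving within one minute
    status = sensor.observe(float(i))
print(status)                      # the alert fires once the count passes 50

A real statistical IDS tracks many such counters at once and compares each against the profile built during learning mode.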

CISSP Essentials training: Domain 3, Cryptography


Cryptography -- essentially defined as the transformation of clear text into an unreadable form -- is the method used to store and transmit messages safely so that only the intended recipient can read them. In this CISSP Essentials Security School lesson, Domain 3, Cryptography, featuring noted CISSP certification trainer Shon Harris, learn how cryptography, its components, methods and uses are employed in the enterprise to store and transmit messages safely. Before watching the special Domain 3, Cryptography video below, it's recommended that students first read the Class 3 Domain Spotlight article, which offers an overview of the history of cryptography and the many complex, imaginative approaches that are used to accomplish this task. Other key topics include background on cryptography's evolution, historical uses and current industry status; cryptographic components and their relationships; and cryptography methods and uses, including keys and algorithms, symmetric versus asymmetric approaches, PKI concepts and mechanisms, and hashing and its uses.

Spotlight article: Domain 3, Cryptography


Cryptography is defined as the transformation of clear text into an unreadable form. It is the predominant method used in the enterprise to store and transmit messages safely so that only the intended recipient can read them. This domain spotlight provides an overview of the history of cryptography and the many complex, imaginative approaches that are used in contemporary enterprise encryption.

Cryptography background
While systems for ensuring message secrecy have been around for millennia, modern-day information technology poses new and challenging problems for individuals, corporations and nations that require secrecy of their communications. Cryptography as a science has evolved rapidly over the last 50 years, producing new and more powerful methods that are well beyond the capability of humans unaided by computer technology. Early cipher methods based on substitution and transposition still form the basis of clear text-to-ciphertext translation, but the algorithms used in the transformation have become increasingly complex, aided by computer processing, which enables complexity well beyond human capability. The evolution of cryptography from its earliest known application is demonstrated, and the reader is provided with the foundation needed to understand the complex approaches in use today.

One way the strength of a cryptographic system is measured is by the strength of its underlying algorithm and the complexity of the methods applied to accomplish the end-to-end task. The more important way to comprehend and measure the strength of a cryptosystem is through its implementation. Today we have amazingly strong algorithms, but compromises still take place, mainly because when developers integrate algorithms into their code, they don't implement all of the necessary pieces properly, which leaves vulnerabilities. We have seen many examples of this, as when SSL was first released (it could be broken in two minutes) and with the Wired Equivalent Privacy (WEP) protocol, which can be broken in about 30 minutes depending upon the amount of traffic. Proper cryptography isn't just using a strong algorithm; it's understanding all of the pieces and parts involved in the process. In the past, messengers were used as the transmission mechanism, and encryption helped protect the message in case the messenger was captured. Today, the transmission mechanism has changed from human beings to packets carrying 0s and 1s passing through network cables or open airwaves. The messages are still encrypted in case an intruder captures the transmission mechanism (the packets) as they travel along their paths.

Cryptography components
The algorithm, a set of mathematical formulas, dictates how enciphering and deciphering take place. Many algorithms are publicly known and aren't the secret part of the encryption process; in fact, it's often said that secrecy of the algorithm is not something you should base your security on. The way an encryption algorithm works can be kept secret from the public, but many are publicly known and well understood. If the internal mechanisms of the algorithm aren't a secret, then something must be. The secret piece of using a well-known encryption algorithm is the key. The key is a value made up of a large sequence of random bits. Is it just any random number of bits crammed together? Not really. An algorithm contains a keyspace, which is the range of values that can be used to construct a key, and the key is made up of random values within that range. The larger the keyspace, the more values are available to represent different keys -- and the more random the keys are, the harder it is for intruders to figure them out. For example, if an algorithm allows a key length of 2 bits, the keyspace for that algorithm is 4, the total number of different keys that would be possible. That is not a very large keyspace, and it certainly would not take an attacker very long to find the correct key.
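The keyspace arithmetic generalizes directly: an n-bit key permits 2^n distinct keys. A quick sketch (the key lengths chosen are just illustrative):

# An n-bit key has 2**n possible values.
for bits in (2, 56, 128, 256):
    keyspace = 2 ** bits
    print(f"{bits:>3}-bit key -> keyspace of {keyspace:.3e} possible keys")

# A 2-bit key gives a keyspace of just 4 (00, 01, 10, 11), trivially
# brute-forced; every added bit doubles an attacker's work.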
In a cryptosystem that uses symmetric cryptography, both parties use the same key for encryption and decryption; the single key provides dual functionality, performing both operations. Symmetric keys are also called secret keys, because this type of encryption relies on each user to keep the key secret and properly protected. If an intruder were to get this key, the intruder could decrypt any intercepted message encrypted with it. Symmetric cryptography has several issues -- chiefly key distribution and scalability -- that were solved by using asymmetric and symmetric algorithms together. Here are some of the symmetric algorithms covered in the CISSP exam: Data Encryption Standard (DES), Triple DES (3DES), Blowfish, IDEA, RC4, RC5, RC6 and Advanced Encryption Standard (AES).

In symmetric key cryptography, a single secret key is used between entities, whereas in public key systems, each entity has different, asymmetric keys. The two asymmetric keys are mathematically related: if a message is encrypted by one key, the other key is required to decrypt it. In a public key system, the pair is made up of one public key and one private key. The public key can be known to everyone, and the private key must be known and used only by its owner. In the hybrid approach, the two technologies (symmetric and asymmetric) are used in a complementary manner, with each performing a different function: a symmetric algorithm creates keys used for encrypting bulk data, and an asymmetric algorithm creates keys used for automated key distribution. Here are some of the asymmetric algorithms covered in the CISSP exam: RSA, Elliptic Curve Cryptosystem (ECC), Diffie-Hellman, El Gamal, Digital Signature Algorithm (DSA) and Knapsack.

Cryptography methods and uses
Public Key Infrastructure (PKI) consists of programs, data formats, procedures, communication protocols, security policies and public key cryptographic mechanisms working in a comprehensive manner to enable a wide range of dispersed people to communicate in a secure and predictable fashion. In other words, a PKI establishes a level of trust within an environment. PKI is an ISO authentication framework that uses public key cryptography and the X.509 standard. The framework was set up to enable authentication to happen across different networks and the Internet. Particular protocols and algorithms aren't specified, which is why PKI is called a framework and not a specific technology. The CISSP exam covers the roles and responsibilities of many of the components of a PKI: registration authority, certification authority, certificate repository, certificate revocation list and more.

A one-time pad is a perfect encryption scheme: it is unbreakable by brute force, and each pad is used exactly once. A one-time pad uses a truly non-repeating set of random bits that are combined bitwise with the message using the binary XOR function to generate ciphertext. The random pad is the same size as the message and is used only once. Because the entire pad is random and as long as the message, the scheme is said to be unbreakable even with infinite resources: each bit of the message is encrypted by a non-repeating pattern of bits. The sender encrypts the message and then destroys the one-time pad; after the receiver decrypts the message, he destroys his copy of the one-time pad.
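Because XOR is its own inverse, the one-time pad mechanics fit in a few lines. A minimal sketch (os.urandom stands in here for a true random source, which is itself one of the scheme's hard requirements):

import os

def xor_bytes(data, pad):
    # XOR each message bit with the corresponding pad bit.
    return bytes(b ^ p for b, p in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))           # pad is random and as long as the message
ciphertext = xor_bytes(message, pad)
recovered = xor_bytes(ciphertext, pad)   # XOR with the same pad restores the plaintext
assert recovered == message

The scheme's perfection depends entirely on the pad being truly random, as long as the message, kept secret and never reused -- which is why distributing and protecting pads makes one-time pads impractical for most uses.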

Secure Sockets Layer (SSL) protects a communication channel instead of individual messages. It uses public key encryption and provides data encryption, server authentication, message integrity and optional client authentication. The Internet Protocol Security (IPsec) protocol suite is a method of setting up a secure channel for protected data exchange between two devices. The devices that share this secure channel can be two servers, two routers, a workstation and a server, or two gateways between different networks. IPsec is a widely accepted standard for providing network-layer protection. It can be more flexible and less expensive than application- and link-layer encryption methods, and it has strong encryption and authentication methods. Although it can be used to enable communication between two computers, it's usually used to establish virtual private networks (VPNs) among networks across the Internet. A one-way hash is a function that takes a variable-length string -- a message -- and produces a fixed-length value called a hash value, which represents the original data. A hash value is also called a message digest. This technology is used to ensure the integrity of data and packets during storage or transmission. The CISSP exam covers these technologies and protocols in much more depth, as well as many more standards (steganography, message authentication codes, Secure Electronic Transaction, SSH).
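A minimal sketch of the one-way hash in practice, using SHA-256 from Python's standard library (the message contents are illustrative):

import hashlib

message = b"Transfer $100 to account 42"
digest = hashlib.sha256(message).hexdigest()       # fixed-length message digest

# Any change to the message yields a completely different digest,
# which is what makes the hash useful for integrity checking.
tampered = b"Transfer $900 to account 42"
print(digest)
print(hashlib.sha256(tampered).hexdigest())
print(digest == hashlib.sha256(message).hexdigest())  # True: message intact

Note that a bare hash only detects accidental or unauthenticated modification; binding the digest to a sender requires a message authentication code or a digital signature.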

Domain 4, Security Models and Architecture


Host: Welcome to SearchSecurity.com's CISSP Essentials: Mastering The Common Body of Knowledge. This is the fourth in a series of 10 classes exploring the fundamental concepts, technologies and practices of information system security corresponding to the CISSP's Common Body of Knowledge. In our previous class we examined cryptography; today lecturer Shon Harris will give a presentation on two fundamental concepts in computer and information security: the security model, which outlines how security is to be implemented, and the architecture of a security system, the framework and structure of a system. Shon Harris is a CISSP, MCSE and president of Logical Security, a firm specializing in security education and training. Logical Security provides training for corporations, individuals, government agencies and many organizations. You can visit Logical Security at www.logicalsecurity.com. Shon is also a security consultant, a former engineer in the Air Force's Information Warfare unit and an established author. She has written two best-selling CISSP books, including CISSP All-in-One Exam Guide, and was a contributing author to the book Hacker's Challenge. Shon is currently finishing her newest book, Gray Hat Hacking: The Ethical Hacker's Handbook. Thank you for joining us today, Shon. Shon: Thank you for having me.

Host: Before we get started, I'd like to point out several resources that supplement today's presentation. On your screen, the first link points to the library of our CISSP Essentials classes, where you can attend previous classes and register to attend future classes as they become available. The second link on your screen allows you to test what you've learned with a helpful practice quiz on today's materials. And finally, you'll find a link to the Class Four spotlight, which features more detailed information on this domain. And now we're ready to get started. It's all yours, Shon.

Shon: Thank you. Thank you for joining us today. Today I will be looking at security models and architecture, and what this domain really looks at is the inside of an operating system or an application: the components of the computer itself, the components of the software, and how all the components work together to properly protect the environment for the user. What's really important is that all the components work in concert to provide a certain level of protection, because the complexity involved -- the operating system, the applications, and how all these pieces and parts communicate with the motherboard, the peripheral devices and the CPU -- has to be properly architected and implemented. And we have certain models we can use to help direct us in choosing the right architecture and developing it properly. So in this domain we start with the basic components of a computer, looking at the CPU, address buses and data buses, input/output devices, interrupts and how they all communicate. Then we look at the differences between threads and processes, and how multitasking, multiprogramming and multiprocessing work. We look at the protective mechanisms within every operating system -- the protection ring architecture, process isolation, the security kernel and reference monitor -- and virtual machines. Then we go into the actual models, and there's a whole range of models that can be used: the [inaudible] model, the Bell-LaPadula model, the [inaudible] model, the Clark-Wilson model, and we'll quickly talk about where these models come into play. Then we look at the different evaluation criteria that have been used around the world for quite some time and their purpose, and certification and accreditation. This domain also goes through a lot of different types of attacks that could happen against the components that have been developed to properly protect the overall system. So like I said, this domain starts with the basic pieces of every computer: the processor and the different types of memory. Memory protection is huge in an operating system, to ensure that data has the correct integrity and the correct level of confidentiality, and that the operating system environment is stable for users and applications to work in.
The core of the operating system, the kernel, has a memory map or memory manager, and you need to understand the role of this memory manager. Basically, processes request segments of memory and the manager allocates them, but the manager also acts as an access control mechanism that ensures the right processes access the right memory segments, that they don't step on each other's toes and corrupt each other's data, and that covert channels aren't allowed to take place -- which we'll look at at the end of this presentation. Then there are the different types of memory within a computer system -- cache versus real memory -- and the storage types, secondary and primary storage. You also need the difference between a process and a thread: a thread is an instruction set that is dynamically created and destroyed by a process. All applications run as processes, and you need to understand some of the security issues around how processes interact with each other, because if the operating system does not properly control how processes communicate, that directly affects the stability and security of the whole system. And we also go through the different types of languages and their generations.

So we start off with the CPU itself and its components, and you need to understand how those components actually work together. Most CPUs today are not multitasking; operating systems are multitasking, meaning they can carry out several different tasks at one time. What that really means is that you can have several processes doing their thing at one time, and the operating system can deal with it -- it can respond to all of these process requests. The CPU, though, deals with one instruction set at a time, so the CPU is time-sliced and shared between all of the processes in an operating system, and this has to be controlled properly. We have software and hardware interrupts: a peripheral device is assigned a hardware interrupt, and if it needs to communicate with the CPU, it has to wait until its hardware interrupt is called, and then its instructions go over to the CPU. Now, the CPU has its own type of storage, called registers, which are temporary holding places. The instructions don't really live in the registers; what's held in the registers are the addresses of where the instructions are held in memory. So if a printer needs to communicate with the CPU, once its interrupt is called, the address of where its instructions and data are held in memory goes into the registers. The control unit is what actually controls the time slicing for the CPU: since so many things are competing for the CPU's time, the control unit is basically the traffic cop, saying, "Okay, your instructions and data will now go over to the CPU." And when that happens, the ALU does its work. It carries out the mathematical and logical functions, because the operating system and all applications are just sets of instructions. The instructions have empty variables, and this is where data gets put. So the instruction basically tells the CPU, "This is what I need you to do with this data." The instruction set and the data go over to the CPU, it does its work, and it sends its response back to the requesting application's memory address.

Now, I said that memory management is very important. Memory management is responsible not only for ensuring that processes stay within their memory segments, but also for the memory segments that are shared -- we have shared resources within an operating system, and that has to be controlled tightly. You need to understand how paging works and how virtual memory works. Virtual memory means the operating system is basically fooling itself into thinking it has a lot more memory than it actually does. It uses a part of your hard drive, the page file: once your RAM gets filled up, the system moves data down to your hard drive. When an application needs to read data that's now on the hard drive, that's called a page fault, and the data is moved from the hard drive back up into memory so your requesting application can interact with it.
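Here is a toy model of that page-fault behavior, assuming a three-frame RAM and illustrative page numbers; real memory managers are of course far more elaborate.

RAM_CAPACITY = 3
ram, page_file, page_faults = {}, {}, 0

def touch(page):
    global page_faults
    if page in ram:
        return ram[page]
    if page in page_file:            # page fault: bring the page back from disk
        page_faults += 1
        ram[page] = page_file.pop(page)
    else:
        ram[page] = f"data-{page}"   # first use: allocate the page in RAM
    if len(ram) > RAM_CAPACITY:      # RAM full: move the oldest page to the page file
        oldest = next(iter(ram))
        page_file[oldest] = ram.pop(oldest)
    return ram[page]

for p in [1, 2, 3, 4, 1]:            # page 1 is evicted, then faulted back in
    touch(p)
print(page_faults)                   # 1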
And as our operating systems have become better -- as we have created more stable operating systems over time -- we don't have as many memory problems as we did in the past. You may remember that in Windows 95, the 9X family, we had certain types of errors, blue screens and fatal errors, because those systems had inferior memory management compared to the Windows NT and 2000 family. That mainly has to do with the fact that the 9X family had to be backwards compatible with some of the 16-bit applications, where NT and 2000 didn't.

Now, processes can work in different states, and these are not all the states shown here; there are others. The operating system has several different processes -- your applications and utilities all work as processes. A process can be stopped; how you stop it depends on what operating system you're using. If you're using a Linux or Unix system, you'd use the kill command, a utility that sends a signal over to the process. In Windows, you'd use the Task Manager. If a process is in the waiting state, that means it's waiting for its interrupt to be called. As I said, we have hardware interrupts and software interrupts; the process has to wait for its interrupt to be called so that it can send its instructions and data over to the CPU. If a process is running, that means it's waiting for the CPU to finish its task and send back the result of what was sent over.

Now, we've gone through the evolutionary path of the different languages. The CPU only understands binary, ones and zeros, so everything ultimately ends up in a binary language so it can be processed by the CPU. But it's not very easy for human beings to program in bunches of ones and zeros; our brains don't work well with that type of representation. So the first thing we came up with was assembly language, which is basically a hexadecimal representation -- a different type of encoding. Assembly language works at a very low level of the operating system, versus a higher-level language. We have several generations of higher-level languages, but the benefit of a high-level language is that if I'm going to write a program in it, I don't have to be concerned with a lot of the details of the system itself. I don't need to understand all the memory addresses and how to move data from one memory location to the next, but if I'm working at the assembly level, I do have to understand and work with that detailed type of information. So a high-level language allows me to create much more powerful programs, because I only have to be concerned with the complexity of my application, versus understanding the complexity of the actual environment. In no way does that mean we don't use assembly language anymore; we use it to develop things like drivers for our operating systems. A high-level language can be compiled or interpreted. An interpreted language is a script: individual lines are turned into machine language as they run. Compiled means all of the source code is turned into object code ahead of time. Source code is what the programmer writes out; it gets compiled into object code, which is for a specific platform -- a specific CPU and operating system -- and the object code gets converted into binary machine language when processing actually needs to take place.

Now, a majority of this domain actually studies the different protection mechanisms within every operating system we use today. All operating systems work on a type of architecture where trust levels can be assigned to the individual processes and the components within the software.
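To connect the kill command to the process discussion, here is a small Unix-only sketch; the sleep child simply stands in for any long-running process.

import os, signal, subprocess

# Start a child process to act on.
child = subprocess.Popen(["sleep", "60"])

# The kill command works the same way: it sends a signal to a process.
# SIGTERM politely asks the process to stop.
os.kill(child.pid, signal.SIGTERM)
child.wait()
print("child exited with", child.returncode)  # negative value = terminated by that signal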
The higher the trust of a process or a component, the more that process is allowed to carry out. Memory segmentation is very important, which I covered briefly, but you really need to understand it in more depth for the exam.

Layering and data hiding have to do with low-level processes not being able to communicate with high-level processes. In most situations we don't want less trusted, lower-level processes to be able to communicate directly with high-level processes, because we don't want them to corrupt those processes. What happens in an operating system is that when low-level processes need to communicate with operating system services or with the kernel, the request gets passed off from something less trusted to something more trusted. I'll talk a little more about that when we get to protection rings. Virtual machines: you need to understand how they work. A virtual machine is just a simulated environment that is created for different reasons. We have virtual machines created for applications that can't work in a certain type of environment; we create virtual machines as protection mechanisms, as when we're downloading Java applets and the Java Virtual Machine comes up with a sandbox; and we use virtual machines with VMware and similar tools when we want to run more than one operating system on the same physical computer.

Now, protection rings are very important, and they're used in every operating system. I find that a lot of people don't really understand what protection rings are or how they come into play -- a lot of people just memorize the material -- but it's the CPU that actually sets the architecture for the protection rings of a system. Your processor has a lot of microcode in it, and your operating system has to be developed to work with that CPU and within that architecture; that's why some operating systems can't work on certain processors -- they're working within two different architectures and can't communicate back and forth. So it's really the CPU that's driving this protection ring idea, and you can conceptually think about protection rings as individual rings, as we're showing here: rings zero, one, two and three. The less trusted processes run in a higher-numbered ring. Our applications are less trusted, so they run in ring three; the most trusted components run in ring zero, and that's where the kernel of the operating system works. The rest of the operating system works in ring one, and possibly some utilities work in ring two. The reason for this setup is that when processes need to communicate with the CPU, the CPU runs in a specific state or mode. The processor can run in user mode or privileged mode, and which one depends on the trust level of the process sending the request over to the CPU. If it's a less trusted process -- an application that runs in ring three -- the CPU executes those instructions in user mode, which really reduces the amount of damage those instructions can carry out: something executed in user mode is contained, and the instructions can't carry out more critical tasks or do certain types of damage. If a more trusted process sends instructions over to the CPU, the CPU works in privileged mode, which allows the instructions to carry out more critical, and possibly dangerous, activities. The protection rings also provide active control of how processes communicate with each other: a process in ring 3 cannot directly talk to something that's in ring 1.
That's why we have operating system services. A less trusted process sends its request to an operating system service; the service carries out the request and brings the result back to the less trusted process. That's how your applications communicate through the operating system with the core components of the system. All operating systems use this type of architecture; different operating systems just use different numbers of rings, and they may put different items in the different rings.

Now, TCB -- the Trusted Computing Base -- is a pretty important concept, and it refers to all of the software, firmware and hardware components that are used to protect an individual system. This term actually came from the Orange book, which is an evaluation criteria; we'll talk about the Orange book shortly. Let's say I develop a product and I need it to be evaluated so it gets some type of assurance rating, like a C2 or B3 rating. I have a product, and I'm going to send it off to a third party so they'll test it and assign it a rating, and then my customer base will look at this rating and understand the level of protection my product provides. The Trusted Computing Base is really just a concept covering all the components that provide protection for the system; what's important is that these are the components that are actually tested during assurance testing. When I send my product off to be tested under one of these evaluation criteria, it's these protection mechanisms that are evaluated, because the assurance rating expresses the level of protection my product actually provides. The components -- software, hardware, firmware -- are spread all over a computer system; it's not as if they all live in one little place in the operating system. But conceptually putting them in one place, basically putting everything in one pot, says: these are the things that will be tested when the product needs some type of assurance rating.

Now, every operating system also uses a security kernel and a reference monitor. A reference monitor is more of a concept than actual code; it's referred to as an abstract machine. The reference monitor concept outlines how subjects and objects can communicate within an operating system. A subject is an active entity that is trying to access something; an object is a passive entity that is being accessed. It's extremely important that within the operating system, the types of access taking place -- and the operations carried out after access is allowed -- are properly controlled. The reference monitor is basically the rules of how subjects and objects can communicate within an operating system, and the security kernel is the enforcer of those rules. The security kernel is a portion of the actual kernel of an operating system, and it carries out the rules of access. Now, you as a network administrator can configure some of the access rules: here we have Katie has no access, Jane has full control and Darren has write access. That's just a very short list of all the things the security kernel is doing, because while you're able to control which users interact with which objects, the security kernel also has to make sure it's properly controlling how all the processes communicate within the operating system. The developers of an operating system have to understand its complexity, and how, when applications are put in there, all of those things communicate in a secure mode.
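Here is a minimal sketch of the rule enforcement just described, using the Katie/Jane/Darren rules from the lecture; the object name and rights sets are illustrative, and a real security kernel mediates far more than user-to-file access.

# A toy "security kernel" enforcing the access rules from the lecture.
ACL = {
    "report.txt": {
        "Katie": set(),                        # no access
        "Jane": {"read", "write", "delete"},   # full control
        "Darren": {"write"},                   # write access
    }
}

def reference_check(subject, operation, obj):
    # Mediate every access attempt; unknown subjects/objects default to no access.
    allowed = ACL.get(obj, {}).get(subject, set())
    return operation in allowed

for subject, op in [("Katie", "read"), ("Jane", "delete"), ("Darren", "write")]:
    verdict = "granted" if reference_check(subject, op, "report.txt") else "denied"
    print(f"{subject} {op} report.txt: {verdict}")

Defaulting to no access when no rule matches is the same fail-safe principle recommended in the Access Control domain.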
This brings us to why we even have models. I find that a lot of my students don't like these models because they're not familiar with them. You can be a security professional for 20 years and never have heard of one of these models, so a lot of people feel, "I don't need to know anything about this; this is wasting my time." I don't agree. If you get a CIS degree, or a higher degree like a master's or doctorate in computer security, you're going to cover a lot of these models, because they are important. Again, what I find is that a lot of people just memorize the necessary things about the models so they can dump them onto the CISSP exam, which is really too bad. It's better to understand not only the characteristics of the components covered on this exam, but also their place in the world -- because if you understand where everything fits, you have a much bigger and broader understanding of information security, and a better understanding of how it works in depth.

So we have all of these different models, and they are theoretical, conceptual in nature. There are several different types of them because different vendors have different security requirements for their products. If I'm going to develop a firewall, I'm going to have very different security needs than if I'm going to develop a product for online banking. Which model you choose depends on the type and level of security and functionality you need within your product. Think about it: how does a vendor know what level of protection they need to provide? They know it by the customer base they're going after. I have to understand my customer base and the level of protection that customer base is going to require of my product, and then I need to find a model that will help me meet those needs. So instead of just having all the programmers start writing code without any clear direction, I can start with a model, and what these models do is outline: if you follow these specific rules, then this level of protection will be provided. The model starts at the conceptual level, even before you're in the design phase of your product. You choose the right model, you go into your design phase, you go into your specification phase, and then you start actually programming.

Now, there are several different models. One that gets hit often is the Bell-LaPadula model, which was developed in the '70s. The United States military was keeping more and more of its secrets -- its confidential information -- in time-shared mainframes, and in the military not everybody can access all the information; access is based on your clearance and the classification of the data. So basically the United States government said, okay, how in the world do we know that the operating systems on these mainframes are making sure people only access what they're supposed to? This is why the Bell-LaPadula model was developed: it provides the structure for any system that is going to provide MAC protection -- mandatory access control. Mandatory access control means the system decides whether a subject can access an object by comparing the subject's clearance with the object's classification level. The Bell-LaPadula model really contains all the rules: if you want to create a MAC operating system that provides this level of protection, these are the rules you need to put in place. Bell-LaPadula was the first mathematical model; it incorporates the information flow and state machine models, and it uses different rules you need to know about, such as the star property rule and the simple security rule.
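The two rules just named reduce to simple comparisons on security levels. A conceptual sketch -- the level names and numbering are illustrative, and real Bell-LaPadula systems also carry compartments/categories alongside the levels:

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance, object_classification):
    # Simple security rule: no read up.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance, object_classification):
    # Star (*) property: no write down.
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

print(can_read("secret", "top secret"))   # False: reading up is blocked
print(can_write("top secret", "secret"))  # False: writing down is blocked

Together the two rules keep information from flowing from higher classifications down to lower ones, which is exactly the confidentiality goal the model was built for.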
And really, the Bell-LaPadula model set the stage for how all of our MAC-based systems were going to be developed, and it affected us even after that, because the Orange book was developed to evaluate MAC systems -- systems that were built on the Bell-LaPadula model -- and we expanded from there. I'll talk about the Orange book in just a minute. There are several different models, of course; I don't have time to go through all of them, and even for Bell-LaPadula you need to know more details than what I'm giving here. But a newer model is the Brewer-Nash model, also referred to as the Chinese Wall, and what this model does is try to address conflicts of interest in specific industries. For example, let's say I'm a stockbroker. There could be a conflict of interest if all of you are my customers and I go into each of your accounts to figure out whether you're selling or buying, and then pass that information to another one of my customers: "Look, everybody's selling this, so you should probably go ahead and dump it." That's a conflict of interest. So instead of just relying on humans to do the right thing, there is software that can try to enforce that these types of things don't happen. What Brewer-Nash allows for is dynamically changing access controls: if you're all my customers and I go into one of your accounts, I now can't access anybody else's account until I come out of that system. Whatever activities I'm doing on your account, I can't go check three or four other accounts and then come back and make a change to your account. Another example: let's say you and I work for the same marketing company, and our company's customers are financial institutions -- say your customer is Bank One and my customer is Citibank. When I'm working with my customer's information, I should not be able to go look at your customer's information, which could be held on the same system, because our customers are competitors, and I shouldn't be able to use your information to one-up you. Those are some examples of where the Brewer-Nash model could be used and how it tries to put controls in place to ensure that users cannot carry out things that would be a conflict of interest. What's important to realize is that these are just theoretical models -- high-level direction and instruction. They do not tell any vendor exactly how to program anything; it's up to the vendor to meet the requirements and objectives laid out in the model.

Now, this domain also goes through the different evaluation criteria. The Orange book has been hit pretty hard on the exam; although the Orange book is being replaced by the Common Criteria, that doesn't mean it isn't important. The Orange book has been used for 20 straight years, and what it is, is a set of ways to actually test a product. I'm a vendor; I find out what level of protection my customer base needs and what they're going to require of my product, so I need to know how to meet a certain level. The Orange book uses a hierarchical rating system from A to D, where "A" is the highest you can get. From my understanding, in the commercial sector there is only one "A" system -- I think Boeing uses it -- and all the other "A's" are used in the government sector, where they're keeping extremely secret information. We're most familiar with products that get a "C" rating, because they're using the DAC model, the Discretionary Access Control model.
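Stepping back to the Brewer-Nash model for a moment, the dynamically changing access control it describes can be sketched in a few lines; the conflict class (Bank One vs. Citibank, as in the lecture example) and the subject name are illustrative.

# Toy Chinese Wall (Brewer-Nash) check: once a consultant touches one
# company in a conflict-of-interest class, competitors in that class
# become inaccessible.
CONFLICT_CLASSES = [{"Bank One", "Citibank"}]
history = {}  # subject -> set of companies already accessed

def access(subject, company):
    for conflict_class in CONFLICT_CLASSES:
        if company in conflict_class:
            touched = history.get(subject, set()) & conflict_class
            if touched and company not in touched:
                return False  # would create a conflict of interest
    history.setdefault(subject, set()).add(company)
    return True

print(access("shon", "Citibank"))   # True: first access in the class
print(access("shon", "Bank One"))   # False: competitor is now walled off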
So let's say I'm a vendor and I know my customer base is going to require a B2 rating of my product; I need to know how to achieve that B2. That's what the criteria can be used for, so that vendors know how to achieve a B2 -- and the criteria is also used by the evaluators, because for each one of these ratings there are certain types of tests and certain things that need to take place before a product gets the stamp of one of these ratings. The criteria spells out all of those procedures: something that is going to get a "C" rating won't go through the same scrutiny and types of tests as something getting a "B" or an "A" rating. The Orange book was developed 20 years ago, specifically to evaluate the operating systems built upon the Bell-LaPadula model and to determine what level of protection they provide. Now, a lot of literature points out the downfalls of the Orange book. It only covers stand-alone systems; it doesn't cover networking; and it doesn't cover integrity or availability, only confidentiality. You can't use it to evaluate networks, software or databases, or a whole range of other things. So what happened -- and the reason it's called the Orange book is simply that the cover was orange -- is that they had to come up with a whole bunch of other books to address all the other components of a distributed and diverse environment. Each of these books had a different cover color, and that's where the rainbow series came from. For the first handful of colors it made sense, but they had to keep coming up with more and more books, and now there's yellow and neon yellow and light yellow, green and light green and dark green -- it got out of hand. The Orange book was really only developed to evaluate operating systems, but we used it for a lot of other things; we stretched it and tried to use it for too much. There are other criteria you need to know about for the exam: ITSEC, the Canadians' own evaluation criteria and the Federal Criteria. The criteria most used today, though, is the Common Criteria, and the goal for all of these has been to arrive at one criteria that would be accepted worldwide. For quite some time -- and still a little bit today -- different countries had their own criteria, and that's frustrating, especially for vendors trying to sell their products all over the world who have to get them tested against the different criteria. Plus, we want to get all the countries on the same level of information security, so our standards have to work across countries. The Common Criteria was developed and seems to be the best candidate for accomplishing these goals.

The Common Criteria starts off with a protection profile, which describes a real-world need for some type of product that has not been developed yet -- and anybody can write a protection profile. I can say, "I have this security issue, and I can't find a product to help me with it, so I'll write a protection profile." There are several protection profiles posted, and vendors will go look at them and determine whether to build a product that maps to what is in a protection profile -- they'll choose one, then look at the market analysis, the demand and what's required to build the product. So let's say I'm a vendor and I've chosen one of these protection profiles and I'm going to build a product to meet it; that product is referred to as the TOE, the Target of Evaluation.
Now I'm the vendor, and I'm going to write up a security target. The security target explains how my product maps to the necessary requirements to achieve a certain assurance rating. Each of these evaluation criteria has its own assurance ratings: we looked at the Orange book, which uses A through D; ITSEC, which we didn't look at, uses "E" and "F" ratings; and the Common Criteria uses Evaluation Assurance Levels (EALs).

So in the security target, I as the vendor write up exactly how my product meets the requirements of the assurance level I'm going after, and how it meets the real-world needs outlined in the protection profile. The protection profile outlines the real-world needs; the product is the Target of Evaluation, the thing that will be evaluated; and the security target describes how that product provides the level of protection. Then it goes into the evaluation process, and the EAL rating being pursued determines the types of tests and the scrutiny the evaluators will apply. The product goes through its testing, achieves a specific EAL rating, and is then posted on an Evaluated Products List (EPL). You can find these lists on the Internet for any of the different evaluation criteria; if you're curious about the assurance ratings different products have achieved, you can go look at an EPL.

Now, this domain lists a lot of different types of threats, and it depends on the study resource you're using for the CISSP exam where these threats and attacks appear -- some are in one domain, some in another -- but here I've clumped a lot of them together. You need to know how these attacks work, the countermeasures for them, and maybe some actual examples. Back doors are not too hard: somebody compromises your system and installs a back door, which really means there is a service listening on a specific port so they can enter your system anytime they want, without you knowing it, and without going through any access control mechanisms. Timing attacks: there are different types of timing attacks you need to be aware of. Race conditions: race conditions have to do with the sequence of processes working on one piece of data. A race condition is not just an attack; it's a concern within any type of programming, because process one has to carry out its instructions before process two does -- if process two carries out its instructions before process one, the result is different. The security concern with a race condition is that if I'm an attacker and I can figure out how to get process two to happen before process one, I control the result. What that often comes down to is authentication and authorization being carried out as two different steps: process one does authentication -- are you who you say you are? -- and process two does authorization -- okay, go ahead and access the resource you're trying to access. If I can get that second process to execute before the first, I can access something and skip the whole authentication piece, without ever providing the right credentials. You'll also need to know about buffer overflows, which happen when too much data is accepted, and you'll need to understand what bounds checking is and how it can be used to ensure that buffer overflows don't happen.
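A minimal illustration of bounds checking follows; the buffer size and function name are illustrative. In a memory-unsafe language like C, omitting this kind of check is precisely what lets a buffer overflow occur.

BUFFER_SIZE = 16

def copy_into_buffer(data: bytes) -> bytearray:
    # Bounds checking: verify the input fits before copying it. Skipping
    # this check is what allows "too much data being accepted."
    if len(data) > BUFFER_SIZE:
        raise ValueError(f"{len(data)} bytes exceeds the {BUFFER_SIZE}-byte buffer")
    buffer = bytearray(BUFFER_SIZE)
    buffer[:len(data)] = data
    return buffer

print(copy_into_buffer(b"hello"))     # fits: copied safely
try:
    copy_into_buffer(b"A" * 64)       # too big: rejected instead of overflowing
except ValueError as err:
    print("rejected:", err)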
So there are a lot of different attacks covered on the CISSP exam that you need to know -- again, not just how they work, but also the countermeasures. Now, since this domain looks mainly at the components within an operating system, covert channels are definitely covered here. A covert channel just means that somebody is using a channel for communication in a way it was not developed for. Before looking at an example inside the operating system, let's take just a minute and look outside of it. We could say that terrorist cells within the United States communicate through covert channels. For example, everybody in a terrorist cell knows to go to a certain website, say four times a day, and check a certain graphic on the site. If the graphic changes, that means something to the cell, and they know to go on to step two. That's covert in nature: you're using something for communication purposes that wasn't developed for communication. Overt channels, by contrast, mean you're using communication channels in the proper way. Within operating systems, there are two main types of covert channels: timing and storage. A covert storage channel means that one process writes some type of data to a shared medium that a second process can come and read. Say you and I are not supposed to be communicating: I'm working at top secret and you're working at secret, so I should not be writing data down to anybody at the secret level, because I could be sharing information I'm not supposed to. But if I figure out how to get my process to write to, say, a page file or some other resource we share, I'll write to it and your process will come and read it -- that's a covert storage channel. A covert timing channel is a little different: one process modulates its use of a resource for another process to interpret. You can think of it as a kind of Morse code between processes. If I figure out how to drive the CPU cycles up for a certain length of time and your process watches for that, it tells your process something. Or say my process writes to the hard drive 30 times within 30 seconds, and your process just watches for that pattern. Those are just examples -- there are a lot of examples of covert channels -- and this domain covers how software is supposed to be developed from the ground up: you start with models, depending on the level of protection you're trying to provide, then you go into the proper design phase, the specification phase and the programming phase -- how to secure a program and how to test for a lot of the issues in our software today -- and then the product goes through evaluation.
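The "Morse code between processes" idea can be sketched directly. This toy covert timing channel is purely illustrative: the sender modulates delays between observable events, and the receiver decodes short versus long gaps as bits.

import time

def send(bits, short=0.05, long=0.15):
    # Record event timestamps; the gaps between them carry the message.
    stamps = [time.monotonic()]
    for b in bits:
        time.sleep(long if b else short)
        stamps.append(time.monotonic())
    return stamps  # stand-in for any externally observable resource usage

def receive(stamps, threshold=0.1):
    return [1 if (b - a) > threshold else 0 for a, b in zip(stamps, stamps[1:])]

stamps = send([1, 0, 1, 1, 0])
print(receive(stamps))  # [1, 0, 1, 1, 0]

No data is ever written to a shared object; the information leaks entirely through timing, which is why such channels are hard to detect and to close.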
Now tumbling is different. When you're away from home and you go to make a call, the ESN and MIN numbers have to be sent back to your home station to verify that they're valid before the call is allowed. But making that whole process go from base station to base station back to your home location just to allow one call takes an unacceptable amount of time. So telecommunication companies will let that first call go through even if the numbers aren't valid; if the numbers turn out to be invalid, you can't make a second call. What tumbling does is change the ESN and MIN for each call, so that "first" call always goes through. Tumbling isn't as effective as it used to be, mainly because telecommunications companies and providers are now able to do that authentication piece much more quickly than before.
So again, the crux of this domain is really understanding a computer system from the bottom up: looking at the CPU components, the different buses we didn't talk about, memory addresses and the different types of memory; then getting into the different types of languages, compiled versus interpreted; then looking at the security components within an operating system -- what falls within the TCB -- and the protection mechanisms. It also looks at a lot of different models, and the models really tell you how to develop a product to provide the level of protection you're after. So it goes through the life cycle of the development of a product and into the evaluation process. It depends on what criteria are used, but today it's mostly the Common Criteria, so you need to understand the components of the criteria, what the different ratings mean, the certification and accreditation process, and a lot of the different attacks that can take place against the components that have been developed specifically to protect the system overall. This is a very good domain for understanding, from the inside out, how software and operating systems are built to protect you, your applications and your data. Host: Thank you, Shon. This concludes Class Four of CISSP Essentials: Mastering the Common Body of Knowledge, Security Architecture and Models. Be sure to visit www.searchsecurity.com/cisspessentials for additional class materials based on today's lesson and to register for our next class on telecommunications and networking. Thanks again to our sponsor and thank you for joining us. Have a great rest of the day.

Spotlight article: Domain 4, Security Models and Architecture


As computers and networks have become more complex, so too have the approaches evolved for securing them. In this CISSP Essentials Security School lesson, expert Shon Harris investigates the framework and structures that make up typical computer systems. This spotlight article sketches the evolution of security models and evaluation methods as they have struggled to keep pace with changing technology needs. The key topics in Domain 4, Security Models and Architecture, are as follows:

Computer and security architecture: The framework and structure of a system and how security can be implemented.

Security models and modes: The symbolic representations of policy that map the objectives of the policy makers to a set of rules which computer systems must follow under various system conditions.

System evaluation, certification and accreditation: Methods used to examine the security-relevant parts of a system (e.g. the reference monitor, access control and kernel protection mechanisms) and how certification and accreditation are confirmed.

System security threats: Common vulnerabilities specific to system security architecture.


Computer and system architecture

This section explores the components that make up computer systems and how they must be handled to provide optimal security policy enforcement regardless of inputs, system run states or conditions. A detailed discussion of the CPU is provided, including how requests are managed through it and how buffer overflows can take advantage of poorly protected memory. Input/output requests, the channels that are used, and how resources are allocated, committed and released are discussed. Concepts such as protection rings, the differences in memory types (e.g. RAM, ROM and EPROM), cache memory, memory mapping, segmenting memory to isolate concurrent processes and storage types (primary, secondary, real and virtual) are covered. The security challenges posed by different operating states and by the sharing of resources using multithreading, multitasking and multiprocessing methods are presented. Other concepts, including deadlock state, invoking virtual machines, the execution domain and the difference between a process and a thread, are also explored. Whereas the foregoing focuses on the physical and logical machine, this section also explores how confidentiality, integrity and availability controls can be applied to the machine and which components deserve the most attention. The CISSP candidate gains a clear understanding of the tradeoffs among levels of trust, assurance and performance. Security mechanisms placed at the hardware, kernel, operating system, services or program layers are explored, along with the security of open (distributed) and closed (proprietary) systems. This section also covers the concept of the Trusted Computing Base -- the subset of system components that make up the totality of protective mechanisms. The origins of the TCB are presented as they appear in the Orange Book. Concepts such as the security perimeter, the reference monitor and its requirements, the security kernel, object domains (i.e., privileged versus non-privileged), process/resource isolation, trust ratings, security layering and hiding, object and subject classifications, and the concept of least privilege are covered. These concepts are presented as a means by which security structures can be understood and, therefore, responsibly controlled.

Security models and modes

This section explores different types of security models and the attributes and capabilities that distinguish them. The Basic Security Theorem -- if a system initializes in a secure state and all state transitions are secure, then every subsequent state will be secure no matter what inputs occur -- is covered. The Bell-LaPadula model, the Biba model, the Clark-Wilson model, the Information Flow model and the Non-Interference model -- each of which takes a different approach to managing user privileges with regard to object access -- are also covered. Security modes describe the security conditions under which a system functions. Systems can support one or more security modes, thus servicing one or more user security classification groups. This section explores four modes and also introduces the concept of trust assurance. The level of trust is based on the integrity of the Trusted Computing Base. The concepts of trust and assurance are contrasted, and the detrimental effects of complexity on assurance are also noted.
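As a concrete companion to the model discussion, here is a minimal Python sketch of the two Bell-LaPadula confidentiality rules -- an editor's illustration, not part of the original article; the level names are hypothetical labels:

# "No read up" (simple security property) and "no write down" (*-property).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple security property: a subject may read only at or below its level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-property: a subject may write only at or above its level,
    # so information can never flow downward.
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "confidential")         # read down: allowed
assert not can_read("confidential", "secret")     # read up: denied
assert not can_write("top_secret", "secret")      # write down: denied

Note how the write rule is the one that blocks the covert storage channel scenario from the class transcript: a top secret process writing to a secret-level resource is exactly a "write down."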


System evaluation methods

The Common Criteria global evaluation standard has its origins in independent efforts, one based on U.S. standards and the other representing pan-European standards. The Trusted Computer System Evaluation Criteria (TCSEC), also referred to as the U.S. Orange Book, describes the specific criteria for several evaluation areas (security policy, identification, labels, documentation, accountability, life cycle assurance and continuous protection) and the formal process of evaluation executed by the National Computer Security Center (NCSC), which yields an evaluated product. The European community instead launched the Information Technology Security Evaluation Criteria (ITSEC). ITSEC looks primarily at functionality and assurance as two broad category areas with subheadings. The key difference between the U.S. and European approaches is their rating schemes: ITSEC applies separate ratings for security functionality and for assurance, whereas TCSEC uses a single rating. The confusing relationship between these two rating schemes is compared and explored in depth. As security exceeds the bounds of individual computer systems, other books in the U.S. Rainbow Series complement the Orange Book. This section covers the Red Book, which addresses security evaluation topics for networks and network components. The Red Book carries its own four-level rating system and addresses topics such as communication integrity (i.e., authentication, message integrity and non-repudiation); denial-of-service prevention (i.e., continuity of operations and network management); and compromise protection (i.e., data confidentiality, traffic flow confidentiality and selective routing). The Common Criteria, established in the 1990s, is the global compromise standard that superseded both TCSEC and ITSEC. It introduces the concept of protection profiles, which outline specific real-world needs in the industry. Students will need to understand the different components of the Common Criteria, the evaluation process and the assurance levels. Security evaluation yields proof (or lack thereof) of security operational readiness. Confusing terminology, such as the difference between certification (expected versus achieved readiness level) and accreditation (authorization to operate), is contrasted.

Security system threats

This section covers some security threats specific to security models and architecture. Among the threats explored are covert channels, developer backdoors, timing attacks that exploit race conditions at boot time and buffer overflows. Countermeasures are discussed for each.
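For quick reference while studying the Common Criteria, the sketch below encodes the seven Evaluation Assurance Levels with their commonly cited one-line summaries -- an editor's study aid, not part of the original article:

# Common Criteria EALs, as usually summarized in exam study material.
EALS = {
    1: "functionally tested",
    2: "structurally tested",
    3: "methodically tested and checked",
    4: "methodically designed, tested and reviewed",
    5: "semiformally designed and tested",
    6: "semiformally verified design and tested",
    7: "formally verified design and tested",
}

def meets_requirement(product_eal: int, required_eal: int) -> bool:
    # A higher EAL means more rigorous evaluation -- not automatically
    # "more secure" functionality.
    return product_eal >= required_eal

assert meets_requirement(4, 2)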

Domain 5, Telecommunications and Networking


Host: Welcome to SearchSecurity CISSP Essentials: Mastering the Common Body of Knowledge. This is the fifth in a series of ten classes explaining the fundamental concepts, technologies and practices of information systems security, corresponding to the CISSP's Common Body of Knowledge.


In our last class we discussed Security Architecture and Models. Today's class will examine topics under the fifth domain of the CBK: Telecommunications and Networking. This class focuses on the glue of network security: how networks work, how data is transmitted from one device to another, how protocols transmit information and how applications understand, interpret and translate data. Shon Harris is a CISSP, MCSE and president of Logical Security, a firm specializing in security education and training. Logical Security provides training for corporations, individuals, government agencies and many organizations. You can visit Logical Security at www.logicalsecurity.com. Shon is also a security consultant, a former engineer in the Air Force Information Warfare Unit and an established author. She has authored two bestselling CISSP books, including CISSP All-in-One Exam Guide, and was a contributing author to the book Hackers Challenge. Shon is currently finishing her newest book, Gray Hat Hacking: The Ethical Hacker's Handbook. Thank you for joining us today, Shon. Shon: Thank you for having me. Host: Before we get started I'd like to point out several resources that supplement today's presentation. On your screen, the first link points to the library of our CISSP Essentials classes, so you can attend previous classes and register to attend future classes as they become available. The second link on your screen allows you to test what you've learned with a helpful practice quiz on today's material. And finally, you'll find a link to the Class 5 Spotlight, which features more detailed information on this domain. Now we're ready to get started. All yours, Shon. Shon: Thank you. Thank you for joining us today. We'll be looking at the telecommunications and networking domain. This is the largest domain of the CBK, and whether it feels difficult depends on your level of comfort with the material. I find that a lot of people in security come from a networking background, so they already understand much of what's covered in this domain. That doesn't mean the domain is simplistic; it's usually overwhelming even if you come from the networking world, simply because of the amount of information covered. We go through several different protocols. We look at the different types of cabling and their security issues; LAN, MAN and WAN technologies; network devices and different services; how telecommunication services and switching take place; fault tolerance; and wireless technologies, along with the OSI model. I feel that roughly the first fourth of this domain is really networks 101 information. Just like all the other domains, this one goes an inch deep but a mile wide. You do need to understand the differences among UTP, STP, fiber and coax cabling: what the differences are, what the security issues are, which ones are more affected by crosstalk and attenuation, and which provides the most security. We also look at LAN, MAN and WAN technologies. This isn't just about how large a geographical area the data transmission covers. What you're really looking at when you compare LAN, MAN and WAN technologies are the protocols that work at the Data Link layer.

It's only the Data Link layer that understands the type of environment you're moving data over. If data is coming down your protocol stack, it's at the Data Link layer that it gets properly formatted for Ethernet, for wireless or for Frame Relay. So when you're talking about LAN, MAN and WAN technologies -- which is really what this domain covers -- you're looking mainly at the Data Link layer and the protocols that work there. After you go through the types of cabling, you need to understand what a DMZ is, which is a buffer zone between trusted and untrusted environments, and you need to know what bastion hosts are. Bastion hosts are just locked-down systems. Any computer in a DMZ needs to be a bastion host, because those are the first line of devices that will be attacked. Then you get into LAN topologies, and when we talk about topology we're talking about the physical layer: how devices are physically connected within an environment. So for the bus, ring, star and mesh topologies, you need to know what they are, but also the downfalls of each one, because each has specific downfalls you need to be aware of. Our networks today aren't so fragile that they're only at the mercy of what's going on at the physical layer; we have Data Link layer technologies with a lot of intelligence that try to ensure the stability of the environment even when we have issues at the physical layer. At the Data Link layer we're talking about LAN media access control technologies. These are the technologies that set the rules for how devices communicate over a shared medium. Ethernet is a contention-based technology, meaning all of the nodes are competing for one shared medium, which is the cable. In wireless there's also contention, because all of the wireless devices are competing for a specific sector of frequencies. So we have to have a technology that determines how the devices communicate with the medium and with each other, and that tries to ensure collisions don't take place. Collisions are a big issue within LANs: if you and I put data on the line at the same time, there's a collision, and that negatively affects network performance. In LAN environments you need to understand Token Ring -- even though we're not using much Token Ring anymore -- Ethernet and wireless LANs, and you need to know the different IEEE standards: 802.5 is Token Ring, 802.3 is Ethernet and 802.11 is wireless LAN. Also realize that the Data Link layer has two sublayers you need to know about: the Logical Link Control (LLC) layer and the MAC layer. Now, wireless is becoming more and more important and more widely used within the industry, so the CISSP exam is covering wireless more and more. You will need to have a good handle on wireless technologies and their components, as well as the security issues and a lot of the common attacks.
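To build intuition for how contention-based media access resolves collisions, here is a toy Python model of Ethernet-style binary exponential backoff -- an editor's illustration only; real CSMA/CD operates in hardware on slot times:

import random

def backoff_slots(attempt: int) -> int:
    # After the n-th collision, pick a wait from 0 .. 2^n - 1 slot times.
    # (Real Ethernet caps the exponent at 10 and gives up after 16 tries.)
    exponent = min(attempt, 10)
    return random.randint(0, 2 ** exponent - 1)

# Two stations that just collided pick randomized waits; as the range
# doubles with each collision, one of them soon wins the medium.
for attempt in range(1, 4):
    a, b = backoff_slots(attempt), backoff_slots(attempt)
    print(f"attempt {attempt}: station A waits {a} slots, station B waits {b}")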


We'll quickly look at some of the IEEE standards for wireless. You also need to understand the different types of spread spectrum technologies. Spread spectrum is just a way of getting data onto radio frequency signals. You'll need to know the differences between frequency hopping and direct sequence, and which 802.11 standards actually use each. There are different ways that wireless devices and access points authenticate, and you'll need to know whether it's shared key authentication (SKA) or open system authentication, which means you're not actually using WEP for authentication. So let's go ahead and look at some of these items. The one thing I don't cover in this presentation is the WAP protocol stack. WAP is used in wireless devices that do not have the resources for a full TCP/IP stack. So WAP is a protocol stack, and the piece you need to be most concerned about is WTLS. WTLS is similar to the SSL or TLS we use in a TCP/IP stack, but there's a security concern known as the "gap in the WAP," where translation between WTLS and SSL actually has to take place. I'm not going to cover that here, but these are some of the wireless issues you will need to be aware of. Now, wireless has gone through an amazing number of generations in a very short period of time. 802.11 first came out as an IEEE standard working in the 2.4GHz range, and so do 802.11b and 802.11g. The 2.4GHz band is referred to as the dirty spectrum, just because so many things work there: it's a free, unregulated spectrum, which is why so many devices use it, and it can cause a lot of interference depending on what's in the environment around your wireless LAN. So we started off with 802.11, went to 802.11b, and then 802.11a changed the spread spectrum technology; it works in the 5GHz range and has a higher bandwidth, or data throughput. You'll need to know the differences between these standards and where they're used. Now, 802.11i is a standard that right now is in the process of being accepted. You've probably heard about a lot of the security issues that surround wireless LANs, and they have to do with WEP, Wired Equivalent Privacy. This is the protocol used for the encryption process on wireless LANs and for WEP authentication. WEP is so flawed that it really doesn't provide any protection. There are so many issues with WEP that even if you do enable it, if you depend on WEP alone, the encryption can be cracked with free tools downloadable from the Internet. That doesn't mean every wireless LAN set up today is totally vulnerable, but the 802.11 standard itself is so flawed that vendors have had to come up with their own solutions. They've come up with their own Band-Aids and their own approaches to security, and that's a problem: everybody is doing their own thing. It's not a standard, and it also creates interoperability issues. A lot has been written about WEP; I've written very technical articles and given a lot of talks about its problems. We have a fix: a new standard coming out, 802.11i. Part of it is backwards compatible and will provide high-level protection for the 802.11 wireless LANs currently out there, and part of the standard starts fresh.
If you're just now looking to implement a wireless LAN, you should use that other portion of the standard, which uses a totally different algorithm altogether.
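One of WEP's core flaws is keystream reuse caused by its small IV space. The short Python sketch below -- an editor's illustration with a stand-in random keystream, not real RC4 or code from the lesson -- shows why reusing a keystream is fatal for any stream cipher:

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)            # same key + same IV -> same keystream
p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"
c1, c2 = xor(p1, keystream), xor(p2, keystream)

# An eavesdropper who captures both ciphertexts learns p1 XOR p2 directly,
# and any known plaintext then reveals the other message outright.
assert xor(c1, c2) == xor(p1, p2)
print(xor(xor(c1, c2), p1))           # recovers p2 without ever knowing the key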

Because of the problems with WEP, there's an amazing number of attacks that have been very successful against wireless LANs. You can usually eavesdrop on the traffic, of course, especially if it's not encrypted, because some corporations seem not to understand that radio frequencies do not stop at your windows, your walls or your doors. These radio frequencies travel a long way down the road, depending on the signal strength of your access point. So wardriving has become very common: people drive around with a laptop and an antenna, eavesdrop on signals and get into the environment. That's really the goal of carrying out attacks on wireless LANs -- to gain access to the wired environment, because most environments have a wired portion and a smaller wireless portion. They work in infrastructure mode. There's a difference between infrastructure mode and ad hoc mode. Ad hoc wireless means several wireless devices communicating directly with each other, not through an access point to a wired portion of the environment. The more common architecture is infrastructure mode, where wireless devices have to authenticate and go through an access point to communicate with the wired environment. So, as I said, WEP can be easily cracked, and there are ways of manipulating the data without the receiver knowing it. Rogue APs -- rogue access points -- can also be set up. When your wireless laptop is booting up, your wireless card sends out probes trying to find the closest access point. If there are two access points near you, signal strength decides which access point your card authenticates to. So I can just put up an access point with a very strong signal; your wireless card will think it's the closest one, you'll send over your credential information, and I've captured it. That's how rogue access point attacks happen. Now, TCP/IP is a suite of protocols; it's not just TCP and IP, there's a whole suite. You need to know the services these different protocols provide and their security issues: reliable versus unreliable transport of data, the security issues of using Telnet, FTP or TFTP, and the levels of authentication they provide. SNMP is a very common way for hackers to gain a lot of information about network devices, so you need to understand how SNMP agents and managers work, and what community strings and traps are. You also need to know which layers these different protocols work at. ARP, the Address Resolution Protocol, works at the Data Link layer. As your data goes through the data encapsulation process -- moving down your protocol stack, with the protocols at the different levels adding their own instructions in the headers or trailers of the packet -- it reaches the network layer and gets an IP address. But your Data Link technologies do not understand IP addresses; they need a MAC address, which is a 48-bit hardware address. So ARP is responsible for finding the MAC address that corresponds to the IP address the computer is trying to communicate with. In this domain we go over different types of ARP attacks.
ARP attacks can be carried out so that a victim ends up with an improper mapping between an IP address and a MAC address. Say I'm trying to send information to your IP address, but I've been hit by an ARP attack and my ARP table has been poisoned: even though I put the right IP address on the packet, at the Data Link layer the attacker's MAC address gets put on the packet, so it goes to the attacker.
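The poisoning works because ARP caches accept unsolicited, unauthenticated replies. Here is a toy Python model of that failure -- hypothetical addresses, an editor's illustration, not an attack tool:

# The victim's ARP cache starts with the gateway's real MAC address.
arp_cache = {"10.0.0.1": "aa:aa:aa:aa:aa:aa"}

def handle_arp_reply(cache: dict, ip: str, mac: str) -> None:
    # ARP is stateless: no check that we ever asked for this mapping,
    # and no proof that the sender actually owns the IP address.
    cache[ip] = mac

# Attacker sends a spoofed reply remapping the gateway's IP to its own MAC.
handle_arp_reply(arp_cache, "10.0.0.1", "ee:ee:ee:00:00:01")

frame_destination = arp_cache["10.0.0.1"]
print(frame_destination)   # attacker's MAC -- frames meant for the gateway
                           # are now delivered to the attacker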

You'll need to understand how ARP works, along with Reverse ARP, BOOTP, DHCP and those types of protocols, as well as ICMP. ICMP is a protocol developed just to move status and error messages around. There are a certain number of ICMP message types, and ICMP was not developed to move user data around -- just status information. For example, when a path between routers is overloaded with traffic, one router can send an ICMP message indicating, "Look, you need to use another path, because this one's too busy right now." That could also be turned into a denial-of-service attack: you send ICMP messages to routers indicating that certain routes are down, so nobody sends data over a certain link. Another common attack using ICMP is the Loki attack. Loki is a tool that lets you put a little bit of data in an ICMP packet. Again, ICMP wasn't developed to move user data back and forth, but if I can put data in an ICMP packet, then I can fool your firewall, because most firewalls allow ICMP packets through and don't realize they're actually carrying data. This domain also goes over the different types of network devices. You'll need to know the basic functionality of these devices and what OSI layer they work at. In this presentation I don't cover the OSI model, but you do need to know it. The OSI model was developed to explain the different levels of functionality that take place in a network stack, and it provides standards for those levels. There are seven layers in the OSI model, and it provides a standard, modular approach to developing a network stack, so vendors can create their own protocols that work at one layer. You, as a consumer, can swap those protocols out, and they'll still communicate with other vendors' protocols at other layers, because they follow the standard and its standardized interfaces. You need to know all about the OSI model: the different layers, the protocols that work at each layer and their functionality. As for the network devices, you'll need to know their basic functionality and I/O. A repeater works at the physical layer; it amplifies the signal and does no forwarding or routing. A bridge works mainly at the Data Link layer, looking at MAC addresses for forwarding decisions. Routers work mainly at the network layer, and they can be packet-filtering firewalls, which we'll look at. Switches work mainly at the Data Link layer, although we have much more sophisticated switches today that can work at different layers. Switches are really the device of choice in today's environments because of how fast they are compared to a bridge: the actual processing takes place at the silicon level of the switch, which makes it much faster. Switches also have capabilities that are very beneficial to network administrators, such as setting up VLANs -- logical containers for workstations, users and resources -- rather than being tied to physical location, which is how traditional non-VLAN-aware environments work.


Today we have five generations of firewalls, and you need to know the differences between the generations and the good and the bad about each one. Packet filtering is the first generation: basically a router with ACLs on it. It does not provide a high level of protection, but it doesn't take a lot of processing, so a lot of times those will be our border routers. We need to know about two types of proxy firewalls: circuit-level proxies and application-level proxies. A proxy means that the actual connection between the sender and receiver is broken, and direct communication cannot take place. A circuit-level proxy makes its access decisions based on header information, while an application-level proxy makes its decisions based on the data payload of the packet along with the header information. So you need to understand the differences between them, and also the good and the bad about each. Something that's stateful understands a protocol stack and its communication. For example, a stateful firewall understands that the first packet of a TCP connection is a SYN packet, the second packet is a SYN-ACK, the third is an ACK, and that a FIN packet closes off the TCP connection. Something that's stateful can keep track of that and make access decisions based on the state of the communication. So if I try to communicate with you through a stateful firewall and I open by sending you a SYN-ACK, the stateful firewall knows I'm trying to do something that's not safe. I'm trying to fool the firewall into thinking, "Don't look over here, everything's fine, we've already been communicating -- see, I'm sending the second step of the handshake, a SYN-ACK." A packet filter and some proxies may be fooled by something like that. A kernel proxy is a newer generation in which all of the processing takes place in the kernel, and the kernel itself creates individual virtual network stacks. So if communication is coming in over the HTTP protocol, the kernel proxy creates an HTTP kernel stack to properly inspect each portion of that packet before allowing it through. Now, a dynamic packet filter is different from a static packet filter. Static packet filtering is the first generation: the ports are either open or they're closed. A dynamic packet filter allows ports to be dynamically opened and closed. If you understand how ports work, ports 0 through 1023 are the well-known ports, and those are used on the server side. When we say HTTP maps to port 80, that's on the server side, not the client side. So if I'm using a web browser to communicate with a web server, my client is actually going to choose a high, random port. That's where dynamic packet filters help: the filter can open that high port I've chosen just for the period of time I'm communicating with the web server, and once I'm done, that port is closed -- which is different from a static packet filter.
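Here is a minimal Python sketch of the stateful-inspection idea just described -- an unsolicited SYN-ACK is dropped because no matching SYN was ever tracked. This is an editor's illustration; real firewalls track far more state, such as sequence numbers and timeouts:

state_table = set()   # connections we have seen legitimately opened

def inspect(src: str, dst: str, flags: str) -> bool:
    conn = (src, dst)
    if flags == "SYN":                        # new connection request
        state_table.add(conn)
        return True
    if flags == "SYN-ACK":                    # must answer a tracked SYN
        return (dst, src) in state_table
    if flags in ("ACK", "FIN"):               # must belong to a known flow
        return conn in state_table or (dst, src) in state_table
    return False

assert inspect("client", "server", "SYN")
assert inspect("server", "client", "SYN-ACK")        # matches the SYN above
assert not inspect("attacker", "victim", "SYN-ACK")  # unsolicited: dropped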

Now, not only do we need to know the different types of firewalls, we need to know where to place these firewalls in the architecture of our environment. There are industry-standard firewall architectures. There's the screened host, which just needs a screening device and then a firewall -- usually a screening router followed by a firewall. The dual-homed firewall architecture means you have a firewall with two or more interfaces, and usually more than one segment hanging off that firewall. Then we have the screened subnet, where two firewalls create a full DMZ. The screened subnet provides more protection, because there are two layers of defense an attacker has to get through before reaching your internal network. So you need to understand the different architecture models along with the types of firewalls. This domain also covers different encapsulation protocols, tunneling protocols, dial-up protocols and authentication protocols: PAP and CHAP versus EAP. EAP is a newer authentication protocol. It was developed to work specifically over PPP connections, but we've integrated EAP into other places as well. EAP is not a protocol that specifies exactly how authentication takes place; EAP is really a framework into which you can plug different types of authentication mechanisms. In traditional authentication you use PAP, CHAP or MS-CHAP, but we need more flexibility and higher security. Maybe we have remote users who need to authenticate into our corporate environment, and we don't want to maintain a secondary database of user credentials; we want to use the one user database we're already using in our local environment. We're already using Kerberos -- so why can't my remote users just authenticate through Kerberos? That's the kind of situation where you would implement EAP: it just provides more flexibility. Now, different types of tunneling protocols. Some people think a tunneling protocol automatically provides protection, which is not true. A tunneling protocol just means that a packet is encapsulated and transported from one environment to another. For example, suppose two locations use IPX/SPX, a Novell proprietary protocol, and they need to communicate over the Internet. They can't directly, because IPX/SPX is not understood on the Internet, which runs TCP/IP. But we can tunnel from one location to the other if we have a tunneling protocol that wraps those packets up, gets them over to the destination, and unwraps them so they work at the destination location. That's what tunneling deals with, and there are different types of tunneling protocols. In most situations, though, we want to tunnel through networks and also provide a level of protection -- we want a VPN. So a lot of times we hear about tunneling protocols in the context of setting up VPNs. Our default protocol used to be PPTP; whenever we set up VPNs, we were using PPTP. We've evolved and moved mainly to IPsec, which works at the network layer. IPsec is a suite of protocols that provides a range of security services: data origin authentication, integrity and confidentiality.


Now, L2TP is a tunneling protocol that combines Cisco's L2F, Layer 2 Forwarding, with PPTP. You would use L2TP if you needed to extend your VPN across a WAN link. PPTP and IPsec can only run over an IP environment, but L2TP can run over an IP environment and also over links such as X.25, Frame Relay and ATM. So when you need your VPN to extend over a WAN link, that's when you would use L2TP. But L2TP is just a tunneling protocol; it doesn't provide any protection itself. You'd have to use it in combination with IPsec. Now, one of the MAN technologies we need to understand is SONET. SONET does not work at the Data Link layer; SONET works at the physical layer, and it's really just a signaling standard for how the data is actually moved over fiber optic rings. You can think of SONET as a highway that allows different cars, buses and motorcycles to move over it, because any type of data can move over SONET; the Data Link technologies are the cars. You can move Frame Relay data, ATM data, any type of WAN technology. If you're familiar with OC rings -- OC, a dash and a number -- those are SONET rings, optical carrier rings, and the number after "OC-" indicates the bandwidth, the data throughput, that the carrier can handle. These are all WAN technologies you need to know for the exam: the characteristics of when one would be used over another, the downfalls and the pros. For ISDN, you need to know BRI versus PRI; ISDN is just a set of services, and it emulates an actual telephone call. ISDN and DSL are technologies that allow digital data to move over the last mile. The last mile is the only place within a telecommunications network that is still analog; the rest of the network is now digital. If we want higher speeds, and we want to move data in a digital format even over that last mile, then ISDN or DSL is the choice. You need to know the differences among Frame Relay, X.25 and ATM. Frame Relay and X.25 are packet-switching technologies, whereas ATM is a cell-switching technology. And there's a difference between circuit switching and packet switching. Circuit switching is how our telecommunications networks handle phone calls. If I call you, our voice data goes back and forth through switches. When I dial my phone, a protocol called Signaling System 7 (SS7) goes and configures the switches between you and me. If you're there, you pick up the phone; all of our data goes back and forth through the same path until we hang up, and SS7 tears down that virtual path. Packet switching is different. When I send data to you over the Internet, or over Frame Relay or X.25, those are packet-switching technologies, and the data can take a bunch of different routes depending on how busy the different links are. So data can go from one source to one destination but arrive out of sequence, and the destination has to put it all back together again. That's not really an issue when we're moving file data, but it's more of an issue when we're moving voice data.
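A tiny Python sketch of that reassembly step -- an editor's illustration, not code from the lesson: datagrams arrive in arbitrary order, and the destination restores the original sequence:

import random

packets = [(seq, f"chunk-{seq}") for seq in range(6)]
random.shuffle(packets)                 # the network reorders the datagrams

reassembled = [payload for seq, payload in sorted(packets)]
print(reassembled)    # original order restored at the destination

That sort is cheap for file transfer, but for live voice a late packet is as good as a lost one, which is why packet-switched voice needs jitter buffers and quality-of-service handling.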


Voice over IP is becoming extremely popular in the industry, and for good reason. If we have voice over IP within our corporation -- within one building, let's say -- a big benefit is that we now only have to maintain one network, where before we had to maintain a phone network and a data network. They've come together, with the voice data moving in packets just like our other data does. People are also using voice over IP for long-distance phone calls, and right now it's very cheap because it's not currently regulated. The FCC has not regulated voice over IP, but as soon as it does, the prices will definitely go up. So again, it's just moving voice data from the traditional circuit-switched environment to the packet-switched environment. I've only touched on a few of the topics covered in this domain, because it is so large. It starts off with the basic elements of telecommunications and networking: the cabling and the different types of signaling. You need to know synchronous versus asynchronous, digital versus analog, baseband versus broadband, and then the different LAN media access technologies. The OSI model describes the network stack, and you also need to know the TCP/IP model. The TCP/IP model, sometimes called the DoD model, is an older model; it has only four layers where the OSI model has seven, and it describes only TCP/IP, whereas the OSI model is more of an open format for describing any protocol stack. Then you move into the network devices, the types of firewalls and the services you need to know about. You need to know how NAT, network address translation, works and the different types of NAT, and the differences between IP version 4 and IP version 6. Then you move into the MAN technologies -- SONET, FDDI -- and a lot of WAN technologies; the majority of this domain covers the WAN technologies, and we've barely touched on a few of them. And of course there are the security issues involved with a lot of these components, along with the attacks used against them, which are mainly different types of denial-of-service and distributed denial-of-service attacks. So this is a large domain, but if you understand these components, it gives you a strong understanding of networks from the ground up: how the different types of attacks take place, and the different devices and protection mechanisms we use today to protect ourselves. Host: Thank you, Shon. This concludes Class 5 of CISSP Essentials: Mastering the Common Body of Knowledge - Telecommunications and Networking. Be sure to visit www.searchsecurity.com/CISSPessentials for additional class materials based on today's lesson and to register for our next class on Applications and System Development. Thanks again to our sponsor and thank you for joining us. Have a great rest of the day.

Spotlight article: Domain 6, Application and System Development


Applications and systems are the technologies closest to the data we are trying to protect. This domain details how applications and systems are structured and what security mechanisms and strategies are commonly used to secure data during access, processing and storage; it also presents some of the common threats and countermeasures.


The following topics are covered:

System development process: The models, methods, life cycle phases and management of the development process.

Database systems: Models, management systems, query languages, components, data warehousing and mining, schema and security measures.

Application development methodology: Software architecture, programming languages and concepts, change control methods, improvement models, data modeling and structures, data interface and exchange methods, artificial neural networks and expert systems.

Security threats and countermeasures: Common threats to applications and systems and how expert systems and artificial neural networks can be applied to mitigate threats.

System development process

Determining the appropriate level of security for systems is a difficult judgment call. The decision depends on many factors, including the trust level of the operating environment, the security levels of the systems it will connect to, who will be using the system, the sensitivity of the data, how critical the functions are to the business, and how costly it will be to apply optimal security measures. Understanding the process and economics of system development is essential to understanding why few systems in production today can be considered sufficiently secure. This section covers how different environments demand different types of security, the importance of addressing failure states, and the difficulty of balancing security and functionality demands to meet business needs. An overview of the history of system building helps demonstrate why yesterday's approaches are no longer adequate in today's super-connected world. The increasing complexity of environments and technology rules out a "one size fits all" approach to security. Decisions for a Web-based business will be different from those made for a company concerned only with securing an intranet. Individuals preparing for the CISSP exam will gain insight into the decision-making process, and into the perils of relying too heavily on environment-based security devices and appliances rather than building the right level of security into a product. Open and distributed environments can combine legacy and newer technology, intranets and business partner extranets, along with a maintained marketing presence on the Internet for e-commerce purposes -- an entirety that presents an almost overwhelming security challenge. Yet strategies are being developed to better protect systems by layering security controls at different technology levels. Being the last bastion of defense, security controls applied at the system and application level should be as rigorous as possible to ensure that damage from an attack is minimized. Most commercial applications have security controls built in, though only recently have vendors begun to turn security on by default, which forces users to make deliberate risk decisions to lower their security protection from the level recommended by the vendor. These approaches may prove annoying to the user at first. However, the increasing worldwide threat level necessitates an increased level of accountability from commercial vendors and an increased level of awareness and responsibility on the part of the user.

The economics of building secure systems is a trade-off between the security and the functionality of systems. Every dollar that goes into protecting a system is a dollar that won't be put toward building a more functional, usable system. However, as hackers, criminals and terrorists become more sophisticated in their methods, we're obligated to seek out new ways to reveal system vulnerabilities that result from uncommon conditions and to trap for them so they won't be available for malicious use. Securely built systems depend on our ability to elevate the visibility and priority of security throughout each phase of the development process. Even as early as project initiation, we can begin formulating the security goal based on business needs, liability risks and investment constraints. Throughout the requirements and design phases, we can systematically uncover hidden functional and architectural flaws that could compromise security. We can apply inspection methods and automation during construction and testing to root out coding flaws or failure conditions known to be vectors for security attacks. At every decision point, risk analysis should guide customer decisions about the risk they are willing to accept as a trade-off for lower price, time to market, increased functionality or usability. By using operational checklists for installation and administration, and by applying rigorous change control methods, we can be sure our product will meet both user needs and enterprise security standards now and in the future.

Database technology

Databases hold the data needed to conduct business, guide business strategy and prove business performance history. Database management software is covered, along with an overview of different types of database models -- hierarchical, distributed and relational. Most attention is paid to relational databases: how the schema is represented and used in the data dictionary, how it applies to security, how primary and foreign keys are related, how checkpoints and save points work, and how maintaining the integrity of a data set is essential to ensuring no data falls outside the schema or the security controls built into it. Data warehouses (aggregators of disparate data sets) and data marts (copies of subsets of data warehouses) pose similar challenges, but the effort and cost that go into these systems make the metadata they yield very valuable to businesses, which warrants a correspondingly high level of protection. Strategies for administering data systems for optimal security are also covered. Using security views to enforce security policy, content- and context-driven access control strategies, the challenges presented by aggregation and inference attacks, and the use of diversionary tactics such as cell suppression, noise and perturbation are among the techniques described in detail.
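As a concrete example of the security-view technique mentioned above, here is a small Python/SQLite sketch -- table, column and role details are hypothetical, and SQLite stands in for a full DBMS that would also enforce grants on the view:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INT, clearance TEXT)")
db.executemany("INSERT INTO employees VALUES (?, ?, ?, ?)", [
    ("Alice", "R&D",   120000, "secret"),
    ("Bob",   "Sales",  90000, "none"),
])

# The view suppresses the sensitive salary column and filters rows,
# so a low-privilege role is granted access to the view, never the table.
db.execute("""CREATE VIEW staff_directory AS
              SELECT name, dept FROM employees WHERE clearance = 'none'""")

print(db.execute("SELECT * FROM staff_directory").fetchall())  # [('Bob', 'Sales')]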
Application development methodology

After a brief overview of programming development, discussion centers on object-oriented programming, its encapsulation of code chunks as class objects, and how those objects can be altered and reused. In creating application designs, we model the use of data by the proposed application, analyzing the data paths it will take through the application. We are concerned about the atomicity of objects -- their cohesion and coupling properties -- as this drives the ease with which we can safely update them. Finally, we concern ourselves with how the data our application uses is imported into and exported from the application. The usefulness of standards and technologies that ensure component communication (COM, DCOM), the seamless exchange of data between disparate systems (ORB, CORBA, ODBC, DDE), and the presentation of or access to data outside the native application (OLE) are covered, as are the automated CASE tools that help manage the engineering process.

There are security issues surrounding the use of each of these, as well as with more recent innovations such as ActiveX controls and Java applets.

Security threats and countermeasures

In this section, exam preparation includes an overview of the most common threats affecting applications and systems, and how they are executed. These include DoS attacks, timing attacks, viruses, worms and Trojan horses, among others. Advanced systems employing artificial intelligence, such as expert systems and artificial neural networks, can aid in revealing connections between disparate pieces of information and in recognizing anomalous patterns in network traffic or application behaviors that might signal an attack in progress.

Spotlight article: Domain 10, Operations Security


The operations department has responsibilities pertaining to everything that takes place to keep a network, computer systems, applications and the environment up and running in a secure and protected manner. After the network is set up, operations kicks in, which includes the continual day-to-day maintenance of the environment. These activities are routine in nature and enable the environment, systems and applications to continue to run correctly and securely. Operations security is the process of understanding these operations from a competitor's, enemy's or hacker's viewpoint and then developing and applying countermeasures to mitigate the identified threats. A company cannot provide any level of protection for itself unless it employs the necessary operations security methodologies, technologies and procedures. This domain covers:

Operations personnel
Configuration management
Media access protection
System recovery
Facsimile security
Vulnerability and penetration testing
Attack types

Administration

Network operations and systems managers have a daunting task. Not only must they ensure that their company can access what it needs to run on a daily basis, they must plan for capacity growth to anticipate performance bottlenecks, as well as for service development and organizational testing. They also need to identify cost-effective technology solutions and lobby for budget and resources in political atmospheres that too often relegate them to the status of "plumber."


Organizations are beginning to understand that without a sound infrastructure, their business will not run. Operations within a computing environment can pertain to software, personnel and hardware, but an operations department often focuses on the hardware and software aspects. Management is responsible for employees' behavior and responsibilities. The people within the operations department are responsible for ensuring that systems are protected and that they continue to run in a predictable manner. The operations department usually has the objectives of preventing recurring problems, reducing hardware failures to an acceptable level, and reducing the impact of hardware failures or disruptions. This group should investigate any unusual or unexplained occurrences, unscheduled initial program loads, deviations from standards, or other odd or abnormal conditions that take place on the network. The concept of separation of duties, covered at length in Domain 1, is paramount to protecting companies from administrator misuse. Allocating parts of critical infrastructure tasks to several members of an operations team ensures that no one person has an opportunity for wrongdoing that could go unnoticed. Separation of duties also extends to the managers themselves: no administrator should be responsible for tactical execution on the systems they are responsible for monitoring and assuring. Periodic job rotation is also a good strategy for detecting wrongdoers' activities. Operational management is additionally responsible for setting the levels of security access to different systems, applications and services. In every case, the rule of least privilege should apply whenever possible. Operations management should depend heavily on information from business divisions as to the functional priority of systems, the value of their data, and who has a need to know. Too often the quality of this information is poor -- or it is not forthcoming at all -- forcing the administrator to make a best guess as to the security level that should be applied. Security professionals and administrators are accountable for the proper control of system and resource use. Robust logging creates a solid baseline history of system use and network performance against which unexpected changes can be compared. Logging is also necessary to ensure traceability to the source of system problems or deliberate hacks. Many companies do robust logging but fail to review the logs. Log parser tools are considered essential for limiting the amount of information presented to an administrator during review, making the reviewing task time-efficient. Regular log review can reveal unauthorized access to information, repetitive mistakes requiring further user training, whether security controls are working, and whether access levels are appropriate.
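As a simple illustration of what a log parser contributes to review, here is a short Python sketch that reduces an authentication log to the repeat failures worth an administrator's attention -- the log format and threshold are hypothetical, invented for this example:

from collections import Counter

log_lines = [
    "2024-01-05 09:14:02 LOGIN FAIL user=jsmith src=10.0.0.7",
    "2024-01-05 09:14:09 LOGIN FAIL user=jsmith src=10.0.0.7",
    "2024-01-05 09:14:15 LOGIN OK   user=adavis src=10.0.0.9",
    "2024-01-05 09:14:21 LOGIN FAIL user=jsmith src=10.0.0.7",
]

# Count failed logins per user; successes are filtered out entirely.
failures = Counter(
    line.split("user=")[1].split()[0]
    for line in log_lines if "LOGIN FAIL" in line
)

for user, count in failures.most_common():
    if count >= 3:     # threshold keeps the review list short
        print(f"review: {user} had {count} failed logins")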
Operational activities

Operations personnel do most of the hands-on work of securing the enterprise and ensuring data availability. For instance, they must ensure the proper securing of backup media and the proper disposal of recycled systems and devices. As a set of full backup tapes essentially represents the complete intellectual property of the company, losing control over this media can be very serious. Residual data left on discarded systems and media also poses some degree of risk, and operations personnel should understand whether degaussing, zeroization or physical destruction is necessary.


Operations personnel should conduct an operational assurance assessment to determine whether the architecture of a product, and its embedded features and functions, will solve the business problem without compromising the infrastructure and its security protections. They may also conduct a life cycle assurance assessment -- inspecting the specifications and documentation to determine that the product was built well and conforms to enterprise quality and security standards. In the rush to solve a business problem, companies sometimes breeze past the product evaluation stage or ignore operational recommendations and concerns, only to discover the solution does not integrate well with existing technology or provides inadequate security. This may not seem very important, until we remember that security is only as strong as its weakest link: a solution vulnerable to security attacks weakens every other system it is interfaced with. Purchasing outside solutions, or deploying internal ones, without due diligence to security is a common and serious problem, and operations personnel must be diligent and precise in their evaluations to impress upon management the potentially serious consequences of a poor investment decision. The best way to think about a network is as one big distributed system: a change in one part of the system can have unexpected results in another. You would want a way to back out of a change to a known good state should unexpected problems occur. Therefore, a managed process must be in place to control changes to environments, and every change should be meticulously recorded so that changes can be rolled back easily if problems occur. While a functional change might not seem to have any residual effects, security levels can be inadvertently compromised. Operations personnel are responsible for the proper testing of changes, to ensure not only that the functional change works, but also that security levels within the affected system, as well as those of interfacing systems, have not been degraded as a result. Operations personnel must also plan and test for system recovery, as it is during failure modes that security controls built into systems can be rendered ineffective. Operations personnel should additionally engage in application monitoring. Automated tools can help discover anomalies in the use of systems and network resources that can indicate wrongdoing, or a problem that could result in a security vulnerability. There are many network appliances and utilities available that help simplify this task. A prevalent vector for malware entry is e-mail, since e-mail is how the corporation communicates both inside and outside the corporate perimeter. Security professionals must understand how e-mail systems and common protocols such as SMTP, POP and IMAP work. In short, every device connected to the network -- whether inward or outward facing, whether used in business or simply in system administration -- poses some level of security threat to the organization. Each must be evaluated and controlled.

Hacking and countermeasures

The volume of network attacks is growing every day, in part because of the proliferation of free tools that can be used by anyone with even a little knowledge. Most good operations personnel have shored up their perimeters with firewalls and DMZs, and can recognize most types of common attacks as they are happening. "Hardening" systems reduces the non-essential functions and ports that could be used as attack channels.
Some administrators will apply TCP wrappers or network sniffers to monitor traffic in and out of the perimeter, while others will apply sophisticated vulnerability scanning tools that map networks and test devices and systems for known vulnerabilities.

All of these are good approaches for detecting possible attacks. Sadly, some security failures cannot be recognized in real time by any of these measures. There are several ways information can become available to others for whom it was not intended, which can bring about unfavorable results. Sometimes this is done intentionally, but it can also happen unintentionally. Information can be disclosed unintentionally when one falls prey to attacks that specialize in social engineering, covert channels, malicious code and electrical airwave sniffing. Keystroke monitoring is a process whereby computer system administrators view or record both the keystrokes entered by a computer user and the computer's responses during a user-to-computer session. Examples of keystroke monitoring include viewing characters as they are typed by users, reading users' electronic mail and viewing other recorded information typed by users. Some forms of routine system maintenance record user keystrokes; this could constitute keystroke monitoring if the keystrokes are preserved along with the user identification, such that an administrator can determine the keystrokes entered by specific users. These are just a few of the things operations security personnel must understand, implement and keep track of to ensure that the network, and the components within it, are properly secured. Although information security encompasses a lot more than technology, technology is still a huge component that must be properly controlled.
