The following sections list Common Criteria and technology terms used in this document.
1.2.1 Common Criteria Terms
Assurance
Grounds for confidence that a TOE meets the SFRs [CC].
Base Protection Profile (Base-PP)
Protection Profile used as a basis to build a PP-Configuration.
Collaborative Protection Profile (cPP)
A Protection Profile developed by
international technical communities and approved by multiple schemes.
Common Criteria (CC)
Common Criteria for Information Technology Security Evaluation (International Standard ISO/IEC 15408).
Common Criteria Testing Laboratory
Within the context of the Common Criteria Evaluation and Validation Scheme (CCEVS), an IT security evaluation facility
accredited by the National Voluntary Laboratory Accreditation Program (NVLAP) and approved by the NIAP Validation Body to conduct Common Criteria-based evaluations.
Common Evaluation Methodology (CEM)
Common Evaluation Methodology for Information Technology Security Evaluation.
Extended Package (EP)
A deprecated document form for collecting SFRs that implement a particular protocol, technology,
or functionality. See Functional Packages.
Functional Package (FP)
A document that collects SFRs for a particular protocol, technology,
or functionality.
Operational Environment (OE)
Hardware and software that are outside the TOE boundary that support the TOE functionality and security policy.
Protection Profile (PP)
An implementation-independent set of security requirements for a category of products.
PP-Configuration
A comprehensive set of security requirements for a product type that consists of at least one Base-PP and at least one PP-Module.
Protection Profile Module (PP-Module)
An implementation-independent statement of security needs for a TOE type complementary to one or more Base-PPs.
Security Assurance Requirement (SAR)
A requirement to assure the security of the TOE.
Security Functional Requirement (SFR)
A requirement for security enforcement by the TOE.
Security Target (ST)
A set of implementation-dependent security requirements for a specific product.
Target of Evaluation (TOE)
The product under evaluation.
TOE Security Functionality (TSF)
The security functionality of the product under evaluation.
TOE Summary Specification (TSS)
A description of how a TOE satisfies the SFRs in an ST.
1.2.2 Technical Terms
Address Space Layout Randomization (ASLR)
An anti-exploitation feature which loads memory mappings into unpredictable
locations. ASLR makes it more difficult for an attacker to redirect control to code
that they have introduced into the address space of a process.
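As a rough illustration (not part of the PP, and with no guarantee about a particular platform's configuration), the effect of ASLR can be observed by comparing heap addresses across independent runs of the same program:

```python
import subprocess
import sys

# Each child process prints the address of a freshly allocated object.
# With ASLR enabled, memory mappings land at different addresses on
# every run; with ASLR disabled, the addresses tend to repeat.
snippet = "print(hex(id(object())))"
addrs = [
    subprocess.run([sys.executable, "-c", snippet],
                   capture_output=True, text=True).stdout.strip()
    for _ in range(2)
]
print(addrs)
```

On an ASLR-enabled system the two printed addresses will almost always differ, which is exactly what frustrates an attacker trying to redirect control to a known location.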
Administrator
An administrator is responsible for management activities, including setting policies that are
applied by the enterprise on the operating system.
This administrator could be acting remotely through a management server, from which the system
receives configuration policies.
An administrator can enforce settings on the system which cannot be overridden by non-administrator users.
Application (app)
Software that runs on a platform and performs tasks on behalf of the user
or owner of the platform, as well as its supporting documentation.
Application Programming Interface (API)
A specification of routines, data structures, object classes, and variables
that allows an application to make use of services provided by another software
component, such as a library. APIs are often provided for a set of libraries included
with the platform.
Credential
Data that establishes the identity of a user, e.g. a cryptographic key or
password.
Critical Security Parameters (CSP)
Information that is either user- or system-defined and is used to operate a
cryptographic module in processing encryption functions, including
cryptographic keys and authentication data such as passwords, the disclosure or modification
of which can compromise the security of a cryptographic module or the security of the
information protected by the module.
Data-at-Rest (DAR) Protection
Countermeasures that prevent attackers, even those with physical access,
from extracting data from non-volatile storage.
Common techniques include data encryption and wiping.
Data Execution Prevention (DEP)
An anti-exploitation feature of modern operating systems executing on
modern computer hardware, which enforces a non-execute permission on pages of memory.
DEP prevents pages of memory from containing both data and instructions, which makes
it more difficult for an attacker to introduce and execute code.
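A minimal sketch of the non-execute permission that DEP enforces, using an anonymous memory mapping (Unix-specific `mmap` flags assumed; this is illustrative, not an evaluation requirement):

```python
import mmap

# Allocate a page that is readable and writable but NOT executable,
# which is the default for data pages under DEP/NX. Writing data to
# the page works; attempting to execute from it would trigger a
# hardware fault.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE)  # no PROT_EXEC
buf.write(b"\x90" * 16)  # data bytes, not runnable code
buf.seek(0)
assert buf.read(16) == b"\x90" * 16
print("data page is writable but not executable")
```

Code that legitimately needs executable memory (e.g., a JIT compiler) must explicitly request `PROT_EXEC`, which is what separates instructions from data under DEP.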
Developer
An entity that writes OS software. For the purposes of this document,
vendors and developers are the same.
General Purpose Operating System
A class of OSes designed to support a wide variety of workloads consisting of many concurrent applications or services.
Typical characteristics for OSes in this class include support for third-party applications,
support for multiple users, and security separation between users and their respective resources.
General Purpose Operating Systems also lack the real-time constraint that defines Real Time Operating Systems (RTOS).
RTOSes typically power routers, switches, and embedded devices.
Host-based Firewall
A software-based firewall implementation running on the OS for filtering inbound and
outbound network traffic to and from processes running on the OS.
Operating System (OS)
Software that manages physical and logical resources and provides services
for applications. The terms TOE and OS are interchangeable in this
document.
Personally Identifiable Information (PII)
Any information about an individual maintained by an agency, including, but
not limited to, education, financial transactions, medical history, and criminal or
employment history and information which can be used to distinguish or trace an
individual's identity, such as their name, social security number, date and place of
birth, mother's maiden name, biometric records, etc., including any other personal
information which is linked or linkable to an individual.[OMB]
Sensitive Data
Sensitive data may include all user or enterprise data or may be specific
application data such as PII, emails, messaging, documents, calendar items, and contacts.
Sensitive data must minimally include credentials and keys. Sensitive data shall
be identified in the OS's TSS by the ST author.
User
A user is subject to configuration policies applied
to the operating system by administrators. On some systems under certain
configurations, a normal user can temporarily elevate privileges to that of an administrator.
At that time, such a user should be considered an administrator.
1.5 Platforms with Specific EAs
This PP includes platform-specific EAs for the operating system platforms listed below. For "bare-metal" applications,
applications that run on other OS platforms, and applications that run in software-based execution environments, contact the
Technical Community for guidance.
Android: Mobile operating systems based on Google Android.
Microsoft Windows: Microsoft Windows operating systems.
Apple iOS: Apple's mobile operating system for iPhones.
Linux: Linux-based operating systems other than Android.
The evaluation methods used for evaluating the TOE are a combination of the work units
defined in [CEM] as well as the Evaluation Activities for ensuring that individual SFRs
and SARs have a sufficient level of supporting evidence in the Security Target and guidance
documentation and have been sufficiently tested by the laboratory as part of completing
ATE_IND.1. Any functional packages this PP claims similarly contain their own Evaluation
Activities that are used in this same manner.
CC Conformance Claims
This PP is conformant to
Part 2 (extended) and
Part 3 (extended)
of Common Criteria CC:2022, Revision 1.
PP Claim
This PP does not claim conformance to
any Protection Profile.
The following PPs and PP-Modules are allowed to be specified in a
PP-Configuration with this PP.
Protection Profile for Mobile Device Management Version 4.0
This PP is
Functional Package for TLS Version 1.1 conformant.
This PP is
Functional Package for TLS Version 2.0 conformant.
This PP is
Functional Package for SSH Version 1.0 conformant.
The functional packages to which the PP conforms may include SFRs that are not mandatory
to claim for the sake of conformance. An ST that claims one or more of these functional
packages may include any non-mandatory SFRs that are appropriate to claim based on the
capabilities of the TSF and on any triggers for their inclusion based on the SFR
selections made.
Evaluation Methods
This PP incorporates evaluation activities
from the following Evaluation Methods documents:
All security requirements in these claimed functional packages are intended to satisfy
the O.PROTECTED_COMMS TOE security objective of this PP.
3 Security Problem Definition
The security problem is described in terms of the threats that the OS is expected to address,
assumptions about the operational environment, and any organizational security policies that the OS
is expected to enforce.
3.1 Threats
T.NETWORK_ATTACK
An attacker is positioned on a communications channel or elsewhere on the
network infrastructure. Attackers may engage in communications with applications and
services running on or part of the OS with the intent of compromise. Engagement may
consist of altering existing legitimate communications.
T.NETWORK_EAVESDROP
An attacker is positioned on a communications channel or elsewhere on the
network infrastructure. Attackers may monitor and gain access to data exchanged between
applications and services that are running on or part of the OS.
3.2 Assumptions
A.PLATFORM
The OS relies upon a trustworthy computing platform for
its execution. This underlying platform is out of scope of this PP.
A.PROPER_USER
The user of the OS is not willfully negligent or hostile, and uses the
software in compliance with the applied enterprise security policy. At the same time,
malicious software could act as the user, so requirements which
confine malicious subjects are still in scope.
3.3 Organizational Security Policies
P.ENTERPRISE
If the OS is bound to a directory or management server, the configuration of
the OS software must be capable of adhering to the enterprise security policies
distributed by them.
4 Security Objectives
4.1 Security Objectives for the TOE
O.ACCOUNTABILITY
Conformant OSes ensure that information exists that allows
administrators to discover unintentional issues with the configuration and operation of
the operating system and discover its cause. Gathering event information and immediately
transmitting it to another system can also enable incident response in the event
of system compromise.
O.INTEGRITY
Conformant OSes ensure the integrity of their update
packages. OSes are seldom if ever shipped without errors, and the
ability to deploy patches and updates with integrity is critical to enterprise network
security. Conformant OSes provide execution environment-based
mitigations that increase the cost to attackers by adding complexity to the task of
compromising systems.
O.MANAGEMENT
To facilitate management by users and the enterprise, conformant OSes
provide consistent and supported interfaces for their
security-relevant configuration and maintenance. This includes the deployment of
applications and application updates through the use of platform-supported deployment
mechanisms and formats, as well as providing mechanisms for configuration and
application execution control.
4.2 Security Objectives for the Operational Environment
OE.PLATFORM
The OS relies on being installed on trusted
hardware.
OE.PROPER_USER
The user of the OS is not willfully negligent or hostile,
and uses the software within compliance of the applied enterprise security policy.
Standard user accounts are provisioned in accordance with the least privilege model.
Users requiring higher levels of access should have a separate account dedicated for
that use.
OE.PROPER_ADMIN
The administrator of the OS is not careless, willfully
negligent or hostile, and administers the OS within compliance of the applied enterprise
security policy.
4.3 Security Objectives Rationale
This section describes how the assumptions, threats, and organizational
security policies map to the security objectives.
The threat T.NETWORK_EAVESDROP is countered by O.MANAGEMENT as this provides
for the ability to configure the OS to protect the confidentiality of its transmitted
data.
The organizational security policy P.ENTERPRISE is enforced through the
objective O.MANAGEMENT as this objective represents how the enterprise and user assert
management over the OS.
5 Security Requirements
This chapter describes the security requirements which have to be fulfilled by the product under evaluation.
Those requirements comprise functional components from Part 2 and assurance components from Part 3 of
[CC].
The following conventions are used for the completion of operations:
Refinement operation (denoted by bold text or strikethrough
text): Is used to add details to a requirement or to remove part of the requirement that is made irrelevant
through the completion of another operation, and thus further restricts a requirement.
Selection (denoted by italicized text): Is used to select one or more options
provided by the [CC] in stating a requirement.
Assignment operation (denoted by italicized text): Is used to assign a
specific value to an unspecified parameter, such as the length of a password. Showing the
value in square brackets indicates assignment.
Iteration operation: Is indicated by appending the SFR name with a slash and unique identifier
suggesting the purpose of the operation, e.g. "/EXAMPLE1."
5.1 Security Functional Requirements
5.1.1 Auditable Events for Mandatory SFRs
Table 2: Auditable Events for Mandatory Requirements
Guidance
What the evaluator has to do with the TOE guidance documentation.
KMD
What the evaluator has to do with the Key Management Document (KMD).
Tests
The tests that the evaluator has to run to verify that the requirement is met.
5.1.3 TOE Security Functional Requirements Rationale
The following rationale provides justification for each security objective for the TOE,
showing that the SFRs are suitable to meet and achieve the security objectives:
5.2 Security Assurance Requirements
The Security Objectives in Section 4 Security Objectives were constructed
to address threats identified in
Section 3.1 Threats. The Security Functional Requirements (SFRs)
in Section 5.1 Security Functional Requirements are a formal instantiation of the Security Objectives. The PP
identifies the Security Assurance Requirements (SARs) to frame the extent to
which the evaluator assesses the documentation applicable for the evaluation and performs
independent testing. This section lists the set of SARs from CC Part 3
that are required in evaluations against this PP. Individual Assurance Activities
to be performed are specified both in Section 5.1 Security Functional Requirements
as well as in this section.
The general model for evaluation of OSs against STs written to conform to this PP is as follows:
After the ST has been approved for evaluation, the ITSEF will obtain the
OS, supporting environmental IT, and the administrative/user guides for
the OS. The ITSEF is expected to perform actions mandated by the Common Evaluation
Methodology (CEM) for the ASE and ALC SARs. The ITSEF also performs the Assurance Activities
contained within Section 5.1, which are intended to be an interpretation of the
other CEM assurance requirements as they apply to the specific technology instantiated in the
OS. The Assurance Activities that are captured in Section 5.1 also provide
clarification as to what the developer needs to provide to demonstrate the OS is compliant
with the PP.
The information about the OS is contained in the guidance documentation available to the end user as
well as the TSS portion of the ST. The OS developer must concur with the description of the product that is
contained in the TSS as it relates to the functional requirements. The Assurance Activities
contained in Section 5.1 should provide the ST authors with
sufficient information to determine the appropriate content for the TSS section.
5.2.2 Class ADV: Development
The
functional specification describes the TSFIs. It is not
necessary to have a formal or complete specification of these interfaces. Additionally,
because OSs conforming to this PP will necessarily have interfaces to
the Operational Environment that are not directly invokable by OS
users, there is little point specifying that such interfaces be described in and of
themselves since only indirect testing of such interfaces may be possible. For this PP,
the activities for this family should focus on understanding the interfaces presented in
the TSS in response to the functional requirements and the interfaces
presented in the AGD documentation. No additional “functional specification” documentation
is necessary to satisfy the assurance activities specified. The interfaces that need to be
evaluated are characterized through the information needed to perform the assurance
activities listed, rather than as an independent, abstract list.
The developer shall provide a tracing from the functional specification to the
SFRs.
Application Note:
As indicated in the introduction to this section, the
functional specification is comprised of the information contained in the AGD_OPE and
AGD_PRE documentation. The developer may reference a website accessible to application
developers and the evaluator. The assurance activities in the functional requirements
point to evidence that should exist in the documentation and TSS
section; since these are directly associated with the SFRs, the tracing in element
ADV_FSP.1.2D is implicitly already done and no additional documentation is
necessary.
There are no specific assurance activities associated with these SARs, except
ensuring the information is provided. The functional specification documentation is
provided to support the evaluation activities described in Section 5.1, and
other activities described for AGD, ATE, and AVA SARs. The requirements on the content
of the functional specification information is implicitly assessed by virtue of the
other assurance activities being performed; if the evaluator is unable to perform an
activity because there is insufficient interface information, then an adequate
functional specification has not been provided.
5.2.3 Class AGD: Guidance Documentation
The guidance documents will be
provided with the ST. Guidance must include a description of how IT
personnel verify that the Operational Environment can fulfill its role for the security
functionality. The documentation should be in an informal style and readable by IT
personnel. Guidance must be provided for every operational environment that the product
supports as claimed in the ST. This guidance includes instructions to
successfully install the TSF in that environment and instructions to
manage the security of the TSF as a product and as a component of the
larger operational environment. Guidance pertaining to particular security functionality is
also provided; requirements on such guidance are contained in the assurance activities
specified with each requirement.
The developer shall provide operational user guidance.
Application Note:
The operational user guidance does not have to be contained in a
single document. Guidance to users, administrators and application developers can be
spread among documents or web pages.
Rather than repeat information here, the developer should
review the assurance activities for this component to ascertain the specifics of the
guidance that the evaluator will be checking for. This will provide the necessary
information for the preparation of acceptable guidance.
The operational user guidance shall describe, for each user role, the
user-accessible functions and privileges that should be controlled in a secure
processing environment, including appropriate warnings.
Application Note:
User and administrator are to be considered in the definition
of user role.
The operational user guidance shall describe, for each user role, the available
functions and interfaces, in particular all security parameters under the control of
the user, indicating secure values as appropriate.
Application Note:
This portion of the operational user guidance should be presented
in the form of a checklist that can be quickly executed by IT personnel (or end-users,
when necessary) and suitable for use in compliance activities.
When possible, this guidance is to be expressed in the eXtensible Configuration
Checklist Description Format (XCCDF) to
support security automation.
Minimally, it should be presented in a structured
format which includes a title for each configuration item,
instructions for achieving the secure configuration, and any relevant rationale.
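A hypothetical checklist entry in that minimal structured form (item names and values invented for illustration; real guidance would express this in XCCDF for security automation) might look like:

```python
# Hypothetical configuration-checklist item carrying the three fields
# the guidance requires: a title, instructions for achieving the
# secure configuration, and a rationale.
checklist_item = {
    "title": "Disable remote root login",
    "instructions": "Set 'PermitRootLogin no' in /etc/ssh/sshd_config "
                    "and restart the SSH service.",
    "rationale": "Limits remote administrative access to auditable, "
                 "named accounts.",
}
for field in ("title", "instructions", "rationale"):
    print(f"{field}: {checklist_item[field]}")
```

Structuring each item this way lets IT personnel execute the checklist quickly and lets compliance tooling consume the same content.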
The operational user guidance shall, for each user role, clearly present each
type of security-relevant event relative to the user-accessible functions that need to
be performed, including changing the security characteristics of entities under the
control of the TSF.
The operational user guidance shall identify all possible modes of operation of
the OS (including operation following failure or operational
error), their consequences, and implications for maintaining secure operation.
The operational user guidance shall, for each user role, describe the security
measures to be followed in order to fulfill the security objectives for the
operational environment as described in the ST.
Some of the contents of the operational guidance are verified by the
assurance activities in Section 5.1 and by evaluation of the OS according to the [CEM]. The following additional
information is also required. If cryptographic functions are provided by the OS, the operational guidance shall contain instructions for configuring
the cryptographic engine associated with the evaluated configuration of the OS. It shall provide a warning to the administrator that use of other
cryptographic engines was neither evaluated nor tested during the CC evaluation of the
OS. The documentation must describe the process for verifying
updates to the OS by verifying a digital signature; this may be
done by the OS or the underlying platform. The evaluator will
verify that this process includes the following steps: instructions for obtaining the
update itself, including instructions for making the update accessible to
the OS (e.g., placement in a specific directory); and instructions for
initiating the update process, as well as discerning whether the process was
successful or unsuccessful. This includes generation of the hash/digital signature.
The OS will likely contain security functionality that does not
fall in the scope of evaluation under this PP. The operational guidance shall make it
clear to an administrator which security functionality is covered by the evaluation
activities.
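As a simplified, hypothetical sketch of the verification step (a conformant OS would verify a full digital signature over the package, not merely a digest; all names here are illustrative):

```python
import hashlib

def digest_matches(package: bytes, expected_hex: str) -> bool:
    """Compare the SHA-256 digest of an update package against a
    published reference value; reject the update on any mismatch."""
    return hashlib.sha256(package).hexdigest() == expected_hex

# Illustrative update payload and its reference digest.
pkg = b"update-payload"
good = hashlib.sha256(pkg).hexdigest()

print(digest_matches(pkg, good))        # matching digest: accept
print(digest_matches(pkg, "00" * 32))   # mismatch: reject the update
```

The operational guidance would then tell the administrator how the OS (or platform) performs the equivalent signature check and how success or failure is reported.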
The developer shall provide the OS, including its preparative
procedures.
Application Note:
As with the operational guidance, the developer should look to
the assurance activities to determine the required content with respect to preparative
procedures.
The preparative procedures shall describe all the steps necessary for secure
acceptance of the delivered OS in accordance with the developer's
delivery procedures.
The preparative procedures shall describe all the steps necessary for secure
installation of the OS and for the secure preparation of the
operational environment in accordance with the security objectives for the operational
environment as described in the ST.
As indicated in the introduction above, there are significant expectations
with respect to the documentation—especially when configuring the operational
environment to support OS functional requirements. The evaluator
shall check to ensure that the guidance provided for the OS
adequately addresses all platforms claimed for the OS in the ST.
5.2.4 Class ALC: Life-cycle Support
At the assurance level provided
for OSs conformant to this PP, life-cycle support is limited to end-user-visible aspects of
the life-cycle, rather than an examination of the OS vendor’s development and configuration
management process. This is not meant to diminish the critical role that a developer’s
practices play in contributing to the overall trustworthiness of a product; rather, it is a
reflection on the information to be made available for evaluation at this assurance level.
ALC_CMC.1 Labeling of the TOE (ALC_CMC.1)
This component is
targeted at identifying the OS such that it can be distinguished from
other products or versions from the same vendor and can be easily specified when being
procured by an end user.
The evaluator will check the ST to ensure that it contains
an identifier (such as a product name/version number) that specifically identifies the
version that meets the requirements of the ST. Further, the
evaluator will check the AGD guidance and OS samples received for
testing to ensure that the version number is consistent with that in the ST. If the vendor maintains a web site advertising the OS, the evaluator will examine the information on the web site to
ensure that the information in the ST is sufficient to distinguish
the product.
ALC_CMS.1 TOE CM Coverage (ALC_CMS.1)
Given the scope of the OS and its associated evaluation
evidence requirements, this component’s assurance activities are covered
by the assurance activities listed for ALC_CMC.1.
The "evaluation evidence required by the SARs" in this PP is limited to the
information in the ST coupled with the guidance provided to
administrators and users under the AGD requirements. By ensuring that the OS is specifically identified and that this identification is
consistent in the ST and in the AGD guidance (as done in the
assurance activity for ALC_CMC.1), the evaluator implicitly confirms the information
required by this component. Life-cycle support is targeted at aspects of the developer’s
life-cycle and instructions to providers of applications for the developer’s devices,
rather than an in-depth examination of the TSF manufacturer’s
development and configuration management process. This is not meant to diminish the
critical role that a developer’s practices play in contributing to the overall
trustworthiness of a product; rather, it’s a reflection on the information to be made
available for evaluation. The evaluator will ensure that the developer has
identified (in guidance documentation for application developers concerning the
targeted platform) one or more development environments appropriate for use in
developing applications for the developer’s platform. For each of these development
environments, the developer shall provide information on how to configure the
environment to ensure that buffer overflow protection mechanisms in the environment(s)
are invoked (e.g., compiler and linker flags). The evaluator will ensure that this documentation
also includes an indication of whether such protections are on by default, or have to
be specifically enabled. The evaluator will ensure that the TSF is
uniquely identified (with respect to other products from the TSF
vendor), and that documentation provided by the developer in association with the
requirements in the ST is associated with the TSF
using this unique identification.
ALC_TSU_EXT.1 Timely Security Updates
This component requires the
OS developer, in conjunction with any other necessary parties, to provide information as
to how the end-user devices are updated to address security issues in a timely manner. The
documentation describes the process of providing updates to the public from the time a
security flaw is reported/discovered, to the time an update is released. This description
includes the parties involved (e.g., the developer, carrier(s)) and the steps that are
performed (e.g., developer testing, carrier testing), including worst case time periods,
before an update is made available to the public.
The developer shall provide a description in the TSS of how users are notified
when updates change security properties or the configuration of the product.
The description shall include the mechanisms publicly available for reporting
security issues pertaining to the OS.
Note:
The reporting mechanism could include web sites, email addresses, as well as a
means to protect the sensitive nature of the report (e.g., public keys that could be
used to encrypt the details of a proof-of-concept exploit).
The evaluator will verify that the TSS contains a description of the timely
security update process used by the developer to create and deploy security updates.
The evaluator will verify that this description addresses the entire OS. The
evaluator will also verify that, in addition to the OS developer’s process, any
third-party processes are also addressed in the description. The evaluator will also
verify that each mechanism for deployment of security updates is described. The
evaluator will verify that, for each deployment mechanism described for the update
process, the TSS lists a time between public disclosure of a vulnerability and public
availability of the security update to the OS patching this vulnerability, to include
any third-party or carrier delays in deployment. The evaluator will verify that this
time is expressed in a number or range of days. The evaluator will verify that
this description includes the publicly available mechanisms (including either an email
address or website) for reporting security issues related to the OS. The evaluator
shall verify that the description of this mechanism includes a method for protecting
the report either using a public key for encrypting email or a trusted channel for a
website.
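For instance, the TSS could express the window as a count of days between two dates (dates here are hypothetical, chosen only to illustrate the arithmetic):

```python
from datetime import date

# Hypothetical timeline illustrating how the update window is
# expressed as a number of days, as the TSS description requires.
disclosed = date(2024, 1, 10)   # public disclosure of the vulnerability
released = date(2024, 1, 31)    # public availability of the update
window_days = (released - disclosed).days
print(window_days)  # 21
```

A range (e.g., "14 to 30 days" across deployment mechanisms) satisfies the requirement equally, so long as each mechanism's worst case is stated.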
5.2.5 Class ATE: Tests
Testing is specified for functional aspects of
the system as well as aspects that take advantage of design or implementation weaknesses.
The former is done through the ATE_IND family, while the latter is through the AVA_VAN
family. At the assurance level specified in this PP, testing is based on advertised
functionality and interfaces with dependency on the availability of design information. One
of the primary outputs of the evaluation process is the test report as specified in the
following requirements.
Testing is performed to confirm the
functionality described in the TSS as well as the administrative
(including configuration and operational) documentation provided. The focus of the testing
is to confirm that the requirements specified in Section 5.1 are being met,
although some additional testing is specified for SARs in Section 5.2 Security Assurance Requirements. The
Assurance Activities identify the additional testing activities associated with these
components. The evaluator produces a test report documenting the plan for and results of
testing, as well as coverage arguments focused on the platform/OS
combinations that are claiming conformance to this PP.
The evaluator will prepare a test plan and report documenting the testing
aspects of the system, including any application crashes during testing. The evaluator
shall determine the root cause of any application crashes and include that information
in the report. The test plan covers all of the testing actions contained in the
[CEM] and the body of this PP’s Assurance Activities. While it is
not necessary to have one test case per test listed in an Assurance Activity, the
evaluator must document in the test plan that each applicable testing requirement in
the ST is covered. The test plan identifies the platforms to be
tested, and for those platforms not included in the test plan but included in the
ST, the test plan provides a justification for not testing the
platforms. This justification must address the differences between the tested
platforms and the untested platforms, and make an argument that the differences do not
affect the testing to be performed. It is not sufficient to merely assert that the
differences have no effect; rationale must be provided. If all platforms claimed in
the ST are tested, then no rationale is necessary. The test plan
describes the composition of each platform to be tested, and any setup that is
necessary beyond what is contained in the AGD documentation. It should be noted that
the evaluator is expected to follow the AGD documentation for installation and setup
of each platform either as part of a test or as a standard pre-test condition. This
may include special test drivers or tools. For each driver or tool, an argument (not
just an assertion) should be provided that the driver or tool will not adversely
affect the functionality performed by the OS and its
platform. This also includes the configuration of the cryptographic engine to be
used. The cryptographic algorithms implemented by this engine are those specified by
this PP and used by the cryptographic protocols being evaluated (IPsec, TLS). The test
plan identifies high-level test objectives as well as the test procedures to be
followed to achieve those objectives. These procedures include expected results.
The test report (which could just be an annotated version of the test plan) details
the activities that took place when the test procedures were executed, and includes
the actual results of the tests. This shall be a cumulative account, so if there was a
test run that resulted in a failure; a fix installed; and then a successful re-run of
the test, the report would show a “fail” and “pass” result (and the supporting
details), and not just the “pass” result.
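As an illustration of the cumulative-account requirement (the test identifier and result details below are hypothetical, not drawn from this PP), a report entry retains both the failed initial run and the passing re-run:

```python
# Hypothetical cumulative test record: the failed run is kept alongside
# the passing re-run rather than being overwritten by it.
report = [
    {"test": "FCS_TLSC_EXT.1-T1", "run": 1, "result": "fail",
     "detail": "handshake rejected; fix installed before re-run"},
    {"test": "FCS_TLSC_EXT.1-T1", "run": 2, "result": "pass",
     "detail": "handshake completed as expected"},
]

# Both outcomes remain visible in the report.
results = [entry["result"] for entry in report]
```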
5.2.6 Class AVA: Vulnerability Assessment
For the first generation of
this protection profile, the evaluation lab is expected to survey open sources to discover
what vulnerabilities have been discovered in these types of products. In most cases, these
vulnerabilities will require sophistication beyond that of a basic attacker. Until
penetration tools are created and uniformly distributed to the evaluation labs, the
evaluator will not be expected to test for these vulnerabilities in the OS. The labs will be expected to comment on the likelihood of these vulnerabilities given
the documentation provided by the vendor. This information will be used in the development
of penetration testing tools and for the development of future protection profiles.
The evaluator shall perform a search of public domain sources to identify
potential vulnerabilities in the OS.
Application Note:
Public domain sources include the Common Vulnerabilities
and Exposures (CVE) dictionary for publicly-known vulnerabilities. Public domain
sources also include sites which provide free checking of files for viruses.
The evaluator shall conduct penetration testing, based on the identified
potential vulnerabilities, to determine that the OS is resistant to
attacks performed by an attacker possessing Basic attack potential.
The evaluator will generate a report to document their
findings with respect to this requirement. This report could physically be part of the
overall test report mentioned in ATE_IND, or a separate document. The evaluator
performs a search of public information to find vulnerabilities that have been found
in similar products, with a particular focus on network protocols the OS
uses and document formats it parses.
The evaluator documents the sources consulted and
the vulnerabilities found in the report.
For each vulnerability found, the evaluator
either provides a rationale with respect to its non-applicability, or the evaluator
formulates a test (using the guidelines provided in ATE_IND) to confirm the
vulnerability, if suitable. Suitability is determined by assessing the attack vector
needed to take advantage of the vulnerability. If exploiting the vulnerability
requires expert skills and an electron microscope, for instance, then a test would not
be suitable and an appropriate justification would be formulated.
Appendix A - Optional Requirements
As indicated in the introduction to this PP, the baseline requirements (those that must be
performed by the TOE) are contained in the body of this PP.
This appendix contains three other types of optional requirements:
The first type, defined in Appendix A.1 Strictly Optional Requirements, are strictly optional requirements.
If the TOE meets any of these requirements the vendor is encouraged to claim the associated SFRs
in the ST, but doing so is not required in order to conform to this PP.
The second type, defined in Appendix A.2 Objective Requirements, are objective requirements. These describe security functionality that is not yet
widely available in commercial technology.
Objective requirements are not currently mandated by this PP, but will be mandated in
the future. Adoption by vendors is encouraged, but claiming these SFRs is not required in order to conform to this
PP.
The third type, defined in Appendix A.3 Implementation-dependent Requirements, are Implementation-dependent requirements.
If the TOE implements the product features associated with the listed SFRs, either the SFRs must be claimed
or the product features must be disabled in the evaluated configuration.
A.1 Strictly Optional Requirements
This PP does not define any
Strictly Optional requirements.
A.2 Objective Requirements
This PP does not define any
Objective requirements.
A.3 Implementation-dependent Requirements
This PP does not define any
Implementation-dependent requirements.
Appendix B - Selection-based Requirements
As indicated in the introduction to this PP,
the baseline requirements
(those that must be performed by the TOE or its underlying platform)
are contained in the body of this PP.
There are additional requirements based on selections in the body of
the PP:
if certain selections are made, then additional requirements below must be included.
B.1 Auditable Events for Selection-based Requirements
Table 4: Auditable Events for Selection-based Requirements
Guidance
What the evaluator must do with the TOE guidance documentation.
KMD
What the evaluator must do with the Key Management Document (KMD).
Tests
The tests that the evaluator must run to verify that the requirement is met.
Appendix C - Inherently Satisfied Requirements
This appendix lists requirements that should be considered satisfied by products
successfully evaluated against this Protection Profile.
However, these requirements are not featured explicitly as SFRs and should not be
included in the ST.
They are not included as standalone SFRs because it would
increase the time, cost, and complexity of evaluation. This approach is permitted
by [CC] Part 1, 8.2 Dependencies between components.
This information benefits systems engineering activities which call for inclusion of
particular security controls. Evaluation against the Protection Profile
provides evidence that these controls are present and have been evaluated.
Requirement
Rationale for Satisfaction
FIA_UAU.1 - Timing of authentication
FIA_AFL.1 implicitly requires that the OS perform all necessary actions,
including those on behalf of the user who has not been authenticated,
in order to authenticate;
therefore it is duplicative to include these actions as a
separate assignment and test.
FIA_UID.1 - Timing of identification
FIA_AFL.1 implicitly requires that the OS perform all necessary actions,
including those on behalf of the user who has not been identified,
in order to authenticate;
therefore it is duplicative to include these actions as a
separate assignment and test.
FMT_SMR.1 - Security roles
FMT_MOF_EXT.1 specifies role-based management functions that implicitly defines
user and privileged accounts;
therefore, it is duplicative to include separate role requirements.
FPT_STM.1 - Reliable time stamps
FAU_GEN.1.2 explicitly requires that the OS associate timestamps with audit records;
therefore it is duplicative to include a separate timestamp requirement.
FTA_SSL.1 - TSF-initiated session locking
FMT_MOF_EXT.1 defines requirements for managing session locking;
therefore, it is duplicative to include a separate session locking requirement.
FTA_SSL.2 - User-initiated locking
FMT_MOF_EXT.1 defines requirements for user-initiated session locking;
therefore, it is duplicative to include a separate session locking requirement.
FAU_STG.1 - Protected audit trail storage
FPT_ACF_EXT.1 defines a requirement to protect audit logs;
therefore, it is duplicative to include a separate audit trail protection requirement.
FAU_GEN.2 - User identity association
FAU_GEN.1.2 explicitly requires that the OS record
any user account associated with each event; therefore, it is duplicative
to include a separate requirement to associate a user account with each
event.
FAU_SAR.1 - Audit review
FPT_ACF_EXT.1.2 requires that audit logs (and other objects)
are protected from reading by unprivileged users; therefore, it is duplicative
to include a separate requirement to protect only the audit information.
Appendix D - Entropy Documentation and Assessment
This appendix describes the required supplementary information for the entropy
source used by the TOE.
The documentation of the entropy source should be detailed enough that, after
reading, the evaluator will thoroughly understand the entropy source and why
it can be relied upon to provide sufficient entropy. This documentation should
include multiple detailed sections: design description, entropy justification,
operating conditions, and health testing. This documentation is not required to
be part of the TSS.
D.1 Design Description
Documentation shall include the design of the entropy source as a whole,
including the interaction of all entropy source components. Any information
that can be shared regarding the design should also be included for any
third-party entropy sources that are included in the product.
The documentation will describe the operation of the entropy source, including
how entropy is produced and how unprocessed (raw) data can be
obtained from within the entropy source for testing purposes. The documentation
should walk through the entropy source design indicating where the entropy
comes from, where the entropy output is passed next, any post-processing
of the raw outputs (hash, XOR, etc.), if/where it is stored, and finally,
how it is output from the entropy source. Any conditions placed on the
process (e.g., blocking) should also be described in the entropy source
design. Diagrams and examples are encouraged.
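For instance, a design description might document a conditioning step such as the following sketch, which hashes concatenated raw noise-source blocks before they enter the entropy pool (the SHA-256 choice and the block handling are illustrative assumptions, not a required design):

```python
import hashlib

def condition_raw_output(raw_blocks):
    """Illustrative post-processing step: condition raw noise-source
    output by hashing it before it enters the entropy pool.
    The hash choice (SHA-256) is an example, not a requirement."""
    h = hashlib.sha256()
    for block in raw_blocks:
        h.update(block)
    return h.digest()  # 32 conditioned bytes
```

A real design description would also state where this output is stored and under what conditions (e.g., blocking) it is released.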
This design must also include a description of the content of the
security boundary of the entropy source and a description of how
the security boundary ensures that an adversary outside the boundary
cannot affect the entropy rate.
If implemented, the design description shall include a description
of how third-party applications can add entropy to the RBG. A
description of any RBG state saving between power-off and
power-on shall be included.
D.2 Entropy Justification
There should be a technical argument for where the unpredictability in
the source comes from and why there is confidence in the entropy source
delivering sufficient entropy for the uses made of the RBG output
(by this particular TOE). This argument will include a description of
the expected min-entropy rate (i.e. the minimum entropy (in bits) per
bit or byte of source data) and explain that sufficient entropy is
going into the TOE randomizer seeding process. This discussion will
be part of a justification for why the entropy source can be relied
upon to produce bits with entropy.
The amount of information necessary to justify the expected
min-entropy rate depends on the type of entropy source included in the
product.
For developer provided entropy sources, in order to justify the
min-entropy rate, it is expected that a large number of raw source
bits will be collected, statistical tests will be performed, and the
min-entropy rate determined from the statistical tests. While no
particular statistical tests are required at this time, it is expected
that some testing is necessary in order to determine the amount of
min-entropy in each output.
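As a sketch of the kind of analysis involved, the following is a simplified version of the most-common-value estimator from NIST SP 800-90B; the standard additionally applies an upper confidence bound to the observed proportion, which this sketch omits:

```python
import math
from collections import Counter

def mcv_min_entropy(samples):
    """Simplified most-common-value min-entropy estimate, in bits per
    sample: -log2 of the observed proportion of the most common value.
    SP 800-90B applies a confidence-bound adjustment on top of this."""
    p_max = max(Counter(samples).values()) / len(samples)
    return -math.log2(p_max)
```

A uniform 8-bit source approaches 8 bits per sample under this estimate; any bias in the raw data lowers it.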
For third party provided entropy sources, in which the TOE vendor
has limited access to the design and raw entropy data of the source, the
documentation will indicate an estimate of the amount of min-entropy
obtained from this third-party source. It is acceptable for the vendor
to “assume” an amount of min-entropy; however, this assumption must be
clearly stated in the documentation provided. In particular, the
min-entropy estimate must be specified and the assumption included
in the ST.
Regardless of type of entropy source, the justification will also
include how the DRBG is initialized with the entropy stated in the ST,
for example by verifying that the min-entropy rate is multiplied by the
amount of source data used to seed the DRBG or that the rate of entropy
expected based on the amount of source data is explicitly stated and
compared to the statistical rate. If the amount of source data used to
seed the DRBG is not clear or the calculated rate is not explicitly
related to the seed, the documentation will not be considered complete.
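The arithmetic being checked can be sketched as follows; the 0.5 bits-per-byte rate and the 256-bit security strength are illustrative numbers, not values drawn from this PP:

```python
def seed_is_sufficient(min_entropy_per_byte, seed_len_bytes,
                       security_strength_bits):
    """Check that the entropy credited to a DRBG seed -- the assessed
    min-entropy rate multiplied by the amount of source data -- meets
    the DRBG's required security strength. Values are illustrative."""
    credited_bits = min_entropy_per_byte * seed_len_bytes
    return credited_bits >= security_strength_bits

# A source assessed at 0.5 bits/byte needs at least 512 bytes of
# source data to seed a DRBG at a 256-bit security strength.
```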
The entropy justification shall not include any data added from
any third-party application or from any state saving between restarts.
D.3 Operating Conditions
The entropy rate may be affected by conditions outside the control
of the entropy source itself. For example, voltage, frequency,
temperature, and elapsed time after power-on are just a few of the
factors that may affect the operation of the entropy source.
As such, documentation will also include the range of operating conditions
under which the entropy source is expected to generate random data.
It will clearly describe the measures that have been taken in the
system design to ensure the entropy source continues to operate
under those conditions. Similarly, documentation shall describe
the conditions under which the entropy source is known to malfunction
or become inconsistent. Methods used to detect failure or degradation
of the source shall be included.
D.4 Health Testing
All entropy source health tests and their rationale shall be
documented. This will include a description of the health tests,
the rate and conditions under which each health test is performed
(e.g., at startup, continuously, or on-demand), the expected results
for each health test, and rationale indicating why each test is
believed to be appropriate for detecting one or more failures in the
entropy source.
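One common continuous health test is the Repetition Count Test of NIST SP 800-90B; the sketch below is simplified, and in practice the cutoff would be derived from the source's assessed min-entropy and a target false-alarm probability:

```python
def repetition_count_test(samples, cutoff):
    """Fail (return False) if any sample value repeats `cutoff` or more
    times in a row -- a sign the noise source may be stuck. Sketch of
    the SP 800-90B Repetition Count Test; the derivation of `cutoff`
    from the assessed min-entropy is omitted here."""
    run, prev = 0, object()  # sentinel never equal to a real sample
    for s in samples:
        run = run + 1 if s == prev else 1
        prev = s
        if run >= cutoff:
            return False  # health test failure
    return True
```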
Appendix E - Application Software Equivalency Guidelines
E.1 Introduction
The purpose of equivalence in PP-based evaluations is to find a balance between evaluation rigor and commercial practicability—to
ensure that evaluations meet customer expectations while recognizing that there is little to be gained from requiring that every
variation in a product or platform be fully tested. If a product is found to be compliant with a PP on one platform, then all
equivalent products on equivalent platforms are also considered to be compliant with the PP.
A Vendor can make a claim of equivalence if the Vendor believes that a particular instance of their Product implements PP-specified
security functionality in a way equivalent to the implementation of the same functionality on another instance of their Product on
which the functionality was tested. The Product instances can differ in version number or feature level (model), or the instances may
run on different platforms. Equivalency can be used to reduce the testing required across claimed evaluated configurations. It can
also be used during Assurance Maintenance to reduce testing needed to add more evaluated configurations to a certification.
These equivalency guidelines do not replace Assurance Maintenance requirements or NIAP Policy #5 requirements for CAVP certificates.
Nor may equivalency be used to leverage evaluations with expired certifications.
These Equivalency Guidelines represent a shift from complete testing of all product instances to more of a risk-based approach.
Rather than require that every combination of product and platform be tested, these guidelines support an approach that recognizes
that products are being used in a variety of environments—often cloud environments where the vendor (and sometimes the
customer) has little or no control over the underlying hardware. Developers should be responsible for the security functionality of
their applications on the platforms they are developed for—whether that is an operating system, a virtual machine, or a software-based
execution environment such as a container. But those platforms may themselves run within other environments—virtual machines or
operating systems—that completely abstract away the underlying hardware from the application. The developer should not be held
accountable for security functionality that is implemented by platform layers that are abstracted away. The implication is that
not all security functionality will necessarily be tested for all platform layers down to the hardware for all evaluated
configurations—especially for applications developed for software-based execution environments such as containers. For these cases,
the balancing of evaluation rigor and commercial practicability tips in favor of practicability. Note that this does not affect
the requirement that at least one product instance be fully tested on at least one platform with cryptography mapped to a CAVP
certificate.
Equivalency has two aspects:
Product Equivalence: Products may be considered equivalent if there are no
differences between Product Models and Product Versions with respect to PP-specified security functionality.
Platform Equivalence: Platforms may be considered equivalent if there are no
significant differences in the services they provide to the Product—or in the way the platforms
provide those services—with respect to PP-specified security functionality.
The equivalency determination is made in accordance with these guidelines by the Validator and Scheme using information provided by the Evaluator/Vendor.
E.2 Approach to Equivalency Analysis
There are two scenarios for performing equivalency analysis. One is when a product has been certified and the vendor
wants to show that a later product should be considered certified due to equivalence with the earlier product. The
other is when multiple product variants are going through evaluation together and the vendor would like to reduce
the amount of testing that must be done. The basic rules for determining equivalence are the same in both cases.
But there is one additional consideration that applies to equivalence with previously certified products. That is,
the product with which equivalence is being claimed must have a valid certification in accordance with scheme rules
and the Assurance Maintenance process must be followed. If a product’s certification has expired, then equivalence
cannot be claimed with that product.
When performing equivalency analysis, the Evaluator/Vendor should first use the factors and guidelines for Product
Model equivalence to determine the set of Product Models to be evaluated. In general, Product Models that do not differ
in PP-specified security functionality are considered equivalent for purposes of evaluation against the AppPP.
If multiple revision levels of Product Models are to be evaluated—or to determine whether a revision of an evaluated
product needs re-evaluation—the Evaluator/Vendor and Validator should use the factors and guidelines for Product
Version equivalence to analyze whether Product Versions are equivalent.
Having determined the set of Product Models and Versions to be evaluated, the next step is to determine the set of
Platforms that the Products must be tested on.
Each non-equivalent Product for which compliance is claimed must be fully tested on each non-equivalent platform
for which compliance is claimed. For non-equivalent Products on equivalent platforms, only the differences that
affect PP-specified security functionality must be tested for each product.
“Differences in PP-Specified Security Functionality” Defined
If PP-specified security functionality is implemented by the TOE, then differences in the actual implementation
between versions or product models break equivalence for that feature. Likewise, if the TOE implements the
functionality in one version or model and the functionality is implemented by the platform in another version
or model, then equivalence is broken. If the functionality is implemented by the platform in multiple models or
versions on equivalent platforms, then the functionality is considered different if the product invokes the platform
differently to perform the function.
E.3 Specific Guidance for Determining Product Model Equivalence
Product Model equivalence attempts to determine whether different feature levels of the same product across
a product line are equivalent for purposes of PP testing. For example, if a product has a “basic” edition and an “enterprise”
edition, is it necessary to test both models? Or does testing one model provide sufficient assurance that both models
are compliant?
Product models are considered equivalent if there are no differences that affect PP-specified security
functionality—as indicated in Table 1.
Same
If the differences between Models affect only non-PP-specified functionality, then the Models are equivalent.
Different
If PP-specified security functionality is affected by the differences between Models,
then the Models are not equivalent and must be tested separately. It is necessary only to test the functionality
affected by the software differences. If only differences are tested, then the differences must be enumerated,
and for each difference the Vendor must provide an explanation of why each difference does or does not affect
PP-specified functionality. If the Product Models are separately tested fully, then there is no need to document the differences.
Table 1. Determining Product Model Equivalence
E.4 Specific Guidance for Determining Product Version Equivalence
In cases of version equivalence, differences are expressed in terms of changes implemented in revisions
of an evaluated Product. In general, versions are equivalent if the changes have no effect on any
security-relevant claims about the TOE or assurance evidence. Non-security-relevant changes to TOE
functionality, or the addition of non-security-relevant functionality, do not affect equivalence.
Factor
Same/Different
Guidance
Product Models
Different
Versions of different Product Models are not equivalent unless the Models are equivalent as defined in Section E.3.
Same
If the differences affect only non-PP-specified functionality, then the Versions are equivalent.
Different
If PP-specified security functionality is affected by the differences, then the
Versions are not considered equivalent and must be tested separately. It is necessary only to test
the functionality affected by the changes. If only the differences are tested, then for each
difference the Vendor must provide an explanation of why the difference does or does not affect
PP-specified functionality. If the Product Versions are separately tested fully, then there is
no need to document the differences.
Table 2. Factors for Determining Product Version Equivalence
E.5 Specific Guidance for Determining Platform Equivalence
Platform equivalence is used to determine the platforms that equivalent versions of a Product must be tested on.
Platform equivalence analysis done for one software application cannot be applied to another software application.
Platform equivalence is not general—it is with respect to a particular application.
Product equivalency analysis must already have been performed and the Products determined to be equivalent.
The platform can be hardware or virtual hardware, an operating system or similar entity, or a software execution
environment such as a container. For purposes of determining equivalence for software applications, we address each
type of platform separately. In general, platform equivalence is based on differences in the interfaces between the
TOE and Platform that are relevant to the implementation of PP-specified security functionality.
E.5.1 Platform Equivalence—Hardware/Virtual Hardware Platforms
If an application runs directly on hardware without an operating system—or directly on virtualized
hardware without an operating system—then platform equivalence is based on processor architecture and
instruction sets. In the case of virtualized hardware, it is the virtualized processor and architecture
that are presented to the application that matters—not the physical hardware.
Platforms with different processor architectures and instruction sets are not equivalent. This is not
likely to be an issue for equivalency analysis for applications since there is likely to be a different
version of the application for different hardware environments.
Equivalency analysis becomes important when comparing processors with the same architecture. Processors
with the same architecture that have instruction sets that are subsets or supersets of each other are not
disqualified from being equivalent for purposes of an App evaluation. If the application takes the same
code paths when executing PP-specified security functionality on different processors of the same family,
then the processors can be considered equivalent with respect to that application.
For example, if an application follows one code path on platforms that support the AES-NI instruction
and another on platforms that do not, then those two platforms are not equivalent with respect to that
application functionality. But if the application follows the same code path whether or not the platform
supports AES-NI, then the platforms are equivalent with respect to that functionality.
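The decision rule can be illustrated abstractly as follows; the platform names and code-path labels are hypothetical stand-ins for what an evaluator would actually record:

```python
def platforms_equivalent_for_function(code_path_by_platform):
    """Platforms are equivalent with respect to one PP-specified
    function if the application executes the same code path on every
    platform. The mapping is a hypothetical evaluator summary."""
    return len(set(code_path_by_platform.values())) == 1

# An app that dispatches on AES-NI support takes different paths:
dispatching = {"cpu_with_aesni": "aes_hw", "cpu_without_aesni": "aes_sw"}
# One that always uses its software AES path does not:
uniform = {"cpu_with_aesni": "aes_sw", "cpu_without_aesni": "aes_sw"}
```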
The platforms are equivalent with respect to the application if the platforms are equivalent with respect to all PP-specified
security functionality.
Factor
Same/Different/None
Guidance
Platform Architectures
Different
Platforms that present different processor architectures and instruction sets to the application are not equivalent.
Platform Architectures
Same
For platforms with the same processor architecture, the platforms are equivalent with
respect to the application if execution of all PP-specified security functionality follows the same code path on both platforms.
Table 3. Factors for Determining Hardware/Virtual Hardware Platform Equivalence
E.5.2 Platform Equivalence—OS Platforms
For traditional applications that are built for and run on operating systems, platform equivalence is
determined by the interfaces between the application and the operating system that are relevant to PP-specified
security functionality. Generally, these are the processor interface, device interfaces, and OS APIs. The following
factors are applied in order:
Factor
Same/Different/None
Guidance
Platform Architectures
Different
Platforms that run on different processor architectures and instruction sets are not equivalent.
Platform Vendors
Different
Platforms from different vendors are not equivalent.
Platform Versions
Different
Platforms from the same vendor with different major version numbers are not equivalent.
Platform Interfaces
Different
Platforms from the same vendor and major version are not equivalent if there are
differences in device interfaces and OS APIs that are relevant to the way the platform provides PP-specified
security functionality to the application.
Platform Interfaces
Same
Platforms from the same vendor and major version are equivalent if there are
no differences in device interfaces and OS APIs that are relevant to the way the platform
provides PP-specified security functionality to the application, or if the Platform does
not provide such functionality to the application.
Table 4. Factors for Determining OS/VS Platform Equivalence
E.5.3 Platform Equivalence—Software-Based Execution Environments
If an Application is built for and runs in a non-OS software-based execution environment, such as a Container or
Java Runtime, then the below criteria must be used to determine platform equivalence. The key point is that the
underlying hardware (virtual or physical) and OS is not relevant to platform equivalence. This allows applications
to be tested and run on software-based execution environments on any hardware—as in cloud deployments.
Factor
Same/Different/None
Guidance
Platform Type/Vendor
Different
Software-based execution environments that are substantially different or come
from different vendors are not equivalent. For example, a Java virtual machine is not the same as a
container. A Docker container is not the same as a CoreOS container.
Platform Versions
Different
Execution environments that are otherwise equivalent are not equivalent if they have
different major version numbers.
Platform Interfaces
Same
All other things being equal, execution environments are equivalent if there is no
significant difference in the interfaces through which the environments provide PP-specified security
functionality to applications.
Table 5. Factors for Software-based Execution Environment Platform Equivalence
E.6 Level of Specificity for Tested Configurations and Claimed Equivalent Configurations
In order to make equivalency determinations, the vendor and evaluator must agree on the equivalency claims. They must
then provide the scheme with sufficient information about the TOE instances and platforms that were evaluated, and the
TOE instances and platforms that are claimed to be equivalent.
The ST must describe all configurations evaluated down to processor manufacturer, model number, and microarchitecture version.
The information regarding claimed equivalent configurations depends on the platform that the application was developed for and runs on.
Bare-Metal Applications
For applications that run without an operating system on bare-metal or virtual bare-metal, the claimed configuration must
describe the platform down to the specific processor manufacturer, model number, and microarchitecture version. The Vendor
must describe the differences in the TOE with respect to PP-specified security functionality and how the TOE functions
differently to leverage platform differences (e.g., instruction set extensions) in the tested configuration versus the
claimed equivalent configuration.
Traditional Applications
For applications that run with an operating system as their immediate platform, the claimed configuration must describe
the platform down to the specific operating system version. If the platform is a virtualization system, then the claimed
configuration must describe the platform down to the specific virtualization system version. The Vendor must describe the
differences in the TOE with respect to PP-specified security functionality and how the TOE functions differently to leverage
platform differences in the tested configuration versus the claimed equivalent configuration. Relevant platform differences
could include instruction sets, device interfaces, and OS APIs invoked by the TOE to implement PP-specified security
functionality.
Software-Based Execution Environments
For applications that run in a software-based execution environment such as a Java virtual machine or a Container, then
the claimed configuration must describe the platform down to the specific version of the software execution environment.
The Vendor must describe the differences in the TOE with respect to PP-specified security functionality and how the TOE
functions differently to leverage platform differences in the tested configuration versus the claimed equivalent
configuration.