INTERNET-DRAFT
Document: draft-ietf-ipo-carrier-requirements-01.txt            Yong Xue
Category: Informational                                         (Editor)
Expiration Date: September, 2002                          UUNET/WorldCom

                                                            Monica Lazer
                                                             John Strand
                                                          Jennifer Yates
                                                            Dongmei Wang
                                                                    AT&T

                                                        Ananth Nagarajan
                                                               Lynn Neir
                                                           Wesam Alanqar
                                                            Tammy Ferris
                                                                  Sprint

                                                      Hirokazu Ishimatsu
                                                  Japan Telecom Co., LTD

                                                           Steven Wright
                                                               Bellsouth

                                                           Olga Aparicio
                                                 Cable & Wireless Global
                                                            March, 2002.

                 Carrier Optical Services Requirements

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026. Internet-Drafts are
   working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or rendered obsolete by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   This Internet Draft describes a carrier's optical services framework
   and the major service requirements for automatic switched optical
   networks (ASON) from both an end-user's and an operator's
   perspective. Its focus is on the description of the service building
   blocks and the service-related control plane functional
   requirements. The management functions for the optical services and
   their underlying networks are beyond the scope of this document and
   will be addressed in a separate document.

   Table of Contents

   1. Introduction
    1.1 Justification
    1.2 Conventions used in this document
    1.3 Value Statement
    1.4 Scope of This Document
   2. Definitions and Abbreviations
   3. General Requirements
    3.1 Separation of Networking Functions
    3.2 Network and Service Scalability
    3.3 Transport Network Technology
    3.4 Service Building Blocks
   4. Service Model and Applications
    4.1 Service and Connection Types
    4.2 Examples of Common Service Models
   5. Network Reference Model
    5.1 Optical Networks and Subnetworks
    5.2 Network Interfaces
    5.3 Intra-Carrier Network Model
    5.4 Inter-Carrier Network Model
   6. Optical Service User Requirements
    6.1 Common Optical Services
    6.2 Optical Service Invocation
    6.3 Bundled Connection
    6.4 Levels of Transparency
    6.5 Optical Connection granularity
    6.6 Other Service Parameters and Requirements
   7. Optical Service Provider Requirements
    7.1 Access Methods to Optical Networks
    7.2 Dual Homing and Network Interconnections
    7.3 Inter-domain connectivity
    7.4 Bearer Interface Types
    7.5 Names and Address Management
    7.6 Policy-Based Service Management Framework
    7.7 Support of Hierarchical Routing and Signaling
   8. Control Plane Functional Requirements for Optical Services
    8.1 Control Plane Capabilities and Functions
    8.2 Signaling Network
    8.3 Control Plane Interface to Data Plane
    8.4 Management Plane Interface to Data Plane
    8.5 Control Plane Interface to Management Plane
    8.6 Control Plane Interconnection
   9. Requirements for Signaling, Routing and Discovery
    9.1 Requirements for information sharing over UNI, I-NNI and E-NNI
    9.2 Signaling Functions
    9.3 Routing Functions
    9.4 Requirements for path selection
    9.5 Automatic Discovery Functions
   10. Requirements for service and control plane resiliency
    10.1 Service resiliency
    10.2 Control plane resiliency
   11. Security Considerations
    11.1 Optical Network Security Concerns
    11.2 Service Access Control
   12. Acknowledgements

1. Introduction

   The next generation WDM-based optical transport network (OTN) will
   consist of optical cross-connects (OXC), DWDM optical line systems
   (OLS) and optical add-drop multiplexers (OADM) based on the
   architecture defined by ITU Rec. G.872 [G.872]. The OTN is bounded
   by a set of optical channel access points and has a layered
   structure consisting of optical channel, multiplex section and
   transmission section sub-layer networks. Optical networking
   encompasses the functionalities for the establishment, transmission,
   multiplexing and switching of optical connections carrying a wide
   range of user signals of varying formats and bit rates.

   The ultimate goal is to enhance the OTN with an intelligent optical
   layer control plane to dynamically provision network resources and
   to provide network survivability using ring and mesh-based
   protection and restoration techniques. The resulting intelligent
   networks are called automatic switched optical networks, or ASON
   [G.8080].

   The emerging and rapidly evolving ASON technologies [G.ASON] are
   aimed at providing optical networks with intelligent networking
   functions and capabilities in the control plane to enable wavelength
   switching, rapid optical connection provisioning and dynamic
   rerouting. The same technology will also be able to control the
   TDM-based SONET/SDH optical transport network as defined by ITU Rec.
   G.803 [G.803]. This new networking platform will create tremendous
   business opportunities for network operators and service providers
   to offer new services to the market.

1.1.  Justification

   The charter of the IPO WG calls for a document on "Carrier Optical
   Services Requirements" for IP/Optical networks. This document
   addresses that aspect of the IPO WG charter. Furthermore, this
   document was accepted as an IPO WG document by unanimous agreement
   at the IPO WG meeting held on March 19, 2001, in Minneapolis, MN,
   USA. It presents a carrier and end-user perspective on optical
   network services and requirements.

1.2.  Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

1.3. Value Statement

   By deploying ASON technology, a carrier expects to achieve the
   following benefits from both technical and business perspectives:

   - Rapid Circuit Provisioning: ASON technology will enable the dynamic
   end-to-end provisioning of the optical connections across the optical
   network by using standard routing and signaling protocols.

   - Enhanced Survivability: ASON technology will enable the network to
   dynamically reroute an optical connection in case of a failure using
   mesh-based network protection and restoration techniques, which
   greatly improves the cost-effectiveness compared to the current line
   and ring protection schemes in the SONET/SDH network.

   - Cost-Reduction: ASON networks will enable the carrier to better
   utilize the optical network, thus achieving significant unit cost
   reduction per Megabit due to the cost-effective nature of the optical
   transmission technology, simplified network architecture and reduced
   operation cost.

   - Service Flexibility: ASON technology will support provisioning of
   an assortment of existing and new services such as protocol and
   bit-rate independent transparent network services, and
   bandwidth-on-demand services.


   - Enhanced Interoperability: ASON technology will use a control
   plane utilizing industry and international standard architectures
   and protocols, which facilitates the interoperability of optical
   network equipment from different vendors.

   In addition, the introduction of a standards-based control plane
   offers the following potential benefits:

   - Reactive traffic engineering at optical layer that allows network
   resources to be dynamically allocated to traffic flow.

   - Reduce the need for service providers to develop new operational
   support systems software for the network control and new service
   provisioning on the optical network, thus speeding up the deployment
   of the optical network technology and reducing the software
   development and maintenance cost.

   - Potential development of a unified control plane that can be used
   for different transport technologies including OTN, SONET/SDH, ATM
   and PDH.


1.4.  Scope of This Document

   This IPO working group (WG) document is aimed at providing, from the carrier's perspective,
   a service framework and associated requirements in relation to the
   optical services to be offered in the next generation optical
   networking environment and their service control and management
   functions.  As such, this document concentrates on the requirements
   driving the work towards realization of ASON.  This document is
   intended to be protocol-neutral.

   Note: It is recognized by the carriers writing this document that
   some features and requirements are not supported by protocols being
   developed in the IETF. However, the purpose of this document is to
   specify generic carrier functional requirements.


   Every carrier's needs are different. The objective of this document
   is NOT to define some specific service models. Instead, some major
   service building blocks are identified that will enable the carriers
   to mix and match them in order to create the best service platform
   most suitable to their business model. These building blocks include
   generic service types, service enabling control mechanisms and
   service control and management functions. The ultimate goal is to
   provide the requirements to guide the control protocol developments
   within IETF in terms of IP over optical technology.

   In this document, we consider IP a major client to the optical
   network, but the same requirements and principles should be equally
   applicable to non-IP clients such as SONET/SDH, ATM, ITU G.709, etc.

2.  Definitions and Abbreviations

      AD      Administrative Domain
      AND     Automatic Neighbor Discovery
      ASD     Automatic Service Discovery
      ASON    Automatic Switched Optical Network
      ASTN    Automatic Switched Transport Network
      CAC     Connection Admission Control
      DCM     Distributed Connection Management
      E-NNI   Exterior NNI
      E-UNI   Exterior UNI
      IaDI    Intra-Domain Interface
      INC     Intra-Network Connection
      I-NNI   Interior NNI
      IrDI    Inter-Domain Interface
      I-UNI   Interior UNI
      IWF     Inter-Working Function
      NE      Network Element
      NNI     Node-to-Node Interface
      OCC     Optical Connection Controller
      OLS     Optical Line System
      OTN     Optical Transport Network
      PI      Physical Interface
      SLA     Service Level Agreement
      UNI     User-to-Network Interface

3. General Requirements

   In this section, a number of generic requirements related to the
   service control and management functions are discussed.


3.1. Separation of Networking Functions

   It makes logical sense to segregate the networking functions within
   each layer network into three logical functional network planes: control
   plane, data plane and management plane. They are responsible for
   providing network control functions, data transmission functions and
   network element management functions respectively.
   The crux of the ASON network is the networking intelligence that
   contains automatic routing, signaling and discovery functions to
   automate the network control functions.

   Control Plane: includes the functions related to networking control
   capabilities such as routing, signaling, and policy control, as well
   as resource and service discovery. These functions are automated.

   Data Plane (transport plane): includes the functions related to
   bearer channels and signal transmission.

   Management Plane: includes the functions related to the management
   functions of network element, networks and network resources and
   services. These functions are less automated as compared to control
   plane functions.

   Each plane consists of a set of interconnected functional or control
   entities, physical or logical, responsible for providing the
   networking or control functions defined for that network layer.

   The separation of the control plane from both the data and
   management planes is beneficial to the carriers in that it:

   - Allows equipment vendors to have a modular system design that will
   be more reliable and maintainable, thus reducing the overall systems
   ownership and operation cost.

   - Allows carriers to have the flexibility to choose a third-party
   vendor's control plane software system as the control plane solution
   for their switched optical networks.

   - Allows carriers to deploy a unified control plane and
   OSS/management systems to manage and control the different types of
   transport networks they own.

   - Allows carriers to use a separate control network specially
   designed and engineered for the control plane communications.

   The separation of control, management and user data traffic is
   required, and it shall accommodate separation at both the logical
   and the physical level.

   Note that this is in contrast to the IP network, where the control
   messages and user traffic are routed and switched based on the same
   network topology due to the associated in-band signaling nature of
   the IP network.


3.2.  Network and Service Scalability

   Although specific applications or networks may be on a small scale,
   the control plane protocol and functional capabilities shall not
   limit large-scale networks.

   In terms of the scale and complexity of the future optical network,
   the following assumptions can be made when considering the
   scalability and performance requirements of the optical control and
   management functions:

   - There may be up to hundreds of OXC nodes and the same order of
   magnitude of OADMs per carrier network.

   - There may be up to thousands of terminating ports/wavelengths per
   OXC node.

   - There may be up to hundreds of parallel fibers between a pair of
   OXC nodes.

   - There may be up to hundreds of wavelength channels transmitted on
   each fiber.

   In relation to the frequency and duration of the optical
   connections:

   - The expected end-to-end connection setup/teardown time should be
   in the order of seconds.

   - The expected connection holding times should be in the order of
   minutes or greater.

   - The expected number of connection attempts at the UNI should be in
   the order of hundreds.

   - There may be up to millions of simultaneous optical connections
   switched across a single carrier network. Note that even though
   automated rapid optical connection provisioning is required, the
   carriers expect the majority of provisioned circuits, at least in
   the short term, to have a long lifespan ranging from months to
   years.
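
   As a purely illustrative, non-normative sketch (in Python), the
   following back-of-the-envelope calculation multiplies out the orders
   of magnitude assumed above; the constants are examples only and are
   not requirements.

      # Illustrative only: rough upper bounds implied by the
      # scalability assumptions above. The constants stand in for the
      # "hundreds/thousands" orders of magnitude from the text.

      OXC_NODES = 300           # up to hundreds of OXC nodes
      PORTS_PER_OXC = 2000      # thousands of ports/wavelengths per OXC
      FIBERS_PER_LINK = 100     # hundreds of parallel fibers per pair
      CHANNELS_PER_FIBER = 160  # hundreds of channels per fiber

      # Worst-case number of terminating wavelengths the control plane
      # may have to track across one carrier network.
      total_ports = OXC_NODES * PORTS_PER_OXC

      # Worst-case channel count on a single inter-OXC link bundle.
      channels_per_link = FIBERS_PER_LINK * CHANNELS_PER_FIBER

      print(f"terminating ports network-wide: {total_ports:,}")
      print(f"wavelength channels per link:   {channels_per_link:,}")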

3.3. Transport Network Technology

   Optical services can be offered over different types of underlying
   optical transport technologies including both TDM-based SONET/SDH
   network and WDM-based OTN networks.

   For this document, standards-based transport technologies such as
   SONET/SDH as defined in ITU Rec. G.803 and OTN framing as defined in
   ITU Rec. G.709 shall be supported.

   Note that service characteristics such as bandwidth granularity and
   signal framing hierarchy will to a large degree be determined by the
   capabilities and constraints of the server layer network.

3.4.  Service Building Blocks

   The primary goal of this document is to identify a set of basic
   service building blocks that carriers can mix and match to create
   the service models best suited to their business needs.

   The service building blocks in view are comprised of a well-defined
   set of service capabilities and a basic set of service control and
   management functions, which offer a basic set of services and
   additionally enable a carrier to define enhanced services through
   extensions and customizations. Examples of the building blocks
   include connection types, provisioning methods, control interfaces,
   policy control functions, and domain internetworking mechanisms.

4.  Service Model and Applications

   A carrier's optical network supports multiple types of service
   models. Each service model may have its own service operations,
   target markets, and service management requirements.

4.1.  Service and Connection Types

   The optical network primarily offers high-bandwidth connectivity in
   the form of connections, where a connection is defined to be a fixed
   bandwidth connection between two client network elements, such as IP
   routers or ATM switches, established across the optical network. A
   connection is also defined by its demarcation from the ingress
   access point, across the optical network, to the egress access point
   of the optical network.

   The following connection capability types must be supported:

   - Uni-directional point-to-point connection

   - Bi-directional point-to-point connection

   - Uni-directional point-to-multipoint connection

   For point-to-point connections, the following three types of network
   connections, based on different connection set-up control methods,
   shall be supported:

   - Permanent connection (PC): Established hop-by-hop directly on each
   ONE along a specified path without relying on the network routing
   and signaling capability. The connection has two fixed end-points
   and a fixed cross-connect configuration along the path, and stays in
   place permanently until it is deleted. This is similar to the
   concept of a PVC in ATM.

   - Switched connection (SC): Established through the UNI signaling
   interface; the connection is dynamically established by the network
   using the network routing and signaling functions. This is similar
   to the concept of an SVC in ATM.

   - Soft permanent connection (SPC): Established by specifying the two
   PC end-points; the network then dynamically establishes an SC
   connection in between. This is similar to the SPVC concept in ATM.

   The PC and SPC connections should be provisioned via the management
   plane control interface, and the SC connection should be provisioned
   via the signaled UNI interface.
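
   The following non-normative Python sketch illustrates one possible
   way to represent the three connection types and the provisioning
   interface each implies; the data structure and field names are
   hypothetical and are not defined by this document.

      from dataclasses import dataclass
      from enum import Enum

      class ConnectionType(Enum):
          PC = "permanent"        # provisioned by the management plane
          SPC = "soft-permanent"  # end-points via management plane,
                                  # path set up by the control plane
          SC = "switched"         # requested via UNI signaling

      @dataclass
      class ConnectionRequest:
          ingress: str            # ingress access point
          egress: str             # egress access point
          bandwidth: str          # e.g. "OC-48" / "STM-16"
          ctype: ConnectionType

          def provisioning_interface(self) -> str:
              """PC/SPC are provisioned via the management plane
              control interface, SC via the signaled UNI."""
              if self.ctype in (ConnectionType.PC, ConnectionType.SPC):
                  return "management-plane control interface"
              return "UNI signaling"

      req = ConnectionRequest("client-A", "client-B", "OC-48",
                              ConnectionType.SPC)
      print(req.provisioning_interface())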

4.2.  Examples of Common Service Models

   Each carrier can define its own service model based on its business
   strategy. The following are three service models that carriers may
   use:

4.2.1.  Provisioned Bandwidth Service (PBS)

   The PBS model provides enhanced leased/private line services
   provisioned via a carrier management interface (MI) using either the
   PC or the SPC type of connection. The provisioning can be real-time
   or near real-time. It has the following characteristics:

   - Connection requests go through a well-defined management
   interface.

   - Client/server relationship between the clients and the optical
   network.

   - Clients have no optical network visibility and depend on the
   network intelligence or the operator for optical connection setup.

4.2.2.  Bandwidth on Demand Service (BDS)

   The BDS model provides bandwidth-on-demand dynamic connection
   services via a signaled user-network interface (UNI). The
   provisioning is real-time, using the SC type of optical connection.
   It has the following characteristics:

   - Signaled connection request via the UNI, directly from the user
   edge device or its proxy.

   - Customer has no or limited network visibility, depending upon the
   control plane interconnection model used and the network
   administrative policy.

   - Relies on network or client intelligence for connection set-up,
   depending upon the control plane interconnection model used.

4.2.3.  Optical Virtual Private Network (OVPN)

   The OVPN model provides a virtual private network at the optical
   layer between a specified set of user sites. It has the following
   characteristics:

   - Customers contract for a specific set of network resources such as
   ports, wavelengths, etc.

   - The Closed User Group (CUG) concept is supported as in a normal
   VPN.

   - Optical connections can be of the PC, SPC or SC type depending
   upon the provisioning method used.

   - An OVPN site can request dynamic reconfiguration of the
   connections between sites within the same CUG.

   - Customers may have limited or full visibility and control of the
   contracted network resources depending upon the customer service
   contract.

   At a minimum, the PBS, BDS and OVPN service models described above
   shall be supported by the control functions.
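
   The following non-normative Python sketch illustrates the
   closed-user-group and contracted-resource checks implied by the OVPN
   characteristics above; the data model is hypothetical.

      from dataclasses import dataclass

      @dataclass
      class OVPNContract:
          """Hypothetical record of an OVPN customer contract."""
          cug_sites: set               # closed user group membership
          contracted_wavelengths: int  # resources contracted for
          in_use: int = 0

          def may_reconfigure(self, site_a: str, site_b: str,
                              wavelengths: int) -> bool:
              # Both end-points must belong to the same CUG and the
              # request must fit within the contracted resources.
              in_cug = (site_a in self.cug_sites
                        and site_b in self.cug_sites)
              fits = (self.in_use + wavelengths
                      <= self.contracted_wavelengths)
              return in_cug and fits

      contract = OVPNContract(cug_sites={"NYC", "LON", "TOK"},
                              contracted_wavelengths=8)
      print(contract.may_reconfigure("NYC", "LON", 2))   # True
      print(contract.may_reconfigure("NYC", "PAR", 2))   # False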

5.  Network Reference Model

   This section discusses the major architectural and functional
   components of a generic carrier optical network, which will provide
   a reference model for describing the requirements for the control
   and management of carrier optical services.

5.1.  Optical Networks and Subnetworks

   As mentioned before, there are established two main types of optical networks
   that are currently under consideration: SDH/SONET network as defined
   in ITU Rec. G.803, and reconfigured OTN as defined in
real time, ITU Rec. G.872.

   We assume an OTN is composed of a set of optical cross-connects (OXC)
   and optical add-drop multiplexer (OADM) which are so-called switched interconnected in a
   general mesh topology using DWDM optical connections. Signaling
between line systems (OLS).

   It is often convenient for easy discussion and description to treat
   an optical network as an subnetwork cloud, in which the user NE details of
   the network become less important, instead focus is on the function
   and the interfaces the optical layer control plane initiates all
necessary network activities. A real-time commitment for provides. In general, a future
connection may also
   subnetwork can be established.  A standard defined as a set of "branded"

service options is available. The functionality available is a proper
subset of that available to SPB Service users and is constrained by the
requirement for real-time provisioning, among other things.
Availability of the requested connection is contingent on resource
availability.

Service Operation: This service provides support of real-time creation
of bandwidth between two end-points. The time needed to set up
bandwidth on demand shall be access points on the order network
   boundary and a set of seconds, preferably sub-
seconds. To support point-to-point optical connections establishment dynamically, the end
terminals shall be already physically connected to the between
   those access points.

5.2.  Network Interfaces

   A generic carrier network reference model describes a multi-carrier
   network environment. Each individual carrier network can be further
   partitioned into domains or sub-networks for administrative,
   technological or architectural reasons. The demarcation between
   (sub)networks can be either logical or physical and consists of a
   set of reference points identifiable in the optical network. From
   the control plane perspective, these reference points define a set
   of control interfaces in terms of optical control and management
   functionality. Figure 5.1 below is an illustrative diagram of this
   reference model.

                            +---------------------------------------+
                            |            single carrier network     |
         +--------------+   |                                       |
         |              |   | +------------+        +------------+  |
         |   IP         |   | |            |        |            |  |
         |   Network    +-EUNI+  Optical   +-I-UNI--+ Carrier IP |  |
         |              |   | | Subnetwork |        |   network  |  |
         +--------------+   | |            +--+     |            |  |
                            | +------+-----+  |     +------+-----+  |
                            |        |        |            |        |
                            |       I-NNI    I-NNI        I-UNI     |
         +--------------+   |        |        |            |        |
         |              |   | +------+-----+  |     +------+-----+  |
         |   IP         +-EUNI|            |  +-----+            |  |
         |   Network    |   | |   Optical  |        |   Optical  |  |
         |              |   | | Subnetwork +-I-NNI--+ Subnetwork |  |
         +--------------+   | |            |        |            |  |
                            | +------+-----+        +------+-----+  |
                            |        |                     |        |
                            +---------------------------------------+
                                   E-UNI                  E-NNI
                                     |                     |
                              +------+-------+     +----------------+
                              |              |     |                |
                              | Other Client |     |  Other Carrier |
                              |   Network    |     |    Network     |
                              | (ATM/SONET)  |     |                |
                              +--------------+     +----------------+

               Figure 5.1  Generic Carrier Network Reference Model

   The network interfaces encompass two aspects of the networking
   functions: the user data plane interface and the control plane
   interface. The former concerns user data transmission across the
   physical network interface, and the latter concerns control message
   exchange across the network interface, such as signaling and
   routing. We call the former the physical interface (PI) and the
   latter the control plane interface. Unless otherwise stated, the
   control interface is assumed in the remainder of this document.

5.2.1.  Control Plane Interfaces

   A control interface defines a relationship between two connected
   network entities on both sides of the interface. For each control
   interface, we need to define the architectural function each side
   plays and a controlled set of information that can be exchanged
   across the interface. The information flowing over this logical
   interface may include, but is not limited to:

   - Endpoint name and address

   - Reachability/summarized network address information

   - Topology/routing information

   - Authentication and connection admission control information

   - Connection management signaling messages

   - Network resource control information
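
   The following non-normative Python sketch groups the information
   elements listed above into a single illustrative record that one
   side of a control interface might exchange; the field names are
   invented for the example.

      from dataclasses import dataclass
      from typing import List, Optional

      @dataclass
      class ControlInterfaceInfo:
          """Hypothetical container for information that may flow
          over a UNI/NNI control interface (see the list above)."""
          endpoint_name: str
          endpoint_address: str
          reachability: List[str]           # summarized addresses
          topology: Optional[dict] = None   # routing/topology info
          auth: Optional[str] = None        # authentication/CAC info
          signaling: Optional[dict] = None  # connection mgmt messages
          resources: Optional[dict] = None  # resource control info

      # Over an exterior interface a carrier would typically omit
      # topology information (see Sections 5.3.3 and 5.4.2).
      uni_view = ControlInterfaceInfo(
          endpoint_name="client-router-1",
          endpoint_address="10.0.0.1",
          reachability=["10.0.0.0/24"],
      )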

   Different types of interfaces can be defined for network control and
   architectural purposes and can be used as the reference points in
   the control plane. In this document, the following set of interfaces
   are defined, as shown in Figure 5.1:

   The User-Network Interface (UNI) is no connection
admission a bi-directional signaling
   interface between service requester and service provider control from the carrier
   entities. We further differentiate between interior UNI (I-UNI) and
   exterior UNI (E-UNI) as follows:

   - E-UNI: A UNI interface for which the service requester control
   entity resides outside the carrier network control domain.

   - I-UNI: A UNI interface for which the service requester control
   entity resides within the carrier network resources contracted. Network connection acceptance shall
involve only a simple check to ensure control domain.

   The reason for doing so is that we can differentiate a class of UNI
   where there is a trust relationship between the client equipment and
   the optical network. This private nature of the UNI may give it
   functionality similar to that of the NNI, in that it may allow
   controlled routing information to cross the UNI. Specifics of the
   I-UNI are currently under study.

   The Network-Network Interface  (NNI) is a bi-directional signaling
   interface between two optical network as defined in ITU G.707 and T1.105, elements or sub-networks.

   We differentiate between the interior NNI (I-NNI) and the exterior
   NNI (E-NNI) as follows:

   - E-NNI: A NNI interface between two control plane entities belonging
   to different control domains.

   - I-NNI: A NNI interface between two control plane entities within
   the same control domain in the carrier network.

   It should be noted that it is quite common to use an E-NNI between
   two sub-networks within the same carrier network if they belong to
   different control domains. The different types of interface,
   interior vs. exterior, have different implied trust relationships
   for security and access control purposes. The trust relationship is
   not binary; instead, a policy-based control mechanism needs to be in
   place to restrict the type and amount of information that can flow
   across each type of interface, depending on the carrier's service
   and business requirements. Generally, two networks have a trust
   relationship if they belong to the same administrative domain.

   Interior interface examples include an I-NNI between two optical
   network elements in a single control domain, or an I-UNI interface
   between the optical network and an IP client network owned by the
   same carrier. Exterior interface examples include an E-NNI between
   two different carriers, or an E-UNI interface between a carrier
   optical network and its customers.

   The control plane shall support the UNI and NNI interface described
   above and the interfaces shall be configurable in terms of the type
   and amount of control information exchange and their behavior shall
   be consistent with the configuration (i.e., exterior versus interior
   interfaces).
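
   The following non-normative Python sketch shows how the
   interior/exterior classification could be derived from control
   domain membership, following the UNI and NNI definitions above; the
   data model is hypothetical.

      from dataclasses import dataclass

      @dataclass
      class ControlEntity:
          name: str
          carrier: str         # administrative domain (carrier)
          control_domain: str  # control domain within the carrier
          is_client: bool      # True for a UNI service requester

      def classify(a: ControlEntity, b: ControlEntity) -> str:
          """UNI if one side is a service requester, NNI otherwise;
          interior only when both sides are in the same control
          domain of the same carrier."""
          kind = "UNI" if (a.is_client or b.is_client) else "NNI"
          same = (a.carrier == b.carrier
                  and a.control_domain == b.control_domain)
          return ("I-" if same else "E-") + kind

      oxc = ControlEntity("oxc-1", "carrier-A", "core", False)
      rtr = ControlEntity("cust-rtr", "customer-X", "edge", True)
      print(classify(oxc, rtr))   # -> E-UNI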

5.3.  Intra-Carrier Network Model

   The intra-carrier network model is concerned with the service
   control and management issues within the networks owned by a single
   carrier.

5.3.1. Multiple Sub-networks

   Without loss of generality, the optical network environment. Each individual owned by a carrier network
   service operator can be further
partitioned into domains depicted as consisting of one or more optical
   sub-networks based on administrative,
technological or architectural reasons.  The demarcation between
(sub)networks can interconnected by direct optical links. There may be either logical or physical and  consists
   many different reasons for more than one optical sub-network. It may
   be the result of using hierarchical layering, different technologies
   across access, metro and long haul (as discussed below), or a set result
   of reference points identifiable in the business mergers and acquisitions or incremental optical network. From network
   technology deployment by the
control plane perspective, these reference points define carrier using different vendors or
   technologies.

   A sub-network may be a set of
control interfaces single vendor and single technology network.
   But in general, the carrier's optical network is heterogeneous in
   terms of equipment vendor and the technology utilized in each sub-
   network.

5.3.2.  Access, Metro and Long-haul networks

   Few carriers have end-to-end ownership of the optical networks. Even
   if they do, the access, metro and long-haul networks often belong to
   different administrative divisions as separate optical sub-networks.
   Therefore inter-(sub)network interconnection is essential in terms
   of supporting end-to-end optical service provisioning and
   management. The access, metro and long-haul networks may use
   different technologies and architectures, and as such may have
   different network properties.

   In general, an end-to-end optical connection may easily cross
   multiple sub-networks, with the following possible scenarios:

   Access -- Metro -- Access
   Access -- Metro -- Long Haul -- Metro -- Access

5.3.3.  Implied Control Constraints

   The carrier's optical network is generally treated as a trusted
   domain, which is defined as a network under a single technical
   administration with an implied trust relationship. Within a trusted
   domain, all the optical network elements and sub-networks are
   considered to be secure and trusted by each other at a defined
   level. In the intra-carrier model, interior interfaces (I-NNI and
   I-UNI) are generally assumed.

   One business application for the interior UNI is the case where a
   carrier service operator offers data services such as IP, ATM and
   Frame Relay over its optical core network. Data service network
   elements such as routers and ATM switches are considered to be
   internal optical service client devices. The topology information of
   the carrier optical network may be shared with these internal client
   data networks.

5.4.  Inter-Carrier Network Model

   The inter-carrier model focuses on the control plane aspects between
   different carrier networks and describes the internetworking
   relationship between them.

5.4.1.  Carrier Network Interconnection

   Inter-carrier interconnection provides for connectivity among
   different optical network operators. To provide global-reach
   end-to-end optical services, optical service control and management
   between different carrier networks become essential. The normal
   connectivity between the carriers may include:

   Private Peering: Two carriers set up a dedicated connection between
   them via a private arrangement.

   Public Peering: Two carriers set up a point-to-point connection
   between them at a public optical network access point (ONAP).

   Due to the nature of the automatically switched optical network, it
   is also possible to support distributed peering for the IP client
   layer network, where two distant IP routers can be connected via an
   optical connection.

5.4.2. Implied Control Constraints

   In the inter-carrier network model, each carrier's optical network
   is a separate administrative domain. Both the UNI interface between
   the user and the carrier network and the NNI interface between two
   carriers' networks cross the carrier's administrative boundary and
   therefore are, by definition, exterior interfaces.

   In terms of control information exchange, the topology information
   shall not be allowed to flow across the E-NNI and E-UNI interfaces.
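
   As a non-normative illustration, the following Python sketch filters
   topology information out of advertisements sent over exterior
   reference points while allowing reachability summaries through; the
   message representation is hypothetical.

      # Per the constraint above, topology information must not cross
      # E-NNI or E-UNI interfaces; reachability summaries may still be
      # exchanged (see Section 5.2.1).

      EXTERIOR_INTERFACES = {"E-NNI", "E-UNI"}

      def filter_advertisement(interface: str, adv: dict) -> dict:
          if interface in EXTERIOR_INTERFACES:
              return {k: v for k, v in adv.items() if k != "topology"}
          return adv

      adv = {"reachability": ["192.0.2.0/24"],
             "topology": {"links": 42}}
      print(filter_advertisement("E-NNI", adv))  # topology removed
      print(filter_advertisement("I-NNI", adv))  # unchanged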

6.  Optical Service User Requirements

   This section describes the user requirements for optical services,
   which in turn impose requirements on service control and management
   for the network operators. The user requirements reflect the
   perception of the optical service from a user's point of view.

6.1.  Common Optical Services

   The basic unit of optical service is a fixed-bandwidth optical
   connection between two parties. However, different services are
   created based on the supported signal characteristics (format, bit
   rate, etc.), the service invocation methods and possibly the
   associated Service Level Agreement (SLA) provided by the service
   provider.

   At present, the following are the major optical sub-
networks belonging to a single carrier services provided in
   the industry:

   - SONET/SDH, with different degrees of transparency

   - Optical wavelength services: opaque or an I-UNI interface between transparent

   - Ethernet at 1 Gbps and 10 Gbps

   - Storage Area Networks (SANs) based on FICON, ESCON and Fiber
   Channel

   The services mentioned above shall be provided by the optical
   transport layer of the network and be provisioned using the
   management, control and data planes.

   Opaque Service refers to transport services where signal framing is
   negotiated between the client and the network operator (framing and
   bit-rate dependent), and only the payload is carried transparently.
   SONET/SDH transport is most widely used for network-wide transport.
   Different levels of transparency can be achieved in the SONET/SDH
   transmission, as discussed in Section 6.4.

   Transparent Service assumes protocol and rate independence. However,
   since any optical connection is associated with a signal bandwidth,
   knowledge of the maximum bandwidth is required even for transparent
   optical services.

   Ethernet Services, specifically 1 Gb/s and 10 Gb/s Ethernet services,
   are gaining popularity due to the lower costs of the customers'
   premises equipment and their simplified management requirements
   (compared to SONET or SDH).

   Ethernet services may be carried over either SONET/SDH (GFP mapping)
   or WDM networks. Ethernet service requests will require some
   service-specific parameters: priority class, VLAN Id/Tag, and traffic
   aggregation parameters.

   Storage Area Network (SAN) Services. ESCON and FICON are proprietary
   versions of the service, while Fiber Channel is the standard
   alternative. As is the case with Ethernet services, SAN services may
   be carried over either SONET/SDH (using GFP mapping) or WDM networks.
   Currently SAN services require only point-to-point connections, but
   it is envisioned that in the future they may also require multicast
   connections.

   The control plane shall provide the functionality to provision,
   control and manage all the services listed above.

6.2.  Optical Service Invocation

   As mentioned earlier, the methods of service invocation play an
   important role in defining different services.

6.2.1.  In this scenario, users forward their service request to the
   provider via a well-defined service management interface. All
   connection management operations, including set-up, release, query,
   or modification, shall be invoked from the management plane.

6.2.2.  In this scenario, users forward their service request to the
   provider via a well-defined UNI interface in the control plane
   (including proxy signaling). All connection management operation
   requests, including set-up, release, query, or modification, shall be
   invoked from directly connected user devices or their signaling
   representative (such as a signaling proxy).

   In summary, the following requirements for the control plane have
   been identified:

   The control plane shall support action result codes as responses to
   any requests over the control interfaces.

   The control plane shall support requests for connection set-up,
   subject to policies in effect between the user and the network.

   The control plane shall support the destination client device's
   decision to accept or reject connection creation requests from the
   initiating client's device.

   The control plane shall support requests for connection set-up
   across multiple subnetworks over both Interior and Exterior Network
   Interfaces.

   - NNI signaling shall support requests for connection set-up, subject
   to policies in effect between the subnetworks.

   - Connection set-up shall be supported for both uni-directional and
   bi-directional connections.

   - Upon connection request initiation, the control plane shall
   generate a network-unique Connection-ID associated with the
   connection, to be used for information retrieval or other activities
   related to that connection.

   - CAC shall be provided as part of the control plane functionality.
   It is the role of the CAC function to determine if there is
   sufficient free resource available downstream to allow a new
   connection.

   - When a connection request is received across the NNI, it is
   necessary to ensure that the resources exist within the downstream
   subnetwork to establish the connection.

   - If sufficient resources are available, the CAC may permit the
   connection request to proceed.

   - If sufficient resources are not available, the CAC shall send an
   appropriate notification upstream towards the originator of the
   connection request that the request has been denied.

   - Negotiation of connection set-up for multiple service level options
   shall be supported across the NNI.

   - The policy management system must determine what kind of
   connections can be set up across a given NNI.

   - The control plane elements need the ability to rate limit (or pace)
   call set-up attempts into the network.

   - The control plane shall report to the management plane the success
   or failure of a connection request.

   Upon a connection request failure:

   - The control plane shall report to the management plane a cause code
   identifying the reason for the failure.

   - A negative acknowledgment shall be returned across the NNI.

   - Allocated resources shall be released.

   Upon a connection request success:

   - A positive acknowledgment shall be returned when a connection has
   been successfully established.

   - The positive acknowledgment shall be transmitted both downstream
   and upstream, over the NNI, to inform both source and destination
   clients of when they may start transmitting data.
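
   As a purely illustrative, non-normative sketch of the set-up behavior
   required above (CAC check, network-unique Connection-ID, and explicit
   result codes reported toward the requester and the management plane),
   with all names invented for illustration:

      # Python sketch of connection set-up handling at a control plane node.
      import uuid

      def handle_setup(request, free_bandwidth, policy_permits):
          if not policy_permits:
              return {"result": "DENIED_POLICY"}       # negative ack upstream
          if free_bandwidth < request["bandwidth"]:
              # CAC: insufficient downstream resources; cause code reported.
              return {"result": "DENIED_NO_RESOURCE"}
          connection_id = str(uuid.uuid4())            # network-unique ID
          # ... reserve resources and continue signaling across the NNI ...
          return {"result": "SUCCESS", "connection_id": connection_id}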

   The control plane shall support requests the client's request for connection
  set-up, subject
   tear down.

   NNI signaling plane shall support requests for connection tear down
   by connection-ID.

   The control plane shall allow either end to policies in effect between initiate connection
   release procedures.

   NNI signaling flows shall allow any end point or any intermediate
   node to initiate the user and connection release over the
  network.
Requirement 8. NNI.

   Upon connection teardown completion all resources associated with the
   connection shall become available for access for new requests.

   The management plane shall be able to tear down connections
   established by the control plane both gracefully and forcibly on
   demand.

   Partially deleted connections shall support not remain within the destination user
  edge device's decision to accept or reject network.

   End-to-end acknowledgments shall be used for connection creation
  requests from deletion
   requests.

   Connection deletion shall not result in either restoration or
   protection being initiated.

   Connection deletion shall at a minimum use a two pass signaling
   process, removing the initiating user edge device.
Requirement 9. cross-connection only after the first signaling
   pass has completed.

   The control plane shall support the user management plane and client's device
   request for connection tear down.
Requirement 10. attributes or status query.

   The control plane shall support management plane and user edge neighboring
   device (client or intermediate node) request for connection
   attributes or status query.

In addition, there are several actions that need to be supported, which
are not directly related

   The control plane shall support action results code responses to an individual connection, but are necessary
for establishing healthy any
   requests over the control interfaces.

   The requirements below show some management plane shall be able to query on demand the status of these actions:

Requirement 11.
   the connection

   The UNI shall support initial registration of the UNI-C with the network.
Requirement 12.
   network via the control plane.

   The UNI shall support registration and updates by the UNI-C entity of
   the edge devices clients and user interfaces that it controls.
Requirement 13.

   The UNI shall support network queries of the user edge client devices.
Requirement 14.

   The UNI shall support detection of user edge device client devices or of edge ONE
   failure.

In addition,
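
   The two-pass deletion requirement above can be sketched (again purely
   for illustration, with invented names and data) as follows:

      # Python sketch: remove cross-connections only after the first
      # end-to-end signaling pass has completed.
      def delete_connection(path, cross_connects):
          """path: list of node names; cross_connects: dict node -> bool."""
          # Pass 1: mark the connection for deletion end to end; no
          # cross-connection is removed yet, so the deletion itself does
          # not trigger alarms, protection or restoration.
          pending = {node: True for node in path}
          # Pass 2: remove cross-connections and free the resources.
          for node in reversed(path):
              if pending[node]:
                  cross_connects[node] = False
          return "DELETED"      # end-to-end acknowledgment to the requester

      print(delete_connection(["A", "B", "C"],
                              {"A": True, "B": True, "C": True}))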

6.3.  Bundled Connection

   Bundled connections differ from simple connections in that a
   connection request may generate multiple parallel connections bundled
   together as one virtual connection.

   Multiple point-to-point connections may be managed by the network so
   as to appear as a single compound connection to the end-points.
   Examples of such bundled connections are connections based on virtual
   concatenation, diverse routing, or restorable connections.

   The control plane actions required to manage compound connections are
   the same as the ones outlined for the management of basic
   connections.

6.4.  Levels of Transparency

   Opaque connections are framing and bit-rate dependent - the exact
   signal framing is known or needs to be negotiated between the network
   operator and its clients. However, there may be multiple levels of
   transparency for individual framing types. Current transport networks
   are mostly based on SONET/SDH technology. Therefore, multiple levels
   have to be considered when defining specific optical services.

   The example below shows multiple levels of transparency applicable to
   SONET/SDH transport.

   - Bit transparency in SONET/SDH frames. This means that the OXCs
   will not terminate any byte in the SONET OH bytes.

   - SONET Line and section OH (SDH multiplex and regenerator section
   OH) are normally terminated, and the network can monitor a large set
   of parameters.

   However, if this level of transparency is used, the TOH will be
   tunneled in unused bytes of the non-used frames and will be recovered
   at the terminating ONE with its original values.

   - Line and section OH are forwarded transparently, keeping their
   integrity, thus providing the customer the ability to better
   determine where a failure has occurred; this is very helpful when the
   connection traverses several carrier networks.

   - G.709 OTN signals

6.5.  Optical Connection granularity

   The service granularity is determined by the specific technology,
   framing and bit rate of the physical interface between the ONE and
   the client at the service edge, and by the capabilities of the ONE.
   The control plane needs to support signaling and routing for all the
   services supported by the ONE.

   The physical connection is characterized by the connection rate and
   other properties such as the protocol supported. However, the
   consumable attribute is bandwidth. In general, there should not be a
   one-to-one correspondence imposed between the granularity of the
   service provided and the maximum capacity of the interface to the
   user. The bandwidth utilized by the client becomes the logical
   connection, for which the customer will be charged.

   In addition, sub-rate interfaces shall be supported by the control
   plane, such as VT/TU granularity (as low as 1.5 Mb/s).

   The control plane shall support ITU Rec. G.709 connection granularity
   for the OTN network.

   The control plane shall support the SDH and SONET connection
   granularity.

   In addition, 1 Gb and 10 Gb granularity shall be supported for 1 Gb/s
   and 10 Gb/s (WAN mode) Ethernet framing types, if implemented in the
   hardware.

   For SAN services the following interfaces have been defined and shall
   be supported by the control plane if the given interfaces are
   available on the equipment:
   - FC-12
   - FC-50
   - FC-100
   - FC-200

   Sub-rate fabric granularity shall support VT-x/TU-1n granularity down
   to VT1.5/TU-11, consistent with the hardware.

   Encoding of service types in the protocols used shall be such that
   new service types can be added by adding new code point values or
   objects.
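
   As a purely illustrative, non-normative sketch of this extensibility
   requirement (the code point values below are invented, not assigned
   values from any protocol):

      # Python sketch: a service-type registry that grows by adding new
      # code points, without changing existing ones.
      SERVICE_TYPE_CODEPOINTS = {
          1: "SONET/SDH",
          2: "1 Gb/s Ethernet",
          3: "10 Gb/s Ethernet (WAN mode)",
          4: "Fiber Channel FC-100",
      }

      def register_service_type(codepoint, name):
          if codepoint in SERVICE_TYPE_CODEPOINTS:
              raise ValueError("code point already assigned")
          SERVICE_TYPE_CODEPOINTS[codepoint] = name  # new services only add entries

      register_service_type(5, "OTN ODU2")           # hypothetical addition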

6.6.  Other Service Parameters and Requirements

6.6.1.  Classes of Service

   We use "service level" to describe priority-related characteristics
   of connections, such as holding priority, set-up priority, or
   restoration priority. The intent currently is to allow each carrier
   to define the actual service level in terms of priority, protection,
   and restoration options. Therefore, individual carriers will
   determine the mapping of individual service levels to a specific set
   of quality features.

   Specific protection and restoration options are discussed in Section
   10. However, it should be noted that while high grade services may
   require allocation of protection or restoration facilities, there may
   be an application for a low grade of service for which preemptable
   facilities may be used.

   Multiple service level options shall be supported and the user shall
   have the option of selecting over the UNI a service level for an
   individual connection.

   The control plane shall be capable of mapping individual service
   classes into specific protection and/or restoration options.

6.6.2.  Connection Latency

   Connection latency is a parameter required for support of time-
   sensitive services like Fiber Channel services. Connection latency is
   dependent on the circuit length, and as such for these services it is
   essential that shortest path algorithms are used and end-to-end
   latency is verified before acknowledging circuit availability.

   The control plane shall support latency-based routing constraints
   (such as distance) as a path selection parameter.
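
   A minimal, non-normative sketch of using such a constraint during
   path selection (distance is used as a stand-in for latency; all
   numbers and names are invented):

      # Python sketch: reject candidate paths whose end-to-end distance
      # (a proxy for latency) exceeds the requested bound.
      def select_path(candidate_paths, max_distance_km):
          feasible = [p for p in candidate_paths
                      if sum(p["hops_km"]) <= max_distance_km]
          if not feasible:
              return None                    # no path meets the latency bound
          return min(feasible, key=lambda p: sum(p["hops_km"]))

      paths = [{"name": "A", "hops_km": [300, 450]},
               {"name": "B", "hops_km": [200, 180]}]
      print(select_path(paths, max_distance_km=500))   # selects path "B"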

6.6.3.  Diverse Routing Attributes

   The ability to route service paths diversely is a highly desirable
   feature. Diverse routing is one of the connection parameters and is
   specified at the time of connection creation. The following provides
   a basic set of requirements for diverse routing support.

   Diversity between two links being used for routing should be defined
   in terms of link disjointness, node disjointness or Shared Risk Link
   Groups (SRLGs). An SRLG is defined as a group of links which share
   some risky resource, such as a specific sequence of conduits or a
   specific office. An SRLG is a relationship between links that should
   be characterized by two parameters:

   - Type of Compromise: Examples would be shared fiber cable, shared
   conduit, shared right-of-way (ROW), shared link on an optical ring,
   or shared office (no power sharing), etc.

   - Extent of Compromise: For compromised outside plant, this would be
   the length of the sharing.

   The control plane routing algorithms shall be able to route a single
   demand diversely from N previously routed demands, in terms of link-
   disjoint, node-disjoint and SRLG-disjoint paths.

7.  Optical Service Provider Requirements

   This section discusses specific service control and management
   requirements from the service provider's point of view.

7.1.  Access Methods to Optical Networks

   Multiple access methods shall be supported:

   - Cross-office access (User NE co-located with ONE). In this scenario
   the user edge device resides in the same office as the ONE and has
   one or more physical connections to the ONE. Some of these access
   connections may be in use, while others may be idle pending a new
   connection request.

   - Direct remote access. In this scenario the user edge device is
   remotely located from the ONE and has inter-location connections to
   the ONE over multiple fiber pairs or via a DWDM system. Some of these
   connections may be in use, while others may be idle pending a new
   connection request.

   - Remote access via an access sub-network. In this scenario remote
   user edge devices are connected to the ONE via a
   multiplexing/distribution sub-network. Several levels of multiplexing
   may be assumed in this case. This scenario is applicable to
   metro/access subnetworks carrying signals from multiple users, of
   which only a subset have connectivity to the ONE.

   All of the above access methods must be supported.

7.2.  Dual Homing and Network Interconnections

   Dual homing is a special case of the access network. Client devices
   can be dual homed to the same or different hubs, the same or
   different access networks, the same or different core networks, or
   the same or different carriers.  The different levels of dual homing
   connectivity result in many different combinations of configurations.
   The main objective for dual homing is enhanced survivability.

   The different configurations of dual homing will have great impact on
   admission control, reachability information exchanges,
   authentication, and neighbor and service discovery across the
   interface.

   Dual homing must be supported.

7.3.  Inter-domain connectivity

   A domain is a portion of a network, or an entire network, that is
   controlled by a single control plane entity.  This section discusses
   the various requirements for connecting domains.

7.3.1.  Multi-Level Hierarchy

   Traditionally, current transport networks are divided into core
   inter-city long-haul networks, regional intra-city metro networks and
   access networks. Due to the differences in transmission technologies,
   service, and multiplexing needs, the three types of networks are
   served by different types of network elements and often have
   different capabilities. The diagram below shows an example three-
   level hierarchical network.

                              +--------------+
                              |  Core Long   |
               + -------------+   Haul       +-------------+
               |              | Subnetwork   |             |
               |              +-------+------+             |
       +-------+------+                            +-------+------+
       |              |                            |              |
       |  Regional    |                            |  Regional    |
       |  Subnetwork  |                            |  Subnetwork  |
       +-------+------+                            +-------+------+
               |                                           |
       +-------+------+                            +-------+------+
       |              |                            |              |
       | Metro/Access |                            | Metro/Access |
       |  Subnetwork  |                            |  Subnetwork  |
       +--------------+                            +--------------+

                    Figure 2 Multi-level hierarchy example

   Functionally, we can often see a clear split among the 3 types of
   networks: the core long-haul network deals primarily with facilities
   transport and switching. SONET signals at STS-1 and higher rates
   constitute the units of transport. Regional networks will be more
   closely tied to service support, and VT-level signals need to be also
   switched. As an example of interaction, a device switching DS1
   signals interfaces to other such devices over the long-haul network
   via STS-1 links. Regional networks will also groom traffic of the
   Metro networks, which generally have direct interfaces to clients,
   and support a highly varied mix of services.  It should be noted
   that, although not shown in Figure 2, metro/access subnetworks may
   have interfaces to the core network, without having to go through a
   regional network.

   Routing and signaling for multi-level hierarchies shall be supported
   to allow carriers to configure their networks as needed.

7.3.2.  Network Interconnections

   Subnetworks may have multiple points of inter-connection. All
   relevant NNI functions, such as routing, reachability information
   exchanges, and inter-connection topology discovery, must recognize
   and support multiple points of inter-connection between subnetworks.
   Dual-homing inter-connection is often used as a survivable
   architecture.

   Such an inter-connection is a special case of a mesh network,
   especially if these subnetworks are connected via an I-NNI, i.e.,
   they are within the same administrative domain.  In this case the
   control plane requirements described in Section 8 will also apply for
   the inter-connected subnetworks, and are therefore not discussed
   here.

   However, there are additional requirements if the interconnection is
   across different domains, via an E-NNI.  These additional
   requirements include the communication of failure handling functions,
   routing, load sharing, etc., while adhering to pre-negotiated
   agreements on these functions across the boundary nodes of the
   multiple domains.  Subnetwork interconnection may also be achieved
   via a different subnetwork.  In this case, the above requirements
   stay the same, but need to be communicated over the interconnecting
   subnetwork, similar to the E-NNI scenario described above.

7.4.  Bearer Interface Types

   All the bearer interfaces implemented in the ONE shall be supported
   by the control plane and associated signaling protocols.

   The following interface types shall be supported by the signaling
   protocol:
   - SDH
   - SONET
   - 1 Gb Ethernet, 10 Gb Ethernet (WAN mode)
   - 10 Gb Ethernet (LAN mode)
   - FC-N (N= 12, 50, 100, or 200) for Fiber Channel services
   - OTN (G.709)
   - PDH
   - Transparent optical
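
   A non-normative sketch of how a signaling implementation might gate
   requests on the interface types actually present (all names below are
   invented for illustration):

      # Python sketch: accept a connection request only if the requested
      # bearer interface type is implemented on the terminating ONE.
      SUPPORTED_ON_THIS_ONE = {"SDH", "SONET", "1GbE", "10GbE-WAN", "OTN"}

      def validate_bearer_type(requested_type):
          if requested_type not in SUPPORTED_ON_THIS_ONE:
              return "DENIED_UNSUPPORTED_INTERFACE"  # result code to requester
          return "OK"

      print(validate_bearer_type("FC-100"))          # denied on this example ONE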

7.5.  Names and Address Management

   In this section, addressing refers to optical layer addressing: an
   identifier required by the routing and signaling protocols within the
   optical network.

7.5.1.  Address Space Separation

   To ensure the scalability of, and a smooth migration toward, the
   optical switched network, the separation of three address spaces is
   required:
   - Internal transport network control plane addresses
   - Transport Network Assigned (TNA) addresses
   - Client addresses.
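
   The separation can be pictured with a small, non-normative sketch;
   the identifiers below are invented placeholders, not real addresses
   or a standardized encoding:

      # Python sketch: three distinct address spaces kept separate.
      internal_cp_addresses = {"oxc-17:port-3"}   # never leaves the network
      tna_addresses = {"TNA:192.0.2.17"}          # assigned by the transport network
      client_addresses = {"ip:198.51.100.9", "nsap:47.0005.80.ffe1"}

      # A client name is only a lookup key; it is never used as a
      # routable address inside the optical network (see the directory
      # services below).
      client_to_tna = {"ip:198.51.100.9": "TNA:192.0.2.17"}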

7.5.2.  Directory Services

   Directory Services shall be supported to enable the operator to query
   the optical network for the network address of a specified user.
   Address resolution and translation between various user names and the
   corresponding optical network addresses shall be supported. The UNI
   shall use the user naming schemes for connection requests.

7.5.3.  Network Element Identification

   Each network element within a single control plane domain shall be
   uniquely identifiable. The identifiers may be re-used across multiple
   domains. However, unique identification of a network element becomes
   possible by associating its local identity with the global identity
   of its domain.

7.6.  Policy-Based Service Management Framework

   The IPO service must be supported by a robust policy-based management
   system to be able to make important decisions.

   Examples of policy decisions include:

   - What types of connections can be set up for a given UNI?

   - What information can be shared and what information must be
   restricted in automatic discovery functions?

   - What are the security policies over signaling interfaces?

   - What border nodes should be used when routing, depending on factors
   including, but not limited to, source and destination address, border
   node loading, and time of connection request.

   Requirements:

   - Service and network policies related to configuration and
   provisioning, admission control, and support of Service Level
   Agreements (SLAs) must be flexible, and at the same time simple and
   scalable.

   - The policy-based management framework must be based on standards-
   based policy systems (e.g. IETF COPS).

   - In addition, the IPO service management system must be backwards
   compatible with legacy service management systems.

7.7.  Support of Hierarchical Routing and Signaling

   The routing protocol(s) shall support hierarchical routing
   information dissemination, including topology information aggregation
   and summarization.

   The routing protocol(s) shall minimize global information and keep
   information locally significant as much as possible.

   Over external interfaces only reachability information, next routing
   hop and service capability information should be exchanged. Any other
   network-related information shall not leak out to other networks.

8.  Control Plane Functional Requirements for Optical Services

   This section addresses requirements for the optical control plane in
   support of service provisioning.

   The scope of the control plane includes the control of interfaces and
   network resources within an optical network, and the interfaces
   between the optical network and its client networks. In other words,
   it includes NNI and UNI aspects.

8.1.  Control Plane Capabilities and Functions

   The control capabilities are supported by the underlying control
   functions and protocols built into the control plane.

8.1.1.  Network Control Capabilities

   The following capabilities are required in the network control plane
   to successfully deliver automated provisioning for optical services:

   - Neighbor, service and topology discovery

   - Address assignment and resolution

   - Routing information propagation and dissemination

   - Path calculation and selection

   - Connection management

   These capabilities may be supported by a combination of functions
   across the control and the management planes.

8.1.2.  Control Plane Functions for Network Control

   The following are essential functions needed to support network
   control capabilities:
   - Signaling
   - Routing
   - Automatic resource, service and neighbor discovery

   Specific requirements for signaling, routing and discovery are
   addressed in Section 9.

   The general requirements for the control plane functions to support
   optical networking and service functions include:

   - The control plane must have the capability to establish, tear down
   and maintain the end-to-end connection, and the hop-by-hop connection
   segments, between any two end-points.

   - The control plane must have the capability to support traffic-
   engineering requirements, including resource discovery and
   dissemination, constraint-based routing and path computation.

   - The control plane shall support network status or action result
   code responses to any requests over the control interfaces.

   - The control plane shall support resource allocation on both UNI and
   NNI.

   - Upon successful connection teardown, all resources associated with
   the connection shall become available for new requests.

   - The control plane shall support management plane requests for
   connection attributes/status query.

   - The control plane shall have the capability to support various
   protection and restoration schemes for optical channel establishment.

   - Control plane failures shall not affect active connections.

   - The control plane shall be able to trigger restoration based on
   alarms or other indications of failure.

8.2.  Signaling Network

   The signaling network consists of a set of signaling channels that
   interconnect the nodes within the control plane. Therefore, the
   signaling network must be accessible by each of the communicating
   nodes (e.g., OXCs).

   - The signaling network must terminate at each of the nodes in the
   transport plane.

   - The signaling network shall not be assumed to have the same
   topology as the data plane, nor shall the data plane and control
   plane traffic be assumed to be congruently routed.

   A signaling channel is the communication path for transporting
   control messages between network nodes, and over the UNI (i.e.,
   between the UNI entity on the user side (UNI-C) and the UNI entity on
   the network side (UNI-N)). The control messages include signaling
   messages, routing information messages, and other control maintenance
   protocol messages such as neighbor and service discovery. There are
   three different types of signaling methods, depending on the way the
   signaling channel is constructed:

   - In-band signaling: The signaling messages are carried over a
   logical communication channel embedded in the data-carrying optical
   link or channel. For example, using the overhead bytes in SONET data
   framing as a logical communication channel falls into the in-band
   signaling methods.

   - In-fiber, out-of-band signaling: The signaling messages are carried
   over a dedicated communication channel separate from the optical
   data-bearing channels, but within the same fiber. For example, a
   dedicated wavelength or TDM channel may be used within the same fiber
   as the data channels.

   - Out-of-fiber signaling: The signaling messages are carried over a
   dedicated communication channel or path within different fibers to
   those used by the optical data-bearing channels. For example,
   dedicated optical fiber links or a communication path via a separate
   and independent IP-based network infrastructure are both classified
   as out-of-fiber signaling.

   In-band signaling may be used over a connection request UNI interface, where there are
   relatively few data channels. Proxy signaling is received by also important over
   the
  control plane, UNI interface, as it is necessary useful to support users unable to signal
   to ensure that the resources exist
  within the optical transport network to establish the connection.
Requirement 50. via a direct communication channel. In addition to this
   situation a third party system containing the above, UNI-C entity will
   initiate and process the control plane
  elements need information exchange on behalf of the ability to rate limit (or pace) call setup attempts
  into the network.

This is an attempt to prevent overload user
   device. The UNI-C entities in this case reside outside of the control plane processors.
In application user in
   separate signaling systems.

   In-fiber, out-of-band and out-of-fiber signaling channel alternatives
   are usually used for NNI interfaces, which generally have significant
   numbers of channels per link. Signaling messages relating to SPC type connections this might mean that all of
   the setup
message would different channels can then be slowed aggregated over a single or buffered in order to handle small
   number of signaling channels.

   The signaling network forms the current
load.

Another aspect basis of admission the transport network
   control is security.

Requirement 51. plane.  - The policy-based management system must be able to
  authenticate and authorize a client requesting the given service. signaling network shall support reliable
   message transfer.

   - The
  management system must also be able to administer and maintain
  various security policies over signaling interfaces.

7.5.2 SLA Support

Requirement 52. network shall have its own OAM mechanisms.

   - The service management system should employ
  features to ensure client SLAs. signaling network shall use protocols that support congestion
   control mechanisms.

   In addition to setting up connections based on resource availability to
meet SLAs, addition, the management system must periodically monitor connections signaling network should support message priorities.
   Message prioritization allows time critical messages, such as those
   used for the maintenance of SLAs.  Complex SLAs, restoration, to have priority over other messages, such as time-of-day or
multiple-service-class based SLAs, should also
   other connection signaling messages and topology and resource
   discovery messages.

   The signaling network must be satisfied.  In order
to do this, highly scalable, with minimal
   performance degradations as the policy-based service management system should support
automated SLA monitoring systems that may number of nodes and node sizes
   increase.

   The signaling network shall be embedded in highly reliable and implement failure
   recovery.

   Security and resilience are crucial issues for the management
system or may signaling network
   will be separate entities. Mechanisms to report events addressed in Section 10 and 11 of not
meeting SLAs, or a customer repeatedly using more than this document.
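
   As a non-normative illustration of the message prioritization
   described above, the following Python sketch dispatches restoration
   messages ahead of ordinary connection signaling and discovery
   updates.  The priority values and class names are assumptions made
   only for this example.

      import heapq

      # Hypothetical priority classes: restoration outranks connection
      # signaling, which outranks topology/resource discovery updates.
      PRIORITY = {"restoration": 0, "connection": 1, "discovery": 2}

      class SignalingChannel:
          """Toy dispatcher illustrating message priorities."""
          def __init__(self):
              self._queue = []
              self._seq = 0           # keeps FIFO order within a class

          def send(self, msg_type, payload):
              item = (PRIORITY[msg_type], self._seq, payload)
              heapq.heappush(self._queue, item)
              self._seq += 1

          def next_message(self):
              # Time-critical restoration messages are transmitted first.
              if not self._queue:
                  return None
              return heapq.heappop(self._queue)[2]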

8.3.  Control Plane Interface to Data Plane

   In the SLA, should
be supported by situation where the SLA monitoring system.  Other off-line mechanisms
to forecast network traffic growth and congestion via simulation control plane and
modeling systems, may be data plane are provided
   by different suppliers, this interface needs to aid in efficient SLA management.
Another key aspect be standardized.
   Requirements for a standard control -data plane interface are under
   study. Control plane interface to SLA management the data plane is SLA translation.

Requirement 53.     In particular, policy-based Class outside the scope
   of Service this document.

8.4.  Management Plane Interface to Data Plane

   The management schemes plane is responsible for identifying which network
   resources that accurately translate customer SLAs the control plane may use to carry out its control
   functions.  Additional resources may be allocated or existing
   resources deallocated over time.

   Resources shall be able to be allocated to
  parameters that the underlying mechanisms and protocols control plane for
   control plane functions include resources involved in the
  optical transport network can understand, must be supported.

Consistent interpretation setting up and satisfaction of SLAs is especially
important when an IPO spans multiple domains or service providers.

7.6 Inter-Carrier Connectivity

Inter-carrier connectivity has
   tearing down calls and control plane specific implications on resources.  Resources
   allocated to the admission control plane for the purpose of setting up and SLA support aspects
   tearing down calls include access groups (a set of access points),
   connection point groups (a set of connection points). Resources
   allocated to the policy-based service management
system.
Multiple peering interfaces may be used between two carriers, whilst
any given carrier is likely to peer with multiple other carriers. These
peering interfaces must support all of control plane for the functions defined in section
9, although each operation of these functions has a special flavor when applied
to this interface.

Carriers will not allow other carriers the control over their network
resources, or visibility of their topology or resources.  Therefore,
topology and resource discovery should not be supported between
carriers. There plane
   itself may of course be instances where there is high degree
of trust between carriers, allowing topology include protected and resource discovery,
but this would be a rare exception.

Requirement 54.     Inter-carrier connectivity protecting control channels.

   Resources allocated to the control plane by the management plane
   shall be based on E-NNI.
To provide connectivity between clients connected able to different carriers
requires that client reachability information be exchanged between
carriers. Additional information regarding network peering points and
summarized network topology de-allocated from the control plane on management
   plane request.

   If resources are supporting an active connection and resource information will also have the resources
   are requested to be conveyed beyond de-allocated by management plane, the bounds of a single carrier. This information is
required to make route selections for connections traversing multiple
carriers.

Given that detailed topology and resource information is not available
outside a carrier's trust boundary, routing of connections over

multiple carriers will involve selection of control
   plane shall reject the autonomous systems
(ASs) traversed. This can be defined using a series of peering points.
More detailed route selection is then performed on a per carrier basis,
as request.  The management plane must either
   wait until the signaling requests resources are received at each carrier's peering
points. The detailed connection routing information should not no longer in use or tear down the
   connection before the resources can be
conveyed across de-allocated from the carrier trust boundary.

CAC, as described above, is necessary at each trust interface,
including those between carriers (see Section 11.2 for security
considerations).

Similar control
   plane. Management plane failures shall not affect active connections.

   Management plane failures shall not affect the normal operation of a
   configured and operational control plane or data plane.
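
   The de-allocation rule above can be illustrated with a small,
   non-normative Python sketch; the data structures shown are
   assumptions made purely for the example.

      def deallocate(resource, allocated, active_connections):
          """Management plane asks the control plane to release a
          resource; the request is rejected while any active connection
          still uses it."""
          in_use = any(resource in conn for conn in active_connections)
          if in_use:
              # The management plane must wait, or tear the connection
              # down first, before the resource can be de-allocated.
              return "REJECTED: resource supports an active connection"
          allocated.discard(resource)
          return "DE-ALLOCATED"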

8.5.  Control Plane Interface to Management Plane

   The control plane is considered a managed entity within a network.
   Therefore, it is subject to management requirements just as other
   managed entities in the network are subject to such requirements.

8.5.1.  Soft Permanent Connections (Point-and-click provisioning)

   In the case of SPCs, the management plane requests the control plane
   to set up / tear down a connection, just like what can be done over a
   UNI.

   The management plane shall be able to query on demand the status of
   the connection request.  The control plane shall report to the
   management plane the Success/Failure of a connection request.  Upon a
   connection request failure, the control plane shall report to the
   management plane a cause code identifying the reason for the failure.

8.5.2.  Resource Contention Resolution

   Since resources are allocated to the control plane for its use, there
   should not be contention between the management plane and the control
   plane for connection set-up.  Only the control plane can establish
   connections for allocated resources.  However, in general, the
   management plane shall have authority over the control plane.

   The control plane shall not assume authority over management plane
   provisioning functions.

   In the case of fault management, both the management plane and the
   control plane need fault information at the same priority.

   The control plane needs fault information in order to perform its
   restoration function (in the event that the control plane is
   providing this function).  However, the control plane needs less
   granular information than that required by the management plane.  For
   example, the control plane only needs to know whether the resource is
   good/bad.  The management plane would additionally need to know if a
   resource was degraded or failed, the reason for the failure, the time
   the failure occurred, and so on.

   The control plane shall not assume authority over the management
   plane for its management functions (FCAPS).

   The control plane shall be responsible for providing necessary
   statistical data, such as call counts and traffic counts, to the
   management plane.  These data should be available upon query from the
   management plane.

   The control plane shall support a policy-based CAC function either
   within the control plane or by providing an interface to a policy
   server outside the optical network.

   Topological information learned in the discovery process shall be
   able to be queried on demand from the management plane.

   The management plane shall be able to tear down connections
   established by the control plane both gracefully and forcibly on
   demand.
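
   To make the SPC interaction in Section 8.5.1 concrete, here is a
   non-normative Python sketch of a management plane request, the
   success/failure report with a cause code, and an on-demand status
   query.  The interface shown is hypothetical.

      class ControlPlane:
          """Toy model of the management-plane-facing behaviour above."""
          def __init__(self, path_computer):
              self._status = {}           # request id -> (result, cause)
              self._path_computer = path_computer

          def setup_connection(self, request_id, src, dst):
              path = self._path_computer(src, dst)
              if path is None:
                  # A failure is reported together with a cause code.
                  self._status[request_id] = ("FAILURE", "no-route")
              else:
                  self._status[request_id] = ("SUCCESS", None)
              return self._status[request_id]

          def query_status(self, request_id):
              # The management plane may query the request on demand.
              return self._status.get(request_id, ("UNKNOWN", None))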

8.6.  Control Plane Interconnection

   When two (sub)networks are interconnected at the transport plane
   level, so should the corresponding control networks be interconnected
   at the control plane.  The control plane interconnection model
   defines the way two control networks can be interconnected in terms
   of the controlling relationship and the control information flow
   allowed between them.

8.6.1.  Interconnection Models

   There are three basic types of control plane network interconnection
   models: overlay, peer and hybrid, which are defined by the IETF IPO
   WG document [IPO_frame].

   Choosing the level of coupling depends upon a number of different
   factors, some of which are:

   - Variety of clients using the optical network

   - Relationship between the client and the optical network

   - Operating model of the carrier

   The overlay model (UNI-like model) shall be supported for client to
   optical control plane interconnection.

   Other models are optional for client to optical control plane
   interconnection.

   For optical to optical control plane interconnection, all three
   models shall be supported.

9.  Requirements for Signaling, Routing and Discovery

9.1.  Requirements for information sharing over UNI, I-NNI and E-NNI

   There are three types of interfaces where routing information
   dissemination may occur: UNI, I-NNI and E-NNI.  Different types of
   interfaces shall impose different requirements and functionality due
   to their different trust relationships.  Over the UNI, the user
   network and the transport network form a client-server relationship.
   Therefore, the transport network topology shall not be disseminated
   from the transport network to the user network.

   Information flows expected over the UNI shall support the following:
   - Call control
   - Resource Discovery
   - Connection Control
   - Connection Selection

   Address resolution exchange over UNI is needed if an addressing
   directory service is not available.

   Information flows over the I-NNI shall support the following:
   - Resource Discovery
   - Connection Control
   - Connection Selection
   - Connection Routing

   Information flows over the E-NNI shall support the following:

   - Call Control
   - Resource Discovery
   - Connection Control
   - Connection Selection
   - Connection Routing
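
   The three lists above can be summarized, purely for illustration, by
   the following Python table of permitted information flows per
   interface type; the flow names are shorthand for the items listed
   above and carry no normative weight.

      FLOWS = {
          "UNI":   {"call control", "resource discovery",
                    "connection control", "connection selection"},
          "I-NNI": {"resource discovery", "connection control",
                    "connection selection", "connection routing"},
          "E-NNI": {"call control", "resource discovery",
                    "connection control", "connection selection",
                    "connection routing"},
      }

      def flow_allowed(interface, flow):
          # Note the absence of topology dissemination toward the user
          # over the UNI, reflecting the trust relationship above.
          return flow in FLOWS.get(interface, set())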

9.2.  Signaling Functions

   Call and connection control and management signaling messages are
   used for the establishment, modification, status query and release of
   an end-to-end optical connection.

9.2.1.  Call and connection control

   To support many enhanced optical services, such as scheduled
   bandwidth on demand and bundled connections, a call model based on
   the separation of the call control and connection control mechanisms
   is essential.  The call control is responsible for the end-to-end
   session negotiation, call admission control and call state
   maintenance, while connection control is responsible for setting up
   the connections associated with a call.  A call can correspond to
   zero, one or more connections depending upon the number of
   connections needed to support the call.

   This call model has the advantage of reducing redundant call control
   information at intermediate (relay) connection control nodes, thereby
   removing the burden of decoding and interpreting the entire message
   and its parameters.  Since the call control is provided at the
   ingress to the network or at gateways and boundaries, the relay
   bearer needs only provide the procedures to support switching
   connections.

   Call control is a signaling association between one or more user
   applications and the network to control the set-up, release,
   modification and maintenance of sets of connections.  Call control is
   used to maintain the association between parties, and a call may
   embody any number of underlying connections, including zero, at any
   instance of time.

   Call control may be realized by one of the following methods:

   - Separation of the call information into parameters carried by a
   single call/connection protocol

   - Separation of the state machines for call control and connection
   control, whilst the information is carried in a single
   call/connection protocol

   - Separation of information and state machines by providing separate
   signaling protocols for call control and connection control

   Call admission control is a policy function invoked by an Originating
   role in a Network and may involve cooperation with the Terminating
   role in the Network.  Note that a call being allowed to proceed only
   indicates that the call may proceed to request one or more
   connections.  It does not imply that any of those connection requests
   will succeed.  Call admission control may also be invoked at other
   network boundaries.

   Connection control is responsible for the overall control of
   individual connections.  Connection control may also be considered to
   be associated with link control.  The overall control of a connection
   is performed by the protocol undertaking the set-up and release
   procedures associated with a connection and the maintenance of the
   state of the connection.

   Connection admission control is essentially a process that determines
   if there are sufficient resources to admit a connection (or re-
   negotiates resources during a call).  This is usually performed on a
   link-by-link basis, based on local conditions and policy.  Connection
   admission control may refuse the connection request.

   The control plane shall support the separation of call control and
   connection control.

   The control plane shall support proxy signaling.

   Inter-domain signaling shall comply with G.8080 and G.7713 (ITU-T).

   The inter-domain signaling protocol shall be agnostic to the intra-
   domain signaling protocol within any of the domains within the
   network.

   Inter-domain signaling shall support both strict and loose routing.

   Inter-domain signaling shall not be assumed necessarily congruent
   with routing.  It should not be assumed that the same exact nodes are
   handling both signaling and routing in all situations.

   Inter-domain signaling shall support all call management primitives:

   - Per individual connections

   - Per groups of connections

   Inter-domain signaling shall support inter-domain notifications.

   Inter-domain signaling shall support a per connection global
   connection identifier for all connection management primitives.

   Inter-domain signaling shall support both positive and negative
   responses for all requests, including the cause, when applicable.

   Inter-domain signaling shall support all the connection attributes
   representative of the connection characteristics of the individual
   connections in scope.

   Inter-domain signaling shall support crank-back and rerouting.

   Inter-domain signaling shall support graceful deletion of
   connections, including of failed connections, if needed.

9.3.  Routing Functions

   Routing includes reachability information propagation, network
   topology/resource information dissemination and path computation.  In
   the optical network, each connection involves two user endpoints.
   When user endpoint A requests the optical network to set up a
   connection to user endpoint B, the optical network needs the
   reachability information to select a path for the connection.  If a
   user endpoint is unreachable, the connection request to that user
   endpoint shall be rejected.  Network topology/resource information
   dissemination is to provide each node in the network with stabilized
   and consistent information about the carrier network such that a
   single node is able to support constraint-based path selection.

   A mixture of hop-by-hop routing, explicit/source routing and
   hierarchical routing will likely be used within future transport
   networks.  Using hop-by-hop message routing, each node within a
   network makes routing decisions based on the message destination and
   the network topology/resource information or the local routing tables
   if available.  However, achieving efficient load balancing and
   establishing diverse connections are impractical using hop-by-hop
   routing.  Instead, explicit (or source) routing may be used to send
   signaling messages along a route calculated by the source.  This
   route, described using a set of nodes/links, is carried within the
   signaling message, and used in forwarding the message.

   Hierarchical routing supports signaling across NNIs.  It allows
   conveying summarized information across I-NNIs, and avoids conveying
   topology information across trust boundaries.  Each signaling message
   contains a list of the domains traversed, and potentially details of
   the route within the domain being traversed.

   All three mechanisms (hop-by-hop routing, explicit / source-based
   routing and hierarchical routing) must be supported.  Messages
   crossing trust boundaries must not contain information regarding the
   details of an internal network topology.  This is particularly
   important in traversing E-UNIs and E-NNIs.  Connection routes and
   identifiers encoded using topology information (e.g., node
   identifiers) must also not be conveyed over these boundaries.

   Requirements for routing information dissemination:

   Routing protocols must propagate the appropriate information
   efficiently to the network nodes.  The following requirements apply:

   The inter-domain routing protocol shall comply with G.8080 (ITU-T).

   The inter-domain routing protocol shall be agnostic to the intra-
   domain routing protocol within any of the domains within the network.

   The inter-domain routing protocol shall not impede any of the
   following routing paradigms within individual domains:

   - Hierarchical routing

   - Step-by-step routing

   - Source routing

   The exchange of the following types of information shall be supported
   by the inter-domain routing protocols:

   - Inter-domain topology

   - Per-domain topology abstraction

   - Per-domain reachability information

   - Metrics for routing decisions supporting load sharing, a range of
   service granularity and service types, restoration capabilities,
   diversity, and policy

   Inter-domain routing protocols shall support per domain topology and
   resource information abstraction.

   Inter-domain protocols shall support reachability information
   aggregation.

   A major concern for routing protocol performance is scalability and
   stability issues, which impose the following requirements on the
   routing protocols:

   - The routing protocol performance shall not largely depend on the
   scale of the network (e.g., the number of nodes, the number of links,
   end users, etc.).  The routing protocol design shall keep the network
   size effect as small as possible.

   - The routing protocols shall support the following scalability
   techniques:

   1. The routing protocol shall support hierarchical routing
   information dissemination, including topology information aggregation
   and summarization.

   2. The routing protocol shall be able to minimize global information
   and keep information locally significant as much as possible (e.g.,
   information local to a node, a sub-network, a domain, etc.).  For
   example, a single optical node may have thousands of ports.  The
   ports with common characteristics need not be advertised
   individually.

   3. The routing protocol shall distinguish static routing information
   and dynamic routing information.  Static routing information does not
   change due to connection operations, such as neighbor relationship,
   link attributes, total link bandwidth, etc.  On the other hand,
   dynamic routing information updates due to connection operations,
   such as link bandwidth availability, link multiplexing fragmentation,
   etc.

   4. The routing protocol operation shall update dynamic and static
   routing information differently.  Only dynamic routing information
   shall be updated in real time.

   5. The routing protocol shall be able to control the dynamic
   information updating frequency through different types of thresholds.
   Two types of thresholds could be defined: absolute threshold and
   relative threshold.  The dynamic routing information will not be
   disseminated if its difference is still inside the threshold.  When
   an update has not been sent for a specific time (this time shall be
   configurable by the carrier), an update is automatically sent.  The
   default time could be 30 minutes.

   All the scalability techniques will impact the network resource
   representation accuracy.  The tradeoff between accuracy of the
   routing information and the routing protocol scalability should be
   well studied.  A routing protocol shall allow the network operators
   to adjust the balance according to their networks' specific
   characteristics.

9.4.  Requirements for path selection

   The path selection algorithm must be able to compute a path which
   satisfies a list of service parameter requirements, such as service
   type requirements, bandwidth requirements, protection requirements,
   diversity requirements, bit error rate requirements, latency
   requirements, and including/excluding area requirements.  The
   characteristics of the path are those of the weakest link.  For
   example, if one of the links does not have link protection
   capability, the whole path should be declared as having no link-based
   protection.  The following are functional requirements on path
   selection.

   - Path selection shall support shortest path as well as constraint-
   based routing.

   - Various constraints may be required for constraint-based path
   selection, including but not limited to:
   - Cost
   - Load Sharing
   - Diversity
   - Service Class

   - Path selection shall be able to include/exclude some specific
   locations, based on policy.

   - Path selection shall be able to support protection/restoration
   capability.  Section 10 discusses this subject in detail.

   - Path selection shall be able to support different levels of
   diversity, including diversity routing and protection/restoration
   diversity.

   - Path selection algorithms shall provide carriers the ability to
   support a wide range of services and multiple levels of service
   classes.  Parameters such as service type, transparency, bandwidth,
   latency, bit error rate, etc. may be relevant.

   - Path selection algorithms shall support a set of requested routing
   constraints, and constraints of the networks.  Some of the
   constraints are technology specific, such as the constraints in all-
   optical networks addressed in [John_Angela_IPO_draft].  The requested
   constraints may include bandwidth requirements, diversity
   requirements, path specific requirements, as well as restoration
   requirements.
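
   As a rough, non-normative illustration of constraint-based path
   selection, the Python sketch below prunes links that cannot satisfy a
   requested bandwidth or that touch excluded locations before running a
   standard shortest-path search.  The graph encoding and parameter
   names are assumptions made for this example; real path selection
   would also weigh diversity, service class and the other constraints
   listed above.

      import heapq

      def select_path(graph, src, dst, min_bw, exclude=frozenset()):
          """graph[u] -> list of (v, cost, available_bw)."""
          dist, prev = {src: 0}, {}
          heap = [(0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  break
              if d > dist.get(u, float("inf")):
                  continue
              for v, cost, avail_bw in graph.get(u, []):
                  if avail_bw < min_bw or v in exclude:
                      continue              # constraint pruning
                  if d + cost < dist.get(v, float("inf")):
                      dist[v], prev[v] = d + cost, u
                      heapq.heappush(heap, (d + cost, v))
          if dst not in dist:
              return None                   # constraints cannot be met
          path, node = [dst], dst
          while node != src:
              node = prev[node]
              path.append(node)
          return list(reversed(path))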

9.5.  Automatic Discovery Functions

   This section describes the requirements for automatic discovery to
   aid distributed connection management (DCM) in the context of
   automatically switched transport networks (ASTN/ASON), as specified
   in ITU-T recommendation G.807.  Auto-discovery is applicable to the
   User-to-Network Interface (UNI), Network-Node Interfaces (NNI) and
   the Transport Plane Interfaces (TPI) of the ASTN.

   Automatic discovery functions include neighbor, resource and service
   discovery.

9.5.1.  Neighbor discovery

   This section provides the requirements for automatic neighbor
   discovery for the UNI and the NNI and TPI interfaces.  This
   requirement does not preclude manual configurations that may be
   required and in particular does not specify any mechanism that may be
   used for optimizing network management.

   Neighbor Discovery can be described as an instance of auto-discovery
   that is used for associating two subnet points that form a trail or a
   link connection in a particular layer network.  The association
   created through neighbor discovery is valid so long as the trail or
   link connection that forms the association is capable of carrying
   traffic.  This is referred to as transport plane neighbor discovery.
   In addition to transport plane neighbor discovery, auto-discovery can
   also be used for distributed subnet controller functions to establish
   adjacencies.  This is referred to as control plane neighbor
   discovery.  It should be noted that the sub network points that are
   associated, as part of neighbor discovery, do not have to be
   contained in network elements with physically adjacent ports.  Thus
   neighbor discovery is specific to the layer in which connections are
   to be made and consequently is principally useful only when the
   network has switching capability at this layer.  Further details on
   neighbor discovery can be obtained from ITU-T draft recommendations
   G.7713 and G.7714.

   Both control plane and transport plane neighbor discovery shall be
   supported.
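
   A minimal, non-normative sketch of the port-to-port association built
   by neighbor discovery is given below in Python; the hello message
   contents and refresh handling are simplified assumptions, not the
   G.7714 procedures themselves.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Hello:
          node_id: str        # advertising network element
          port_id: str        # advertising port

      class NeighborTable:
          def __init__(self):
              # local port -> (remote node, remote port) association
              self.adjacencies = {}

          def on_hello(self, local_port, hello):
              # The association is valid only while the trail or link
              # connection can carry traffic; ageing out stale entries
              # on a refresh timeout is omitted for brevity.
              self.adjacencies[local_port] = (hello.node_id,
                                              hello.port_id)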

9.5.2. Resource Discovery

   Resource discovery can be described as an instance of auto-discovery
   that is used for verifying the physical connectivity between two
   ports on adjacent network elements in the network.  Resource
   discovery is also concerned with the ability to improve inventory
   management of network resources, detect configuration mismatches
   between adjacent ports, associating port characteristics of adjacent
   network elements, etc.

   Resource discovery happens between neighbors.  A mechanism designed
   for a technology domain can be applied to any pair of NEs
   interconnected through interfaces of the same technology.  However,
   because resource discovery means certain information disclosure
   between two business domains, it is under the service providers'
   security and policy control.  In certain network scenarios, a service
   provider who owns the optical transport network may not be willing to
   disclose any internal addressing scheme to its client.  So a client
   NE may not have the neighbor NE address and port ID in its NE level
   resource table.

   Interface ports and their characteristics define the network element
   resources.  Each network element can store its resources in a local
   table that could include the switching granularity supported by the
   network element, the ability to support concatenated services, the
   range of bandwidths supported by adaptation, physical attributes
   (signal format, transmission bit rate, optics type, multiplexing
   structure, wavelength), and the direction of the flow of information.
   Resource discovery can be achieved through either manual provisioning
   or automated procedures.  The procedures are generic while the
   specific mechanisms and control information can be technology
   dependent.

   Resource discovery can be achieved in several methods.  One of the
   methods is self-resource discovery, by which the NE populates its
   resource table with the physical attributes of its resources.
   Neighbor discovery is another method, by which the NE discovers the
   adjacencies in the transport plane and their port association and
   populates the neighbor NE.  After neighbor discovery, resource
   verification and monitoring must be performed to verify physical
   attributes to ensure compatibility.  Resource monitoring must be
   performed periodically since neighbor discovery and port association
   are repeated periodically.  Further information can be found in
   [GMPLS-ARCH].

   Resource discovery shall be supported.
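
   The port-level resource table and the configuration-mismatch check
   mentioned above might look like the following non-normative Python
   sketch; the attribute list is a simplified assumption drawn from the
   characteristics enumerated in this section.

      PORT_ATTRS = ("signal_format", "bit_rate", "optics_type",
                    "multiplexing", "wavelength")

      def port_record(**attrs):
          # e.g. port_record(signal_format="SONET", bit_rate="OC-48")
          return {k: attrs.get(k) for k in PORT_ATTRS}

      def mismatches(local, remote):
          """Return the attributes on which two associated ports
          disagree, as resource verification after neighbor discovery
          might report."""
          return [k for k in PORT_ATTRS
                  if local.get(k) != remote.get(k)]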

9.5.3. Service Discovery

   Service Discovery can be described as an instance of auto-discovery
   that is used for verifying and exchanging service capabilities that
   are supported by a particular link connection or trail.  It is
   assumed that service discovery would take place after two Sub Network
   Points within the layer network are associated through neighbor
   discovery.  However, since service capabilities of a link connection
   or trail can dynamically change, service discovery can take place at
   any time after neighbor discovery and any number of times as may be
   deemed necessary.

   Service discovery is required for all the services supported.

10.  Requirements for separation might be just pure
politics that play out in service and control plane resiliency

   Resiliency is the network capability to continue its operations
   under the condition of failures within the network.  The automatic
   switched optical network assumes the separation of control plane
   and data plane.  Therefore the failures in the network can be
   divided into those affecting the data plane and those affecting the
   control plane.  To provide enhanced optical services, resiliency
   measures in both the data plane and the control plane should be
   implemented.  The following failure handling principles shall be
   supported.

   The control plane shall provide the failure detection and recovery
   functions such that failures in the data plane within the control
   plane coverage can be quickly mitigated.

   The failure of the control plane shall not in any way adversely
   affect the normal functioning of existing optical connections in
   the data plane.

10.1.  Service resiliency

   In circuit-switched transport networks, the quality and reliability
   of established optical connections in the transport plane can be
   enhanced by the protection and restoration mechanisms provided by
   the control plane functions.  Rapid recovery is required by
   transport network providers to protect service and also to support
   stringent Service Level Agreements (SLAs) that dictate high
   reliability and availability for customer connectivity.

   The choice of a protection/restoration mechanism is a tradeoff
   between network resource utilization (cost) and service
   interruption time.  Clearly, minimizing service interruption time
   is desirable, but schemes achieving this usually do so at the
   expense of network resources, resulting in increased cost to the
   provider.  Different protection/restoration schemes differ in their
   spare capacity requirements and service interruption time.

   In light of these tradeoffs, transport providers are expected to
   support a range of different levels of service offerings,
   characterized by the recovery speed in the event of network
   failures.  For example, a provider's highest offered service level
   would generally ensure the most rapid recovery from network
   failures.  However, such schemes (e.g., 1+1, 1:1 protection)
   generally use a large amount of spare restoration capacity, and are
   thus not cost effective for most customer applications.
   Significant reductions in spare capacity can be achieved by
   protection and restoration using shared network resources.

   Clients will have different requirements for connection
   availability.  These requirements can be expressed in terms of the
   "service level", which can be mapped to different restoration and
   protection options and priority related connection
   characteristics, such as holding priority (e.g., pre-emptable or
   not), set-up priority, or restoration priority.  However, the
   mapping of individual service levels to a specific set of
   protection/restoration options and connection priorities will be
   determined by individual carriers.

   In order for the network to support multiple grades of service, the
   control plane must support differing protection and restoration
   options on a per connection basis.

   In order for the network to support multiple grades of service, the
   control plane must support setup priority, restoration priority and
   holding priority on a per connection basis.

   In general, the following protection schemes shall be considered
   for all protection cases within the network:
   - Dedicated protection: 1+1 and 1:1
   - Shared protection: 1:N and M:N
   - Unprotected

   In general, the following restoration schemes should be considered
   for all restoration cases within the network:
   - Shared restoration capacity
   - Un-restorable

   Protection and restoration can be done on an end-to-end basis per
   connection.  It can also be done on a per span or link basis
   between two adjacent network nodes.  Specifically, the link can be
   a link between two nodes within the network, where the P&R scheme
   operates across an NNI interface, or a drop-side link between the
   edge device and a switch node, where the P&R scheme operates across
   a UNI interface.  End-to-end path protection and restoration
   schemes operate between access points across all NNI and UNI
   interfaces supporting the connection.

   In order for the network to support multiple grades of service, the
   control plane must support differing protection and restoration
   options on a per link or span basis within the network.

   In order for the network to support multiple grades of service, the
   control plane must support differing protection and restoration
   options on a per link or span basis for dropped customer
   connections.

   The protection and restoration actions are usually triggered by
   failures in the network.  However, during network maintenance
   affecting protected connections, a network operator needs to
   proactively force the traffic on a protected connection to switch
   to its protection connection.  Therefore, in order to support easy
   network maintenance, it is required that management-initiated
   protection and restoration be supported.

   To support the protection/restoration options, the control plane
   shall support configurable protection and restoration options via
   software commands (as opposed to needing hardware
   reconfigurations) to change the protection/restoration mode.

   The control plane shall support mechanisms to establish primary and
   protection paths.

   The control plane shall support mechanisms to modify protection
   assignments, subject to service protection constraints.

   The control plane shall support methods for fault notification to
   the nodes responsible for triggering restoration / protection.
   (Note that the transport plane is designed to provide the needed
   information between termination points.  This information is
   expected to be utilized as appropriate.)

   The control plane shall support mechanisms for rapid
   re-establishment of connection connectivity after failure.

   The control plane shall support mechanisms for reserving bandwidth
   resources for restoration.

   The control plane shall support mechanisms for normalizing
   connection routing (reversion) after failure repair.

   The signaling control plane should implement signaling message
   priorities to ensure that restoration messages receive preferential
   treatment, resulting in faster restoration.

   Normal connection management operations (e.g., connection deletion)
   shall not result in protection/restoration being initiated.

   Restoration shall not result in mis-connections (connections
   established to a destination other than that intended), even for
   short periods of time (e.g., during contention resolution).  For
   example, signaling messages used to restore connectivity after
   failure should not be forwarded by a node before contention has
   been resolved.

   In the event of there being insufficient bandwidth available to
   restore all connections, restoration priorities / pre-emption
   should be used to determine which connections should be allocated
   the available capacity.

   The amount of restoration capacity reserved on the restoration
   paths determines the robustness of the restoration scheme to
   network failures.  For example, a network operator may choose to
   reserve sufficient capacity to ensure that all shared restorable
   connections can be recovered in the event of any single failure
   event (e.g., a conduit being cut).  A network operator may instead
   reserve more or less capacity than required to handle any single
   failure event, or may alternatively choose to reserve only a fixed
   pool independent of the number of connections requiring this
   capacity (i.e., not reserve capacity for each individual
   connection).
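
   As an informative illustration of how restoration priorities might
   drive the allocation of a limited reserved restoration pool, the
   following sketch ranks failed connections by a carrier-assigned
   priority and allocates capacity until the pool is exhausted.  The
   data structures and priority semantics are assumptions made for the
   example, not a prescribed algorithm.

      # Illustrative sketch: allocate a limited pool of reserved
      # restoration capacity to failed connections in order of
      # restoration priority (lower value = more important).
      def allocate_restoration(failed_connections, reserved_capacity):
          restored, not_restored = [], []
          ordered = sorted(failed_connections, key=lambda c: c[2])
          for name, demand, _priority in ordered:
              if demand <= reserved_capacity:
                  reserved_capacity -= demand
                  restored.append(name)
              else:
                  not_restored.append(name)
          return restored, not_restored

      failed = [("vpn-a", 10, 0), ("best-effort-b", 40, 2),
                ("voice-c", 10, 1)]
      print(allocate_restoration(failed, reserved_capacity=25))
      # (['vpn-a', 'voice-c'], ['best-effort-b'])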

10.2.  Control plane resiliency

   The control plane may be affected by failures in the signaling
   network and by software failures (e.g., signaling, topology and
   resource discovery modules).

   Fast detection and recovery from failures in the control plane are
   important to allow normal network operation to continue in the
   event of signaling channel failures.

   The optical control plane shall support protection and restoration
   options to enable it to be self-healing in case of failures within
   the control plane.  The control plane shall support the necessary
   options to ensure that no service-affecting module of the control
   plane (software modules or control plane communications) is a
   single point of failure.  The control plane shall provide reliable
   transfer of signaling messages and flow control mechanisms for
   easing any congestion within the control plane.  Control plane
   failures shall not cause failure of established data plane
   connections.  Control network failure detection mechanisms shall
   distinguish between control channel and software process failures.

   When there are multiple channels (optical fibers or multiple
   wavelengths) between network elements and / or client devices,
   failure of the control channel will have a much bigger impact on
   service availability than in the single-channel case.  It is
   therefore recommended to support a certain level of protection of
   the control channel.  Control channel failures may be recovered by
   either using dedicated protection of control channels, or by
   re-routing control traffic within the control plane (e.g., using
   the self-healing properties of IP).  To achieve this requires rapid
   failure detection and recovery mechanisms.  For dedicated control
   channel protection, signaling traffic may be switched onto a backup
   control channel between the same adjacent pairs of nodes.  Such
   mechanisms protect against control channel failure, but not against
   node failure.

   If a dedicated backup control channel is not available between
   adjacent nodes, or if a node failure has occurred, then signaling
   messages should be re-routed around the failed link / node.

   Fault localization techniques for isolation of failed control
   resources shall be supported.

   Recovery from signaling process failures can be achieved by
   switching to a standby module, or by re-launching the failed
   signaling module.

   Recovery from software failures shall result in complete recovery
   of the network state.

   Control channel failures may occur during connection establishment,
   modification or deletion.  If this occurs, then the control channel
   failure must not result in partially established connections being
   left dangling within the network.  Connections affected by a
   control channel failure during the establishment process must be
   removed from the network, re-routed (cranked back) or continued
   once the failure has been resolved.  In the case of connection
   deletion requests affected by control channel failures, the
   connection deletion process must be completed once the signaling
   network connectivity is recovered.

   Connections shall not be left partially established as a result of
   a control plane failure.  Connections affected by a control channel
   failure during the establishment process must be removed from the
   network, re-routed (cranked back) or continued once the failure has
   been resolved.  Partial connection creations and deletions must be
   completed once the control plane connectivity is recovered.
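
   The control channel recovery behaviour described above (prefer a
   dedicated backup channel between the same adjacent nodes, otherwise
   re-route control traffic around the failed link or node) can be
   pictured with the following sketch.  The channel objects and the
   re-route hook are hypothetical.

      # Illustrative sketch: recover the control channel between two
      # adjacent nodes.  Prefer a dedicated backup channel; otherwise
      # re-route control traffic (e.g. relying on IP self-healing).
      def recover_control_channel(primary, backup, reroute):
          if primary.is_up:
              return "primary"             # nothing to do
          if backup is not None and backup.is_up:
              backup.activate()            # dedicated channel protection
              return "backup"
          reroute()                        # re-route within control plane
          return "rerouted"

      class Channel:
          def __init__(self, is_up):
              self.is_up = is_up
          def activate(self):
              print("switching signaling traffic to backup channel")

      print(recover_control_channel(
          Channel(False), Channel(True),
          reroute=lambda: print("re-routing around failure")))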

11.  Security Considerations

   In this section, security considerations and requirements for
   optical services and the associated control plane are described.

11.1.  Optical Network Security Concerns

   Since optical service is directly related to the physical network,
   which is fundamental to a telecommunications infrastructure,
   stringent security assurance mechanisms should be implemented in
   optical networks.  When designing equipment, protocols, NMS, and
   OSS that participate in optical service, every security aspect
   should be considered carefully in order to avoid any security holes
   that potentially cause dangers to an entire network, such as Denial
   of Service (DoS) attacks, unauthorized access, masquerading, etc.

   In terms of security, an optical connection consists of two
   aspects.  One is security of the data plane where an optical
   connection itself belongs, and the other is security of the control
   plane.

11.1.1.  Data Plane Security

   - Misconnection shall be avoided in order to keep the user's data
   confidential.  For enhancing integrity and confidentiality of data,
   it may be helpful to support scrambling of data at layer 2 or
   encryption of data at a higher layer.

11.1.2.  Control Plane Security

   It is desirable to decouple the control plane from the data plane
   physically.

   Additional security mechanisms should be provided to guard against
   intrusions on the signaling network.  Some of these may be done
   with the help of the management plane.

   - Network information shall not be advertised across exterior
   interfaces (E-UNI or E-NNI).  The advertisement of network
   information across the E-NNI shall be controlled and limited in a
   configurable, policy-based fashion.  The advertisement of network
   information shall be isolated and managed separately by each
   administration.

   - The signaling network itself shall be secure, blocking all
   unauthorized access.  The signaling network topology and addresses
   shall not be advertised outside a carrier's domain of trust.

   - Identification, authentication and access control shall be
   rigorously used for providing access to the control plane.

   - Discovery information, including neighbor discovery, service
   discovery, resource discovery and reachability information, should
   be exchanged in a secure way.  This is an optional NNI requirement.

   - The UNI shall support ongoing identification and authentication
   of the UNI-C entity (i.e., each user request shall be
   authenticated).

   - The UNI and NNI should provide optional mechanisms to ensure
   origin authentication and message integrity for management requests
   such as connection set-up, tear-down and modify, and for connection
   signaling messages.  This is important in order to prevent Denial
   of Service attacks.  The NNI (especially the E-NNI) should also
   include mechanisms to ensure non-repudiation of connection
   management messages.

   - Information on security-relevant events occurring in the control
   plane, or security-relevant operations performed or attempted in
   the control plane, shall be logged in the management plane.

   - The management plane shall be able to analyze and exploit logged
   data in order to check whether they violate or threaten the
   security of the control plane.

   - The control plane shall be able to generate alarm notifications
   about security related events to the management plane in an
   adjustable and selectable fashion.

   - The control plane shall support recovery from successful and
   attempted intrusion attacks.

   - The desired level of security depends on the type of interfaces
   and the accounting relation between the two adjacent sub-networks
   or domains.  Typically, in-band control channels are perceived as
   more secure than out-of-band, out-of-fiber channels, which may be
   partly colocated with a public network.
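
   One conventional way to provide per-request origin authentication
   and message integrity on the UNI is a keyed hash over each
   signaling request; the sketch below uses HMAC-SHA-256 purely as an
   illustration.  The request encoding and key provisioning shown are
   assumptions, not part of this document.

      # Illustrative sketch: per-request origin authentication and
      # message integrity for a UNI connection request (HMAC-SHA-256).
      import hashlib
      import hmac

      SHARED_KEY = b"provisioned-out-of-band"   # assumed pre-shared key

      def sign_request(request):
          return hmac.new(SHARED_KEY, request, hashlib.sha256).digest()

      def verify_request(request, tag):
          expected = hmac.new(SHARED_KEY, request,
                              hashlib.sha256).digest()
          return hmac.compare_digest(expected, tag)  # constant time

      req = b"SETUP src=ne1:1/1 dst=ne9:2/4 rate=STM-16"
      tag = sign_request(req)
      print(verify_request(req, tag))                   # True
      print(verify_request(req + b" tampered", tag))    # False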

11.2.  Service Access Control

   From a security perspective, network resources should be protected
   from unauthorized accesses and should not be used by unauthorized
   entities.  Service Access Control is the mechanism that limits and
   controls entities trying to access network resources.  Especially
   on the public UNI, Connection Admission Control (CAC) functions
   should also support the following security features:

   - CAC should be applied to any entity that tries to access network
   resources through the public UNI (or E-UNI).  CAC should include an
   authentication function of an entity in order to prevent masquerade
   (spoofing).  Masquerade is fraudulent use of network resources by
   pretending to be a different entity.  An authenticated entity
   should be given a service access level on a configurable policy
   basis.

   - Each entity should be authorized to use network resources
   according to the service level given.

   - With the help of CAC, usage-based billing should be realized.
   CAC and usage-based billing should be stringent enough to avoid any
   repudiation.  Repudiation means that an entity involved in a
   communication exchange subsequently denies the fact.
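
   A minimal sketch of Connection Admission Control at a public UNI is
   given below, combining authentication, a configurable service
   access level, and a usage record that could feed usage-based
   billing.  All names and the policy table are hypothetical.

      # Illustrative sketch: Connection Admission Control (CAC) at a
      # public UNI.  An entity is authenticated, mapped to a configured
      # service access level, and admitted usage is recorded.
      CREDENTIALS = {"client-a": "secret-a"}        # assumed provisioning
      ACCESS_LEVEL = {"client-a": {"max_rate_mbps": 2500}}

      class Cac:
          def __init__(self):
              self.usage_log = []      # record for usage-based billing

          def admit(self, entity, credential, requested_rate_mbps):
              if CREDENTIALS.get(entity) != credential:
                  return "rejected: authentication failed (masquerade)"
              policy = ACCESS_LEVEL.get(entity, {"max_rate_mbps": 0})
              if requested_rate_mbps > policy["max_rate_mbps"]:
                  return "rejected: exceeds authorized service level"
              self.usage_log.append((entity, requested_rate_mbps))
              return "admitted"

      cac = Cac()
      print(cac.admit("client-a", "secret-a", 622))   # admitted
      print(cac.admit("client-a", "wrong", 622))      # rejected
      print(cac.usage_log)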

12.  Acknowledgements
   The authors of this document would like to acknowledge the valuable
   inputs from John Strand, Yangguang Xu, Deborah Brunhard, Daniel
   Awduche, Jim Luciani, Lynn Neir, Wesam Alanqar, Tammy Ferris, Mark
   Jones and Gerry Ash.

References

   [carrier-framework]  Y. Xue et al., "Carrier Optical Services
   Framework and Associated UNI Requirements", draft-many-carrier-
   framework-uni-00.txt, IETF, Nov. 2001.

   [G.807]  ITU-T Recommendation G.807 (2001), "Requirements for the
   Automatic Switched Transport Network (ASTN)".

   [G.dcm]  ITU-T New Recommendation G.dcm, "Distributed Connection
   Management (DCM)".

   [G.8080]  ITU-T New Recommendation G.8080, "Architecture for the
   Automatically Switched Optical Network (ASON)".

   [oif2001.196.0]  M. Lazer, "High Level Requirements on Optical
   Network Addressing", oif2001.196.0.

   [oif2001.046.2]  J. Strand and Y. Xue, "Routing For Optical
   Networks With Multiple Routing Domains", oif2001.046.2.

   [ipo-impairements]  J. Strand et al., "Impairments and Other
   Constraints on Optical Layer Routing",
   draft-ietf-ipo-impairments-00.txt, work in progress.

   [ccamp-gmpls]  Y. Xu et al., "A Framework for Generalized
   Multi-Protocol Label Switching (GMPLS)",
   draft-many-ccamp-gmpls-framework-00.txt, July 2001.

   [mesh-restoration]  G. Li et al., "RSVP-TE extensions for shared
   mesh restoration in transport networks",
   draft-li-shared-mesh-restoration-00.txt, July 2001.

   [sis-framework]  Y. T'Joens et al., "Service Level Specification
   and Usage Framework", draft-manyfolks-sls-framework-00.txt, IETF,
   Oct. 2000.

   [control-frmwrk]  G. Bernstein et al., "Framework for MPLS-based
   Control of Optical SDH/SONET Networks",
   draft-bms-optical-sdhsonet-mpls-control-frmwrk-00.txt, IETF,
   Nov. 2000.

   [ccamp-req]  J. Jiang et al., "Common Control and Measurement Plane
   Framework and Requirements", draft-walker-ccamp-req-00.txt, CCAMP,
   August 2001.

   [tewg-measure]  W. S. Lai et al., "A Framework for Internet Traffic
   Engineering Measurement", draft-wlai-tewg-measure-01.txt, IETF,
   May 2001.

   [ccamp-g.709]  A. Bellato, "G.709 Optical Transport Networks GMPLS
   Control Framework", draft-bellato-ccamp-g709-framework-00.txt,
   CCAMP, June 2001.

   [onni-frame]  D. Papadimitriou, "Optical Network-to-Network
   Interface Framework and Signaling Requirements",
   draft-papadimitriou-onni-frame-01.txt, IETF, Nov. 2000.

   [oif2001.188.0]  R. Graveman et al., "OIF Security Requirements",
   oif2001.188.0.
   Author's Addresses

   Yong Xue
   UUNET/WorldCom
   22001 Loudoun County Parkway
   Ashburn, VA 20147
   Phone: +1 (703) 886-5358
   Email: yong.xue@wcom.com

   Monica Lazer
   AT&T
   900 ROUTE 202/206N PO BX 752
   BEDMINSTER, NJ  07921-0000
   mlazer@att.com

   Jennifer Yates,
   AT&T Labs
   180 PARK AVE, P.O. BOX 971
   FLORHAM PARK, NJ  07932-0000
   jyates@research.att.com

   Dongmei Wang
   AT&T Labs
   Room B180, Building 103
   180 Park Avenue
   Florham Park, NJ 07932
   mei@research.att.com

   Ananth Nagarajan
   Sprint
   9300 Metcalf Ave
   Overland Park, KS 66212, USA
   ananth.nagarajan@mail.sprint.com

   Hirokazu Ishimatsu
   Japan Telecom Co., LTD
   2-9-1 Hatchobori, Chuo-ku,
   Tokyo 104-0032 Japan
   Phone: +81 3 5540 8493
   Fax: +81 3 5540 8485
   EMail: hirokazu@japan-telecom.co.jp

   Olga Aparicio
   Cable & Wireless Global
   11700 Plaza America Drive
   Reston, VA 20191
   Phone: 703-292-2022
   Email: olga.aparicio@cwusa.com

   Steven Wright
   Science & Technology
   BellSouth Telecommunications
   41G70 BSC
   675 West Peachtree St. NE.
   Atlanta, GA 30375
   Phone +1 (404) 332-2194
   Email: steven.wright@snt.bellsouth.com

Appendix A Commonly Required Signal Rates

   The table below outlines the different signal rates and
   granularities for the SONET and SDH signals.

           SDH        SONET        Transported signal
           name       name
           RS64       STS-192      STM-64 (STS-192) signal without
                      Section      termination of any OH.
           RS16       STS-48       STM-16 (STS-48) signal without
                      Section      termination of any OH.
           MS64       STS-192      STM-64 (STS-192); termination of
                      Line         RSOH (section OH) possible.
           MS16       STS-48       STM-16 (STS-48); termination of
                      Line         RSOH (section OH) possible.
           VC-4-64c   STS-192c-    VC-4-64c (STS-192c-SPE);
                      SPE          termination of RSOH (section OH),
                                   MSOH (line OH) and VC-4-64c TCM OH
                                   possible.
           VC-4-16c   STS-48c-     VC-4-16c (STS-48c-SPE);
                      SPE          termination of RSOH (section OH),
                                   MSOH (line OH) and VC-4-16c TCM OH
                                   possible.
           VC-4-4c    STS-12c-     VC-4-4c (STS-12c-SPE); termination
                      SPE          of RSOH (section OH), MSOH (line
                                   OH) and VC-4-4c TCM OH possible.
           VC-4       STS-3c-      VC-4 (STS-3c-SPE); termination of
                      SPE          RSOH (section OH), MSOH (line OH)
                                   and VC-4 TCM OH possible.
           VC-3       STS-1-SPE    VC-3 (STS-1-SPE); termination of
                                   RSOH (section OH), MSOH (line OH)
                                   and VC-3 TCM OH possible.
                                   Note: In SDH it could be a higher
                                   order or lower order VC-3; this is
                                   identified by the sub-addressing
                                   scheme.  In case of a lower order
                                   VC-3 the higher order VC-4 OH can
                                   be terminated.
           VC-2       VT6-SPE      VC-2 (VT6-SPE); termination of
                                   RSOH (section OH), MSOH (line OH),
                                   higher order VC-3/4 (STS-1-SPE) OH
                                   and VC-2 TCM OH possible.
           -          VT3-SPE      VT3-SPE; termination of section
                                   OH, line OH, higher order STS-1-
                                   SPE OH and VC3-SPE TCM OH possible.
           VC-12      VT2-SPE      VC-12 (VT2-SPE); termination of
                                   RSOH (section OH), MSOH (line OH),
                                   higher order VC-3/4 (STS-1-SPE) OH
                                   and VC-12 TCM OH possible.
           VC-11      VT1.5-SPE    VC-11 (VT1.5-SPE); termination of
                                   RSOH (section OH), MSOH (line OH),
                                   higher order VC-3/4 (STS-1-SPE) OH
                                   and VC-11 TCM OH possible.

   The tables below outline the different signals, rates and
   granularities that have been defined for the OTN in G.709.

   OTU type         OTU nominal bit rate        OTU bit rate tolerance
   OTU1             255/238 * 2 488 320 kbit/s       20 ppm
   OTU2             255/237 * 9 953 280 kbit/s
   OTU3             255/236 * 39 813 120 kbit/s

   NOTE - The nominal OTUk rates are approximately: 2,666,057.143 kbit/s
   (OTU1), 10,709,225.316 kbit/s (OTU2) and 43,018,413.559 kbit/s
   (OTU3).

   ODU type         ODU nominal bit rate       ODU bit rate tolerance
   ODU1             239/238 * 2 488 320 kbit/s      20 ppm
   ODU2             239/237 * 9 953 280 kbit/s
   ODU3             239/236 * 39 813 120 kbit/s

   NOTE - The nominal ODUk rates are approximately: 2,498,775.126 kbit/s
   (ODU1), 10 037 273.924 kbit/s (ODU2) and 40 319 218.983 kbit/s
   (ODU3).  ODU Type and Capacity (G.709)

   OPU type     OPU Payload nominal bit rate    OPU Payload bit rate
                                                tolerance
   OPU1         2488320 kbit/s                   20 ppm
   OPU2         238/237 * 9953280 kbit/s
   OPU3         238/236 * 39813120 kbit/s
   NOTE - The nominal OPUk Payload rates are approximately:
   2,488,320.000 kbit/s (OPU1 Payload), 9,995,276.962 kbit/s (OPU2
   Payload) and 40,150,519.322 kbit/s (OPU3 Payload).
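
   The approximate OTUk, ODUk and OPUk payload rates quoted in the
   notes above follow directly from the listed multipliers; the short
   calculation below reproduces them (values in kbit/s).

      # Reproduce the approximate G.709 rates from the multipliers
      # listed above (kbit/s); a numerical check of the NOTE values.
      base = {1: 2_488_320, 2: 9_953_280, 3: 39_813_120}

      for k, rate in base.items():
          otu = 255 / (239 - k) * rate   # OTUk multiplier 255/(239-k)
          odu = 239 / (239 - k) * rate   # ODUk multiplier 239/(239-k)
          opu = 238 / (239 - k) * rate   # OPUk multiplier 238/(239-k)
          print("OTU%d %.3f  ODU%d %.3f  OPU%d %.3f"
                % (k, otu, k, odu, k, opu))

      # OTU1 2666057.143   ODU1 2498775.126   OPU1 2488320.000
      # OTU2 10709225.316  ODU2 10037273.924  OPU2 9995276.962
      # OTU3 43018413.559  ODU3 40319218.983  OPU3 40150519.322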

Appendix B:  Protection and Restoration Schemes

   For the NE populates its

resource table with purposes of this discussion, the physical attributes and resources. Neighbor
discovery following
   protection/restoration definitions have been provided:

   Reactive Protection: This is another method a function performed by which NE discovers the adjacencies in either equipment
   management functions and/or the transport plane (i.e. depending on if
   it is equipment protection or facility protection and their port association and populates so on) in
   response to failures or degraded conditions. Thus if the
neighbor NE. After neighbor discovery resource verification and
monitoring must control
   plane and/or management plane is disabled, the reactive protection
   function can still be performed to verify physical attributes to ensure
compatibility. Resource monitoring must performed. Reactive protection requires that
   protecting resources be performed periodically since
neighbor discovery configured and port association are repeated periodically.
Further information can reserved (i.e. they cannot be found in [GMPLS-ARCH].

10. Requirements
   used for service and control plane resiliency

There other services). The time to exercise the protection is a range
   technology specific and designed to protect from service
   interruption.

   Proactive Protection: In this form of failures that can occur within a network, including
node failures (e.g. office outages, natural disasters), link failures
(e.g. fiber cuts, failures arising protection, protection events
   are initiated in response to planned engineering works (often from diverse circuits traversing
shared facilities (e.g. conduit cuts)) and channel failures (e.g. laser
failures).

Failures a
   centralized operations center). Protection events may be divided into those affecting triggered
   manually via operator request or based on a schedule supported by a
   soft scheduling function. This soft scheduling function may be
   performed by either the data management plane and or the control plane .

Requirement 126.    The ASON architecture and associated protocols
  shall include redundancy/protection options such that any single
  failure event shall not impact but
   could also be part of the data plane or equipment management functions. If the
   control plane.

10.1 Service resiliency

Rapid protection/restoration from data plane failures and/or management plane is a crucial
aspect of current disabled and future transport networks. Rapid recovery this is
required by transport network providers to protect service and also to
support stringent Service Level Agreements (SLAs) where
   the soft scheduling function is performed, the proactive protection
   function cannot be performed. [Note that dictate high
reliability and availability for customer connectivity.

The choice In the case of a protection/restoration policy is
   hierarchical model of subnetworks, some protection may remain
   available in the case of partial failure (i.e. failure of a tradeoff between
network resource utilization (cost) and service interruption time.

Clearly, minimized service interruption time is desirable, single
   subnetwork control plane or management plane controller) relates to
   all those entities below the failed subnetwork controller, but schemes
achieving this usually do so at not
   its parents or peers.] Proactive protection requires that protecting
   resources be configured and reserved (i.e. they cannot be used for
   other services) prior to the expense of network resource
utilization, resulting in increased cost protection exercise. The time to
   exercise the provider. Different
protection/restoration schemes operate with different tradeoffs between
spare capacity requirements protection is technology specific and service interruption time.

In light of these tradeoffs, transport providers are expected designed to
support a range of different
   protect from service offerings, with interruption.

   Reactive Restoration: This is a strong
differentiating factor between these service offerings being service
interruption time function performed by either the
   management plane or the control plane. Thus if the control plane
   and/or management plane is disabled, the restoration function cannot
   be performed. [Note that in the event case of network failures. For example, a
provider's highest offered service level would generally ensure the
most rapid recovery from network failures. However, such schemes (e.g.,
1+1, 1:1 protection) generally use a large amount hierarchical model of spare
   subnetworks, some restoration
capacity, and are thus not cost effective for most customer
applications. Significant reductions may remain available in spare the case of
   partial failure (i.e. failure of a single subnetwork control plane or
   management plane controller) relates to all those entities below the
   failed subnetwork controller, but not its parents or peers.]
   Restoration capacity can may be achieved
by instead sharing this capacity across shared among multiple independent failures.

Clients will have different requirements for connection availability.
These requirements can be expressed in terms of demands. A
   restoration path is created after detecting the "service level",
which describes restoration/protection options and priority related
connection characteristics, such as holding priority(e.g. pre-emptable failure.  Path
   selection could be done either off-line or not), set-up priority, on-line. The path
   selection algorithms may also be executed in real-time or restoration priority. Therefore, mapping
of individual service levels non-real
   time depending upon their computational complexity, implementation,
   and specific network context.

   - Off-line computation may be facilitated by simulation and/or
   network planning tools. Off-line computation can help provide
   guidance to subsequent real-time computations.

   - On-line computation may be done whenever a specific set of
protection/restoration options and connection priorities will request is
   received.

   Off-line and on-line path selection may be
determined by individual carriers.

Requirement 127.    In order for the used together to make
   network operation more efficient. Operators could use on-line
   computation to support multiple grades handle a subset of service, the control plane must identify, assign, path selection decisions and track
  multiple protection use
   off-line computation for complicated traffic engineering and restoration options.

For the purposes of this discussion, the following
protection/restoration definitions have been provided:

Reactive Protection: policy
   related issues such as demand planning, service scheduling, cost
   modeling and global optimization.

   Proactive Restoration: This is a function performed by either equipment
management functions and/or the transport
   management plane (i.e. depending on if
it is equipment protection or facility protection and so on) in
response to failures or degraded conditions. the control plane. Thus if the control plane
   and/or management plane is disabled, the reactive protection restoration function
can still cannot
   be performed. Reactive protection requires [Note that protecting
resources be configured and reserved (i.e. they cannot be used for
other services). The time to exercise the protection is technology
specific and designed to protect from service interruption.

Proactive Protection: In this form of protection, protection events are
initiated in response to planned engineering works (often from a
centralized operations center). Protection events may be triggered
manually via operator request or based on a schedule supported by a
soft scheduling function. This soft scheduling function may be
performed by either the management plane or the control plane but could
also be part of the equipment management functions. If the control
plane and/or management plane is disabled and this is where the soft
scheduling function is performed, the proactive protection function
cannot be performed. [Note that In the case of a hierarchical model of
subnetworks, some protection may remain available in the case of
partial failure (i.e. failure of a single subnetwork control plane or
management plane controller) relates to all those entities below the
failed subnetwork controller, but not its parents or peers.] Proactive
protection requires that protecting resources be configured and
reserved (i.e. they cannot be used for other services) prior to the
protection exercise. The time to exercise the protection is technology
specific and designed to protect from service interruption.

Reactive Restoration: This is a function performed by either the
management plane or the control plane. Thus if the control plane and/or
management plane is disabled, the restoration function cannot be
performed. [Note that in in the case of a hierarchical model of
   subnetworks, some restoration may remain available in the case of
   partial failure (i.e. failure of a single subnetwork control plane or
   management plane controller) relates to all those entities below the
   failed subnetwork controller, but not its parents or peers.]
   Restoration capacity may be shared among multiple demands. A Part or
   all of the restoration path is created after before detecting the failure.  Path
selection could be done either off-line or on-line. The path selection
algorithms may also be executed in real-time or non-real time depending
upon their computational complexity, implementation, and specific
network context.
. Off-line computation may be facilitated by simulation and/or network
  planning tools. Off-line computation can help provide guidance to
  subsequent real-time computations.
. On-line computation may be done whenever a connection request is
  received.
Off-line and on-line path selection may be used together to make
network operation more efficient. Operators could use on-line
computation to handle a subset of path selection decisions and use off-
line computation for complicated traffic engineering and policy related
issues such as demand planning, service scheduling, cost modeling and
global optimization.

Proactive Restoration: This is a function performed by either the
management plane or the control plane. Thus if the control plane and/or
management plane is disabled, the restoration function cannot be
performed. [Note that in the case of a hierarchical model of
subnetworks, some restoration may remain available in the case of
partial failure (i.e. failure of a single subnetwork control plane or
management plane controller): such a failure affects only those
entities below the failed subnetwork controller, but not its parents or
peers.] Restoration capacity may be shared among multiple demands. Part
or all of the restoration path is created before detecting the failure,
depending on the algorithms used, the types of restoration options
supported (e.g. shared restoration/connection pool, dedicated
restoration pool), whether the end-to-end call is protected or just the
UNI or NNI part, available resources, and so on. In the event the
restoration path is fully pre-allocated, a protection switch must occur
upon failure, similar to the reactive protection switch. The main
difference between the options in this case is that the switch occurs
through actions of the control plane rather than the transport plane.
Path selection could be done either off-line or on-line. The path
selection algorithms may also be executed in real-time or non-real time
depending upon their computational complexity, implementation, and
specific network context.
. Off-line computation may be facilitated by simulation and/or network
  planning tools. Off-line computation can help provide guidance to
  subsequent real-time computations.
. On-line computation may be done whenever a connection request is
  received.

Off-line and on-line path selection may be used together to make
network operation more efficient. Operators could use on-line
computation to handle a subset of path selection decisions and use
off-line computation for complicated traffic engineering and policy
related issues such as demand planning, service scheduling, cost
modeling and global optimization.
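
The interplay of off-line and on-line computation described above can
be illustrated with the following non-normative Python sketch: an
off-line planning tool seeds a table of candidate paths, and the
on-line computation consults that table at connection-request time,
falling back to an on-demand shortest-path search. The topology, the
table contents and the simple Dijkstra search are illustrative
assumptions only, not part of this document's requirements.

   import heapq

   # Illustrative topology: node -> {neighbor: link cost}.
   TOPOLOGY = {
       "A": {"B": 1, "C": 4},
       "B": {"A": 1, "C": 1, "D": 5},
       "C": {"A": 4, "B": 1, "D": 1},
       "D": {"B": 5, "C": 1},
   }

   # Off-line computation: candidate paths produced by a planning tool.
   OFFLINE_PATHS = {("A", "D"): ["A", "B", "C", "D"]}

   def online_shortest_path(src, dst):
       """On-line computation: Dijkstra shortest path at request time."""
       queue = [(0, src, [src])]
       visited = set()
       while queue:
           cost, node, path = heapq.heappop(queue)
           if node == dst:
               return path
           if node in visited:
               continue
           visited.add(node)
           for nbr, weight in TOPOLOGY[node].items():
               if nbr not in visited:
                   heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
       return None

   def select_path(src, dst):
       """Prefer the off-line (planned) path; otherwise compute on-line."""
       return OFFLINE_PATHS.get((src, dst)) or online_shortest_path(src, dst)

   if __name__ == "__main__":
       print(select_path("A", "D"))   # planned path from the off-line tool
       print(select_path("B", "D"))   # no planned entry: computed on-line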

Multiple protection/restoration options are required to support the
range of offered services. NNI protection/restoration schemes operate
between two adjacent nodes, with NNI protection/restoration involving
switching to a protection/restoration connection when a failure occurs.
UNI protection schemes operate between the edge device and a switch
node (i.e. at the access or drop). End-to-end path
protection/restoration schemes operate between access points (i.e.
connections are protected/restored across all NNI and UNI interfaces
supporting the call).

In general, the following protection schemes should be considered for
all protection cases within the network:
. Dedicated protection (e.g., 1+1, 1:1)
. Shared protection (e.g., 1:N, M:N). This allows the network to ensure
  high quality service for customers, while still managing its physical
  resources efficiently (see the non-normative sketch following these
  lists).
. Unprotected

In general, the following restoration schemes should be considered for
all restoration cases within the network:
. Dedicated restoration capacity
. Shared restoration capacity. This allows the network to ensure high
  quality of service for customers, while still managing its physical
  resources efficiently.
. Un-restorable
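
As an informative illustration of the dedicated, shared and unprotected
options listed above, the following non-normative Python sketch models
a 1:N shared protection group and checks whether another working
connection can be added without exceeding the configured sharing ratio.
The class and its fields are hypothetical and carry no normative
weight.

   from dataclasses import dataclass, field

   @dataclass
   class SharedProtectionGroup:
       """Hypothetical 1:N protection group: one protecting resource
       shared by at most max_working (N) working connections."""
       protecting_resource: str
       max_working: int                      # the "N" in 1:N
       working: list = field(default_factory=list)

       def can_admit(self) -> bool:
           return len(self.working) < self.max_working

       def admit(self, connection_id: str) -> bool:
           # Admit a working connection only while the sharing ratio
           # holds, keeping service quality while using resources
           # efficiently.
           if not self.can_admit():
               return False
           self.working.append(connection_id)
           return True

   if __name__ == "__main__":
       group = SharedProtectionGroup("protect-wave-1", max_working=3)
       for conn in ["c1", "c2", "c3", "c4"]:
           verdict = "admitted" if group.admit(conn) else "rejected (group full)"
           print(conn, verdict)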

To support the protection/restoration options:

Requirement 128.    The control plane shall support multiple options
  for access (UNI), span (NNI), topology, resource and end-to-end Path
  protection/restoration.

Requirement 129.    The control plane shall support configurable
  protection/restoration options via software commands (as opposed to
  needing hardware reconfigurations) to change the
  protection/restoration mode.

Requirement 130.    The control plane shall support mechanisms to
  establish primary and protection paths.

Requirement 131.    The control plane shall support mechanisms to
  modify protection assignments, subject to service protection
  constraints.

Requirement 132.    The control plane shall support methods for fault
  notification to the nodes responsible for triggering restoration /
  protection (note that the transport plane is designed to provide the
  needed information between termination points. This information is
  expected to be utilized as appropriate.)

Requirement 133.    The control plane shall support mechanisms for
  signaling rapid re-establishment peers of connection connectivity after
  failure.

Requirement 134.    The control plane shall support mechanisms for
  reserving restoration bandwidth.

Requirement 135.    The control plane shall support mechanisms for
  normalizing connection routing after failure repair.

Requirement 136.    The control plane should implement signaling
  message priorities to ensure that restoration messages receive
  preferential treatment, resulting in faster restoration.

Requirement 137.    Normal connection operations (e.g., connection
  deletion) shall not result in protection/restoration being initiated.

Requirement 138.    Restoration shall not result in mis-connections
  (connections established to a destination other than that intended),
  even for short periods of time (e.g., during contention resolution).
  For example, signaling messages used to restore connectivity after
  failure should not be forwarded by a node before contention has been
  resolved.

Requirement 139.    In the event of there being insufficient bandwidth
  available to restore all connections, restoration priorities /
  pre-emption should be used to determine which connections should be
  allocated the available capacity.
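
One possible, non-normative illustration of Requirement 139 in Python:
connections are restored in strict priority order until the available
restoration capacity is exhausted; pre-emption of already-restored
lower-priority connections is not modelled. The priority scale,
capacity units and data layout are assumptions made purely for
illustration.

   def allocate_restoration(connections, available_capacity):
       """connections: list of (connection_id, priority, capacity_needed),
       where a lower priority value means more important.
       Returns (restored, unrestored)."""
       restored, unrestored = [], []
       # Serve the most important connections first.
       for conn_id, priority, needed in sorted(connections, key=lambda c: c[1]):
           if needed <= available_capacity:
               available_capacity -= needed
               restored.append(conn_id)
           else:
               unrestored.append(conn_id)
       return restored, unrestored

   if __name__ == "__main__":
       demands = [("gold-1", 0, 10), ("silver-1", 1, 10), ("bronze-1", 2, 10)]
       # With 15 units only the highest-priority demand fits in full.
       print(allocate_restoration(demands, available_capacity=15))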

The amount of restoration capacity reserved on the restoration paths
determines the robustness of the restoration scheme to failures. For
example, a network operator may choose to reserve sufficient capacity
to ensure that all shared restorable connections can be recovered in
the event of any single failure event (e.g., a conduit being cut). A
network operator may instead reserve more or less capacity than that
required to handle any single failure event, or may alternatively
choose to reserve only a fixed pool independent of the number of
connections requiring this capacity (i.e., not reserve capacity for
each individual connection).
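
The dimensioning choice above can be made concrete with a small worked
example. The non-normative Python sketch below sizes the shared
restoration capacity needed on each link as the maximum demand that any
single failure event would re-route onto it; the failure scenarios and
demand figures are invented purely for illustration.

   # Hypothetical single-failure scenarios: for each failure event, the
   # capacity (arbitrary units) that would be re-routed onto each link.
   SCENARIOS = {
       "cut-conduit-1": {"link-X": 20, "link-Y": 5},
       "cut-conduit-2": {"link-X": 10, "link-Y": 15},
       "node-failure-N3": {"link-Y": 25},
   }

   def shared_capacity_per_link(scenarios):
       """Reserve, per link, the worst case (maximum over single
       failures) of re-routed demand, rather than the sum over all
       failures, since the capacity is shared."""
       needed = {}
       for impact in scenarios.values():
           for link, demand in impact.items():
               needed[link] = max(needed.get(link, 0), demand)
       return needed

   if __name__ == "__main__":
       # {'link-X': 20, 'link-Y': 25}: enough for any one failure.
       print(shared_capacity_per_link(SCENARIOS))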

10.2 Control plane resiliency

Requirement 140.    The control plane network shall support protection
  and restoration options to enable it to be robust to failures.

Requirement 141.    The control plane shall support the necessary
  options to ensure that no service-affecting module of the control
  plane (software modules or control plane communications) is a single
  point of failure.

Requirement 142.    The control plane should support options to enable
  it to be self-healing.

Requirement 143.    The control plane shall provide reliable transfer
  of signaling messages and flow control mechanisms for restricting the
  transmission of signaling packets where appropriate.

The control plane may be affected by failures in signaling network
connectivity and by software failures (e.g., signaling, routing,
topology distribution, and resource discovery modules).

Requirement 144.    Control plane failures shall not cause failure of
  established data plane connections.

Fast detection and recovery from failures in the control plane are
important to allow normal network operation to continue in the event of
signaling channel failures.

Requirement 145.    Control network failure detection mechanisms shall
  distinguish between control channel and software process failures.

Different recovery techniques are initiated for the different failures.
When there are multiple channels (optical fibers or multiple
wavelengths) between devices, failure of the control channel will have
a much bigger impact on the service availability than in the
single-channel case. It is therefore recommended to support a certain
level of protection of the control channel. Control channel failures
may be recovered either by using dedicated protection of control
channels, or by re-routing control traffic within the control plane
(e.g., using the self-healing properties of IP). To achieve this
requires rapid failure detection and recovery mechanisms.

For dedicated control channel protection, signaling traffic may be
switched onto a backup control channel between the same adjacent pairs
of nodes. Such mechanisms protect against control channel failure, but
not against node failure.
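
A non-normative Python sketch of the recovery behaviour described
above: on loss of the primary control channel the controller first
tries a dedicated backup channel to the same neighbour and, failing
that, re-routes signaling traffic around the failed link or node. The
function, its arguments and the returned action strings are assumptions
for illustration only.

   def recover_control_channel(primary_ok, backup_ok, reroute_available):
       """Return the action a (hypothetical) controller would take when
       the primary control channel to a neighbour is lost."""
       if primary_ok:
           return "no action: primary control channel healthy"
       if backup_ok:
           # Dedicated protection: switch signaling onto the backup
           # channel between the same adjacent pair of nodes.
           return "switch signaling to dedicated backup control channel"
       if reroute_available:
           # No dedicated backup (or node failure): re-route signaling
           # around the failed link/node, e.g. via IP self-healing.
           return "re-route signaling messages around the failed link/node"
       return "raise alarm: control plane connectivity lost"

   if __name__ == "__main__":
       print(recover_control_channel(primary_ok=False, backup_ok=True,
                                     reroute_available=True))
       print(recover_control_channel(primary_ok=False, backup_ok=False,
                                     reroute_available=True))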

Requirement 146.    If a dedicated backup control channel is not
  available between adjacent nodes, or if a node failure has occurred,
  then signaling messages should be re-routed around the failed link /
  node.

Requirement 147.    Fault localization techniques for the isolation of
  failed control resources shall be supported.

Recovery from signaling process failures can be achieved by switching
to a standby module, or by re-launching the failed signaling module.

Requirement 148.    Recovery from software failures shall result in
  complete recovery of network state.

Control channel failures may occur during connection establishment,
modification or deletion. If this occurs, then the control channel
failure must not result in partially established connections being left
dangling within the network. Connections affected by a control channel
failure during the establishment process must be removed from the
network, re-routed (cranked back) or continued once the failure has
been resolved. In the case of connection deletion requests affected by
control channel failures, the connection deletion process must be
completed once the signaling network connectivity is recovered.

Requirement 149.    Connections shall not be left partially established
  as a result of a control plane failure.

Requirement 150.    Connections affected by a control channel failure
  during the establishment process must be removed from the network,
  re-routed (cranked back) or continued once the failure has been
  resolved.

Requirement 151.    Partial connection creations and deletions must be
  completed once the control plane connectivity is recovered.

Control channel and signaling software failures shall not cause
disruptions in established connections in the data plane, and signaling
messages affected by control plane outages should not result in
partially established connections remaining within the network.
Control channel and signaling software failures shall not cause
management plane failures.
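
The non-normative sketch below illustrates the behaviour required above
for connections caught by a control channel failure during
establishment: the partial connection is removed, cranked back for
re-routing, or continued once the channel recovers. The state names and
the policy argument are illustrative assumptions, not a specification.

   def handle_setup_failure(partial_hops, channel_recovered, policy="remove"):
       """partial_hops: hops already reserved when the control channel
       failed. Returns (action, hops_kept); no partially established
       connection may be left dangling."""
       if channel_recovered and policy == "continue":
           return "continue setup from last reserved hop", partial_hops
       if policy == "crankback":
           # Release the reserved hops and ask the source to re-route.
           return "crank back to source and re-route", []
       # Default: remove the partial connection from the network.
       return "remove partial connection", []

   if __name__ == "__main__":
       hops = ["OXC-1", "OXC-2"]
       print(handle_setup_failure(hops, channel_recovered=False))
       print(handle_setup_failure(hops, channel_recovered=True,
                                  policy="continue"))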

11. Security concerns and requirements

In this section, security concerns and requirements of optical
connections are described.

11.1 Data Plane Security and Control Plane Security

In terms of security, an optical connection consists of two aspects.
One is security of the data plane where an optical connection itself
belongs, and the other is security of the control plane by which an
optical connection is controlled.

11.1.1 Data Plane Security

Requirement 152.    Misconnection shall be avoided in order to keep
  user's data confidential.

Requirement 153.    For enhancing integrity and confidentiality of
  data, it may be helpful to support scrambling of data at layer 2 or
  encryption of data at a higher layer.

11.1.2 Control Plane Security

It is desirable to decouple the control plane from the data plane
physically.

Additional security mechanisms should be provided to guard against
intrusions on the signaling network.

Requirement 154.    Network information shall not be advertised across
  exterior interfaces (E-UNI or E-NNI). The advertisement of network
  information across the E-NNI shall be controlled and limited in a
  configurable, policy-based fashion. The advertisement of network
  information shall be isolated and managed separately by each
  administration.
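
One possible, non-normative reading of Requirement 154 is sketched
below in Python: before any routing information is sent across an
exterior interface, a per-administration policy strips internal
topology detail and passes only the attributes explicitly permitted.
The attribute names and policy structure are assumptions for
illustration only.

   # Hypothetical per-interface advertisement policy.
   ENNI_POLICY = {
       "allowed_attributes": {"reachable_prefixes", "service_classes"},
       "max_prefixes": 100,
   }

   def filter_advertisement(advertisement, policy=ENNI_POLICY):
       """Drop everything not explicitly permitted before it crosses an
       exterior interface (E-UNI/E-NNI); interior detail never leaks."""
       filtered = {k: v for k, v in advertisement.items()
                   if k in policy["allowed_attributes"]}
       prefixes = filtered.get("reachable_prefixes", [])
       filtered["reachable_prefixes"] = prefixes[:policy["max_prefixes"]]
       return filtered

   if __name__ == "__main__":
       internal = {
           "reachable_prefixes": ["192.0.2.0/24", "198.51.100.0/24"],
           "service_classes": ["gold", "silver"],
           "internal_topology": {"OXC-1": ["OXC-2", "OXC-3"]},  # must not leak
           "link_utilization": {"OXC-1->OXC-2": 0.7},           # must not leak
       }
       print(filter_advertisement(internal))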

Requirement 155.    Identification, authentication and access control
  shall be rigorously used for providing access to the control plane.

Requirement 156.    UNI shall support ongoing identification and
  authentication of the UNI-C entity (i.e., each user request shall be
  authenticated).

Editor's Note: The control plane shall have an audit trail and log with
timestamps recording access.

11.2 Service Access Control

From a security perspective, network resources should be protected from
unauthorized access and should not be used by unauthorized entities.
Service Access Control is the mechanism that limits and controls
entities trying to access network resources. Especially on the public
UNI, Connection Admission Control (CAC) should be implemented and
should support the following features:

Requirement 157.    CAC should be applied to any entity that tries to
  access network resources through the public UNI. CAC should include
  an authentication function for an entity in order to prevent
  masquerade (spoofing). Masquerade is the fraudulent use of network
  resources by pretending to be a different entity. An authenticated
  entity should be given a service access level on a configurable
  policy basis.

Requirement 158.    Each entity should be authorized to use network
  resources according to the service level given.

Requirement 159.    With the help of CAC, usage-based billing should be
  realized. CAC and usage-based billing should be stringent enough to
  avoid any repudiation. Repudiation means that an entity involved in a
  communication exchange subsequently denies the fact.
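
The sketch below gives a non-normative Python illustration of
Requirements 157-159: a request arriving over the public UNI is
authenticated, mapped to a configured service access level, authorized
against that level, and logged so that usage-based billing and
non-repudiation are possible. The credential store, service levels and
log format are all hypothetical.

   import hashlib
   import time

   # Hypothetical credential store and per-entity service access levels.
   CREDENTIALS = {"client-42": hashlib.sha256(b"secret").hexdigest()}
   ACCESS_LEVEL = {"client-42": "gold"}       # configured on a policy basis
   LEVEL_LIMIT = {"gold": 10, "silver": 2}    # max connections per level
   AUDIT_LOG = []                             # supports billing/non-repudiation

   def admit(entity, secret, requested_connections):
       """Connection Admission Control check at the public UNI."""
       digest = hashlib.sha256(secret.encode()).hexdigest()
       if CREDENTIALS.get(entity) != digest:
           AUDIT_LOG.append((time.time(), entity, "rejected: authentication"))
           return False                       # prevents masquerade (spoofing)
       level = ACCESS_LEVEL.get(entity)
       if level is None or requested_connections > LEVEL_LIMIT[level]:
           AUDIT_LOG.append((time.time(), entity, "rejected: authorization"))
           return False
       AUDIT_LOG.append((time.time(), entity,
                         f"admitted {requested_connections} at level {level}"))
       return True

   if __name__ == "__main__":
       print(admit("client-42", "secret", 3))     # True
       print(admit("client-42", "wrong", 3))      # False: fails authentication
       print(len(AUDIT_LOG), "audit records")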

11.3 Optical Network Security Concerns

Since optical services are directly related to the layer 1 network that
is fundamental to the telecom infrastructure, stringent security
assurance mechanisms should be implemented in optical networks. When
designing equipment, protocols, NMS, and OSS that participate in
optical services, every security aspect should be considered carefully
in order to avoid security holes that could endanger an entire network,
such as denial-of-service (DoS) attacks or unauthorized access.

Appendix C Interconnection of Control Planes

   The interconnection of the IP router (client) and optical control
   planes can be realized in a number of ways depending on the required
   level of coupling.  The control planes can be loosely or tightly
   coupled.  Loose coupling is generally referred to as the overlay
   model and tight coupling is referred to as the peer model.
   Additionally there is the augmented model that is somewhat in between
   the other two models but more akin to the peer model.  The model
   selected determines the following:

   - The details of the reachability information advertised between the
     client and optical networks

   - The level of control IP routers can exercise in selecting paths
     across the optical network

   The next three sections discuss these models in more detail,
   followed by a discussion of the coupling requirements from a
   carrier's perspective.

C.1. Peer Model (I-NNI like model)

   Under the peer model, the IP router clients act as peers of the
   optical transport network, such that a single routing protocol
   instance runs over both the IP and optical domains.  In this regard
   the optical network elements are treated just like any other router
   as far as the control plane is concerned.  The peer model, although
   not strictly an internal NNI, behaves like an I-NNI in the sense that
   there is sharing of resource and topology information.

   Presumably a common IGP such as OSPF or IS-IS, with appropriate
   extensions, will be used to distribute topology information.  One
   tacit assumption here is that a common addressing scheme will also be
   used for the optical and IP networks.  A common address space can be
   trivially realized by using IP addresses in both the IP and optical
   domains.  Thus, the optical network elements become IP addressable
   entities.

   The obvious advantage of the peer model is the seamless
   interconnection between the client and optical transport networks.
   The tradeoff is the tight integration and the optical-specific
   routing information that must be known to the IP clients.

   The discussion above has focused on the client to optical control
   plane interconnection.  The discussion applies equally well to
   interconnecting two optical control planes.

C.2. Overlay (UNI-like model)

   Under the overlay model, the IP client routing, topology
   distribution, and signaling protocols are independent of the routing,
   topology distribution, and signaling protocols at the optical layer.
   This model is conceptually similar to the classical IP over ATM
   model, but applied to an optical sub-network directly.

   Though the overlay model dictates that the client and optical network
   are independent, this still allows the optical network elements to
   re-use IP layer protocols to perform the routing and/or signaling
   functions.

   In addition to the protocols being independent, the addressing scheme
   used between the client and optical network must be independent in
   the overlay model.  That is, the use of IP layer addressing in the
   clients must not place any specific requirement upon the addressing
   used within the optical control plane.

   The overlay model would provide a UNI to the client networks through
   which the clients could request to add, delete or modify optical
   connections.  The optical network would additionally provide
   reachability information to the clients, but no topology information
   would be provided across the UNI.

C.3. Augmented model (E-NNI like model)

   Under the augmented model, there are actually separate routing
   instances in the IP and optical domains, but information from one
   routing instance is passed through the other routing instance.  For
   example, external IP addresses could be carried within the optical
   routing protocols to allow reachability information to be passed to
   IP clients.  A typical implementation would use BGP between the IP
   client and optical network.

   The augmented model, although not strictly an external NNI, behaves
   like an E-NNI in that there is limited sharing of information.

   Generally in a carrier environment there will be more than just IP
   routers connected to the optical network.  Some other examples of
   clients could be ATM switches or SONET ADM equipment.  This may drive
   the decision towards loose coupling to prevent undue burdens upon
   non-IP router clients.  Also, loose coupling would ensure that future
   clients are not hampered by legacy technologies.

   Additionally, a carrier may for business reasons want a separation
   between the client and optical networks.  For example, the ISP
   business unit may not want to be tightly coupled with the optical
   network business unit.  Another reason for separation might be simply
   the politics that play out in a large carrier: it would seem unlikely
   that the optical transport network could be forced to run the same
   set of protocols as the IP router networks.  Also, by forcing the
   same set of protocols in both networks, the evolution of the networks
   is directly tied together; it would seem you could not upgrade the
   transport network protocols without taking into consideration the
   impact on the IP router network (and vice versa).

   Operating models also play a role in deciding the level of coupling.
   [Freeland] gives four main operating models envisioned for an optical
   transport network:

   - ISP owning all of its own infrastructure (i.e., including fiber and
     duct to the customer premises)

   - ISP leasing some or all of its capacity from a third party

   - Carrier providing layer 1 services

   - Service provider offering multiple layer 1, 2, and 3 services over
     a common infrastructure

   Although relatively few, if any, ISPs fall into category 1, that
   category would seem the most likely of the four to use the peer
   model.  The other operating models would lend themselves more to the
   overlay model.  Most carriers would fall into category 4 and thus
   would most likely choose an overlay model architecture.
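
To make the overlay-model UNI described in C.2 above concrete, the
non-normative Python sketch below shows a client forming a connection
request across a UNI: the client supplies only endpoint and service
parameters, receives reachability (but no topology) information from
the network, and never sees optical-layer addressing. Field names and
values are illustrative assumptions, not a protocol definition.

   # Reachability advertised by the optical network across the UNI
   # (no topology information is exposed to the client).
   UNI_REACHABILITY = {"client-A", "client-B", "client-C"}

   def build_uni_request(action, src, dst, bandwidth_gbps, service_class):
       """Client-side construction of an add/delete/modify request; the
       client refers to endpoints by client-layer names only."""
       if action not in {"add", "delete", "modify"}:
           raise ValueError("unsupported UNI action")
       if dst not in UNI_REACHABILITY:
           raise ValueError(f"{dst} not reachable via this UNI")
       return {
           "action": action,
           "source": src,
           "destination": dst,
           "bandwidth_gbps": bandwidth_gbps,
           "service_class": service_class,   # e.g. protected / unprotected
       }

   if __name__ == "__main__":
       print(build_uni_request("add", "client-A", "client-B", 10, "protected"))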

Acknowledgements

The authors of this document would like to acknowledge the valuable
inputs from Yangguang Xu, Deborah Brunhard, Daniel Awduche, Jim
Luciani, Mark Jones and Gerry Ash.

References

[carrier-framework]  Y. Xue et al., "Carrier Optical Services Framework
and Associated UNI Requirements", draft-many-carrier-framework-uni-
00.txt, IETF, Nov. 2001.
[G.807]  ITU-T Recommendation G.807 (2001), "Requirements for the
Automatic Switched Transport Network (ASTN)".
[G.dcm]  ITU-T New Recommendation G.dcm, "Distributed Connection
Management (DCM)".
[G.ason] ITU-T New recommendation G.ason, "Architecture for the
Automatically Switched Optical Network (ASON)".
[oif2001.196.0]  M. Lazer, "High Level Requirements on Optical Network
Addressing", oif2001.196.0.
[oif2001.046.2]  J. Strand and Y. Xue, "Routing For Optical Networks
With Multiple Routing Domains", oif2001.046.2.
[ipo-impairements]  J. Strand et al.,  "Impairments and Other
Constraints on Optical Layer Routing", draft-ietf-ipo-impairments-
00.txt, work in progress.
[ccamp-gmpls] Y. Xu et al., "A Framework for Generalized Multi-Protocol
Label Switching (GMPLS)", draft-many-ccamp-gmpls-framework-00.txt, July
2001.
[mesh-restoration] G. Li et al., "RSVP-TE extensions for shared mesh
restoration in transport networks", draft-li-shared-mesh-restoration-
00.txt, July 2001.
[sis-framework]  Yves T'Joens et al., "Service Level Specification and
Usage Framework", draft-manyfolks-sls-framework-00.txt, IETF, Oct.
2000.
[control-frmwrk] G. Bernstein et al., "Framework for MPLS-based control
of Optical SDH/SONET Networks", draft-bms-optical-sdhsonet-mpls-
control-frmwrk-00.txt, IETF, Nov. 2000.
[ccamp-req]    J. Jiang et al.,  "Common Control and Measurement Plane
Framework and Requirements",  draft-walker-ccamp-req-00.txt, CCAMP,
August, 2001.
[tewg-measure]  W. S. Lai et al., "A Framework for Internet Traffic
Engineering Measurement", draft-wlai-tewg-measure-01.txt, IETF, May
2001.
[ccamp-g.709]   A. Bellato, "G. 709 Optical Transport Networks GMPLS
Control Framework",
draft-bellato-ccamp-g709-framework-00.txt, CCAMP, June, 2001.
[onni-frame]  D. Papadimitriou, "Optical Network-to-Network Interface
Framework and Signaling Requirements", draft-papadimitriou-onni-frame-
01.txt, IETF, Nov. 2000.
[oif2001.188.0]  R. Graveman et al., "OIF Security Requirements",
oif2001.188.0.

Author's Addresses

Yong Xue
UUNET/WorldCom
22001 Loudoun County Parkway
Ashburn, VA 20147
Phone: +1 (703) 886-5358
Email: yxue@uu.net

John Strand
AT&T Labs
100 Schulz Dr.,
Rm 4-212 Red Bank,
NJ 07701, USA
Phone: +1 (732) 345-3255
Email: jls@att.com

Monica Lazer
AT&T
900 ROUTE 202/206N PO BX 752
BEDMINSTER, NJ  07921-0000
mlazer@att.com

Jennifer Yates,
AT&T Labs
180 PARK AVE, P.O. BOX 971
FLORHAM PARK, NJ  07932-0000
jyates@research.att.com

Dongmei Wang
AT&T Labs
Room B180, Building 103
180 Park Avenue
Florham Park, NJ 07932
mei@research.att.com

Ananth Nagarajan
Wesam Alanqar
Lynn Neir
Tammy Ferris
Sprint
9300 Metcalf Ave
Overland Park, KS 66212, USA
ananth.nagarajan@mail.sprint.com
wesam.alanqar@mail.sprint.com
lynn.neir@mail.sprint.com
tammy.ferris@mail.sprint.com

Hirokazu Ishimatsu
Japan Telecom Co., LTD
2-9-1 Hatchobori, Chuo-ku,
Tokyo 104-0032 Japan
Phone: +81 3 5540 8493
Fax: +81 3 5540 8485
EMail: hirokazu@japan-telecom.co.jp

Olga Aparicio
Cable & Wireless Global
11700 Plaza America Drive
Reston, VA 20191
Phone: 703-292-2022
Email: olga.aparicio@cwusa.com

Steven Wright
Science & Technology
BellSouth Telecommunications
41G70 BSC
675 West Peachtree St. NE.
Atlanta, GA 30375
Phone +1 (404) 332-2194
Email: steven.wright@snt.bellsouth.com