INTERNET-DRAFT                                                  Yong Xue
Document: draft-ietf-ipo-carrier-requirements-02.txt           Worldcom Inc.
Category: Informational                                         (Editor)

Expiration Date: September, 2002

                                                            Monica Lazer
                                                          Jennifer Yates
                                                            Dongmei Wang
                                                                    AT&T

                                                        Ananth Nagarajan
                                                                  Sprint

                                                      Hirokazu Ishimatsu
                                                  Japan Telecom Co., LTD

                                                           Steven Wright
                                                               Bellsouth

                                                           Olga Aparicio
                                                 Cable & Wireless Global
                                                              March 2002

                 Carrier Optical Services Requirements

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026. Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or rendered obsolete by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   Abstract

   This Internet Draft describes the major carriers' service
   requirements for the automatic switched optical networks (ASON)
   from both an end-user's as well as an operator's perspective. Its
   focus is on the description of the service building blocks and
   service-related control plane functional requirements. The
   management functions for the optical services and their underlying
   networks are beyond the scope of this document and will be
   addressed in a separate document.

   Table of Contents
   1. Introduction                                          3
    1.1 Justification                                       4
    1.2 Conventions used in this document                   4
    1.3 Value Statement                                     4
    1.4 Scope of This Document                              5
   2. Abbreviations                                         7
   3. General Requirements                                  7
    3.1 Separation of Networking Functions                  7
    3.2 Separation of Call and Connection Control           8
    3.3 Network and Service Scalability                     9
    3.4 Transport Network Technology                       10
    3.5 Service Building Blocks                            11
   4. Service Models and Applications                      11
    4.1 Service and Connection Types                       11
    4.2 Examples of Common Service Models                  12
   5. Network Reference Model                              13
    5.1 Optical Networks and Subnetworks                   13
    5.2 Network Interfaces                                 14
    5.3 Intra-Carrier Network Model                        17
    5.4 Inter-Carrier Network Model                        18
   6. Optical Service User Requirements                    19
    6.1 Common Optical Services                            19
    6.2 Bearer Interface Types                             20
    6.3 Optical Service Invocation                         20
    6.4 Optical Connection Granularity                     22
    6.5 Other Service Parameters and Requirements          23
   7. Optical Service Provider Requirements                24
    7.1 Access Methods to Optical Networks                 24
    7.2 Dual Homing and Network Interconnections           24
    7.3 Inter-domain connectivity                          25
    7.4 Names and Address Management                       26
    7.5 Policy-Based Service Management Framework          26
   8. Control Plane Functional Requirements for Optical
      Services                                             27
    8.1 Control Plane Capabilities and Functions           27
    8.2 Signaling Control Message Transport Network        29
    8.3 Control Plane Interface to Data Plane              31
    8.4 Management Plane Interface to Data Plane           31
    8.5 Control Plane Interface to Management Plane        31
    8.6 Control Plane Interconnection                      32
   9. Requirements for Signaling, Routing and Discovery    33
    9.1 Requirements for information sharing over UNI,
        I-NNI and E-NNI                                    33
    9.2 Signaling Functions                                33
    9.3 Routing Functions                                  34
    9.4 Requirements for path selection                    35
    9.5 Automatic Discovery Functions                      36
   10. Requirements for service and control plane
       resiliency                                          37
    10.1 Service resiliency                                38
    10.2 Control plane resiliency                          40
   11. Security Considerations                             41
    11.1 Optical Network Security Concerns                 41
    11.2 Service Access Control                            42
   12. Acknowledgements                                    43
   13. References                                          43
   Authors' Addresses                                      45
   Appendix: Interconnection of Control Planes             47

1. Introduction

   Optical transport networks are evolving from the current TDM-based
   SONET/SDH optical networks as defined by ITU Rec. G.803 [ITU-G803]
   to the emerging WDM-based optical transport networks (OTN)
   consisting of optical cross-connects (OXC), DWDM optical line
   systems (OLS) and optical add-drop multiplexers (OADM) based on the
   architecture defined by ITU Rec. G.872 [ITU-G872]. Therefore, in
   the near future, carrier optical transport networks will consist of
   a mixture of SONET/SDH-based sub-networks and WDM-based wavelength
   or fiber switched OTN sub-networks. The OTN networks can be either
   transparent or opaque depending upon whether O-E-O functions are
   utilized within the sub-networks. Optical networking encompasses
   the functionalities for the establishment, transmission,
   multiplexing and switching of optical connections carrying a wide
   range of user signals of varying formats and bit rates.

   Some of the biggest challenges for the carriers are bandwidth
   management and fast service provisioning in such a multi-technology
   networking environment. The emerging and rapidly evolving automatic
   switched optical network (ASON) technology [ITU-G8080, ITU-G807] is
   aimed at providing optical networks with intelligent networking
   functions and capabilities in its control plane to enable rapid
   optical connection provisioning and dynamic rerouting, as well as
   multiplexing and switching at different granularity levels,
   including fiber, wavelength and TDM time slots. The ASON control
   plane should not only enable new networking functions and
   capabilities for the emerging OTN networks, but significantly
   enhance the service provisioning capabilities of the existing
   SONET/SDH optical transport networks as well.

   The ultimate goals should be to allow the carriers to quickly and
   dynamically provision network resources and to enhance network
   survivability using ring and mesh-based protection and restoration
   techniques. The carriers see that this new networking platform will
   create tremendous business opportunities for the network operators
   and service providers to offer new services to the market, reduce
   their network Capital and Operational expenses (CAPEX and OPEX), and
   improve their network efficiency.

1.1. Justification

   The charter of the IPO WG calls for a document on "Carrier Optical
   Services Requirements" for IP/Optical networks. This document
   addresses that aspect of the IPO WG charter. Furthermore, this
   document was accepted as an IPO WG document by unanimous agreement at
   the IPO WG meeting held on March 19, 2001, in Minneapolis, MN, USA.
   It presents a carrier and end-user perspective on optical network
   services and requirements.

1.2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

1.3. Value Statement

   By deploying ASON technology, a carrier expects to achieve the
   following benefits from both technical and business perspectives:

   - Rapid Circuit Provisioning: ASON technology will enable the dynamic
   end-to-end provisioning of the optical connections across the optical
   network by using standard routing and signaling protocols.

   - Enhanced Survivability: ASON technology will enable the network to
   dynamically reroute an optical connection in case of a failure using
   mesh-based network protection and restoration techniques, which
   greatly improves the cost-effectiveness compared to the current line
   and ring protection schemes in the SONET/SDH network.

   - Cost-Reduction: ASON networks will enable the carrier to better
   utilize the optical network, thus achieving significant unit cost
   reduction per Megabit due to the cost-effective nature of the optical
   transmission technology, simplified network architecture and reduced
   operation cost.

   - Service Flexibility: ASON technology will support provisioning of
   an assortment of existing and new services such as protocol and bit-
   rate independent transparent network services, and bandwidth-on-
   demand services.

   - Enhanced Interoperability: ASON technology will use a control plane
   utilizing industry and international standards architecture and
   protocols, which facilitate the interoperability of the optical
   network equipment from different vendors.

   In addition, the introduction of a standards-based control plane
   offers the following potential benefits:

   - Reactive traffic engineering at the optical layer, which allows
   network resources to be dynamically allocated to traffic flows.

   - Reduced need for service providers to develop new operational
   support system software for network control and new service
   provisioning on the optical network, thus speeding up the
   deployment of the optical network technology and reducing the
   software development and maintenance costs.

   - Potential development of a unified control plane that can be used
   for different transport technologies including OTN, SONET/SDH, ATM
   and PDH.

1.4.  Scope of This Document

   This document is intended to provide, from the carriers'
   perspective, a service framework and some associated requirements
   in relation to the optical services to be offered in the next
   generation optical transport networking environment and their
   service control and management functions. As such, this document
   concentrates on the requirements driving the work towards
   realization of the automatic switched optical networks. This
   document is intended to be protocol-neutral, but the specific goals
   include providing the requirements to guide the control protocol
   development and enhancement within IETF in terms of reuse of
   IP-centric control protocols in the optical transport network.

   Every carrier's needs are different. The objective of this document
   is NOT to define specific service models. Instead, some major
   service building blocks are identified that will enable the
   carriers to create the service platform most suitable to their
   business model. These building blocks include generic service
   types, service enabling control mechanisms, and service control and
   management functions.

   The fundamental principles and basic set of requirements for the
   control plane of the automatic switched optical networks have been
   provided in a series of ITU Recommendations under the umbrella of
   the ITU ASTN/ASON architectural and functional requirements as
   listed below:

   Architecture:

    - ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic
   Switched Transport Network (ASTN)[ASTN]

    - ITU-T Rec. G.8080/Y.1304  (2001), Architecture of the Automatic
   Switched Optical Network (ASON)[ASON]

   Signaling:

    - ITU-T Rec.  G.7713/Y.1704 (2001), Distributed Call and Connection
   Management (DCM)[DCM]

   Routing:

    - ITU-T Draft Rec. G.7715/Y.1706 (2002), Routing Architecture and
   requirements for ASON Networks (work in progress)[ASONROUTING]

   Discovery:

    - ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery
    [DISC]

   Control Transport Network:

   - ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification
   of Data Communication Network [DCN]

   This document provides further detailed requirements based on this
   ASTN/ASON framework. In addition, even though we consider IP a
   major client to the optical network in this document, the same
   requirements and principles should be equally applicable to non-IP
   clients such as SONET/SDH, ATM, ITU G.709, etc.

2.  Abbreviations

          ASON    Automatic Switched Optical Network
          ASTN    Automatic Switched Transport Network
          CAC     Connection Admission Control
          E-NNI   Exterior NNI
          E-UNI   Exterior UNI
          I-NNI   Interior NNI
          I-UNI   Interior UNI
          IWF     Inter-Working Function
          NE      Network Element
          NNI     Node-to-Node Interface
          OLS     Optical Line System
          OTN     Optical Transport Network
          PI      Physical Interface
          SLA     Service Level Agreement
          UNI     User-to-Network Interface

3. General Requirements

   In this section, a number of generic requirements related to the
   service control and management functions are discussed.

3.1. Separation of Networking Functions

   It makes logical sense to segregate the networking functions within
   each layer network into three logical functional planes: control
   plane, data plane and management plane. They are responsible for
   providing network control functions, data transmission functions and
   network management functions respectively. The crux of the ASON
   network is the networking intelligence that contains automatic
   routing, signaling and discovery functions to automate the network
   control functions.

   Control Plane: includes the functions related to networking control
   capabilities such as routing, signaling, and policy control, as well
   as resource and service discovery. These functions are automated.

   Data Plane (transport plane): includes the functions related to
   bearer channels and signal transmission.

   Management Plane: includes the functions related to the management
   functions of network element, networks and network resources and
   services. These functions are less automated as compared to control
   plane functions.

   Each plane consists of a set of interconnected functional or control
   entities, physical or logical, responsible for providing the
   networking or control functions defined for that network layer.

   The separation of the control plane from both the data and management
   plane is beneficial to the carriers in that it:

   - Allows equipment vendors to have a modular system design that will
   be more reliable and maintainable thus reducing the overall systems
   ownership and operation cost.

   - Allows carriers to have the flexibility to choose a third party
   vendor control plane software systems as its control plane solution
   for its switched optical network.

   - Allows carriers to deploy a unified control plane and
   OSS/management systems to manage and control the different types of
   transport networks it owns.

   - Allows carriers to use a separate control network specially
   designed and engineered for the control plane communications.

   The separation of control, management and transport functions is
   required and shall accommodate both logical and physical
   separation.

   Note that this is in contrast to the IP network, where the control
   messages and user traffic are routed and switched based on the same
   network topology due to the associated in-band signaling nature of
   the IP network.

3.2.  Separation of Call and Connection Control

   To support many enhanced optical services, such as scheduled
   bandwidth on demand and bundled connections, a call model based on
   the separation of call control and connection control is essential.

   The call control is responsible for the end-to-end session
   negotiation, call admission control and call state maintenance,
   while connection control is responsible for setting up the
   connections associated with a call across the network. A call can
   correspond to zero, one or more connections depending upon the
   number of connections needed to support the call.

   The existence of a connection depends upon the existence of its
   associated call session; a connection can be deleted and
   re-established while still keeping the call session up.

   The call control shall be provided at an ingress port or gateway port
   to the network such as UNI and E-NNI.

   The control plane shall support the separation of the call control
   from the connection control.

   The control plane shall support call admission control on call setup
   and connection admission control on connection setup.
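
   As a non-normative illustration of the call model above, the
   following sketch shows a call owning zero or more connections that
   can be torn down and re-established while the call session stays
   up. All class and variable names are hypothetical.

```python
# Illustrative sketch only: hypothetical classes showing the
# call/connection separation required above; not a protocol definition.

class Connection:
    """A single network connection set up in support of a call."""
    def __init__(self, conn_id, path):
        self.conn_id = conn_id
        self.path = path        # nodes the connection traverses
        self.active = True

    def delete(self):
        self.active = False


class Call:
    """End-to-end call session; owns zero or more connections."""
    def __init__(self, call_id, src, dst):
        self.call_id = call_id  # call state kept apart from connections
        self.src, self.dst = src, dst
        self.connections = []   # a call may have 0, 1 or more connections

    def add_connection(self, conn):
        self.connections.append(conn)
        return conn

    def reroute(self, old_conn, new_path):
        # Connection control: tear down and re-establish a connection
        # while the call session itself stays up.
        old_conn.delete()
        return self.add_connection(Connection(old_conn.conn_id + 1, new_path))


call = Call(1, "client-A", "client-B")
c1 = call.add_connection(Connection(10, ["A", "X", "B"]))
c2 = call.reroute(c1, ["A", "Y", "B"])  # the call survives the reroute
```

   In this sketch, reroute() exercises connection control only; the
   Call object, i.e. the call state, is never deleted.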

3.3.  Network and Service Scalability

   Although some specific applications or networks may be on a small
   scale, the control plane protocol and functional capabilities shall
   support large-scale networks.

   In terms of the scale and complexity of the future optical network,
   the following assumption can be made when considering the scalability
   and performance that are required of the optical control and
   management functions.

   - There may be up to thousands of OXC nodes and the same or higher
   order of magnitude of OADMs per carrier network.

   - There may be up to thousands of terminating ports/wavelength per
   OXC node.

   - There may be up to hundreds of parallel fibers between a pair of
   OXC nodes.

   - There may be up to hundreds of wavelength channels transmitted on
   each fiber.

   In relation to the frequency and duration of the optical connections:

   - The expected end-to-end connection setup/teardown time should be
   on the order of seconds, preferably less.

   - The expected connection holding times should be in the order of
   minutes or greater.

   - The expected number of connection attempts at UNI should be in the
   order of 100's.

   - There may be up to millions of simultaneous optical connections
   switched across a single carrier network.
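
   As a rough, non-normative sanity check, the upper-bound figures
   above can be combined as follows (illustrative arithmetic only):

```python
# Back-of-envelope check using the upper bounds quoted above
# (illustrative figures, not measurements).
oxc_nodes = 1_000        # "up to thousands of OXC nodes"
ports_per_oxc = 1_000    # "up to thousands of terminating ports per OXC"

total_ports = oxc_nodes * ports_per_oxc   # 1,000,000 edge/trunk ports
# A bidirectional connection terminates on two ports, so the port
# count alone supports hundreds of thousands of terminated
# connections; counting transit connections crossing the network, the
# simultaneous total reaches into the millions, as stated above.
max_terminated_connections = total_ports // 2
```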

   Note that even though automated rapid optical connection
   provisioning is required, the carriers expect the majority of
   provisioned circuits, at least in the short term, to have a long
   lifespan ranging from months to years.

   In terms of service provisioning, some carriers may choose to
   perform testing prior to turning the service over to the customer.

3.4. Transport Network Technology

   Optical services can be offered over different types of underlying
   optical transport technologies including both TDM-based SONET/SDH
   network and WDM-based OTN networks.

   For this document, standards-based transport technologies SONET/SDH
   as defined in the ITU Rec. G.803 and OTN implementation framing as
   defined in ITU Rec. G.709 shall be supported.

   Note that the service characteristics such as bandwidth granularity
   and signaling framing hierarchy to a large degree will be determined
   by the capabilities and constraints of the server layer network.


3.5.  Service Building Blocks

   The primary goal of this document is to identify a set of basic
   service building blocks that the carriers can mix and match to
   create the service models best suited to their business needs.

   The service building blocks are comprised of a well-defined set of
   service capabilities and a basic set of service control and
   management functions. These capabilities and functions should
   support a basic set of services and additionally enable a carrier
   to build enhanced services through extensions and customizations.
   Examples of the building blocks include the connection types,
   provisioning methods, control interfaces, policy control functions,
   and domain internetworking mechanisms.

4.  Service Models and Applications

   A carrier's optical network supports multiple types of service
   models. Each service model may have its own service operations,
   target markets, and service management requirements.

4.1.  Service and Connection Types

   The optical network is primarily offering high bandwidth connectivity
   in the form of connections, where a connection is defined to be a
   fixed bandwidth connection between two client network elements, such
   as IP routers or ATM switches, established across the optical
   network. A connection is also defined by its demarcation from ingress
   access point, across the optical network, to egress access point of
   the optical network.

   The following connection topologies must be supported:

   - Bi-directional point-to-point connection

   - Uni-directional point-to-point connection

   - Uni-directional point-to-multipoint connection

   For point-to-point connection, the following three types of network
   connections based on different connection set-up control methods
   shall be supported:

   - Permanent connection (PC): Established hop-by-hop directly on
   each NE along a specified path without relying on the network
   routing and signaling capability. The connection has two fixed
   end-points and a fixed cross-connect configuration along the path,
   and stays in place permanently until it is deleted. This is similar
   to the concept of a PVC in ATM.

   - Switched connection (SC): Established through UNI signaling
   interface and the connection is dynamically established by network
   using the network routing and signaling functions. This is similar to
   the concept of SVC in ATM.

   - Soft permanent connection (SPC): Established by provisioning two
   PCs at the end-points and letting the network dynamically establish
   an SC connection in between. This is similar to the SPVC concept in
   ATM.

   The PC and SPC connections should be provisioned via the management
   plane to control plane interface, and the SC connection should be
   provisioned via the signaled UNI interface.
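
   For illustration only, the three connection set-up control methods
   and their provisioning interfaces might be modeled as follows; the
   names and mapping below are a hypothetical sketch, not normative.

```python
from enum import Enum

# Hypothetical sketch of the PC/SPC/SC connection types described
# above; names are illustrative only.

class ConnectionType(Enum):
    PC = "permanent"        # provisioned hop-by-hop, no network signaling
    SPC = "soft-permanent"  # provisioned PC end segments, signaled middle
    SC = "switched"         # signaled dynamically by the client via UNI

# Which interface initiates provisioning for each type, per the text.
PROVISIONING_INTERFACE = {
    ConnectionType.PC: "management plane",
    ConnectionType.SPC: "management plane",
    ConnectionType.SC: "signaled UNI",
}

def uses_network_signaling(conn_type):
    """SC and SPC rely on the network routing/signaling; PC does not."""
    return conn_type in (ConnectionType.SC, ConnectionType.SPC)
```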

4.2.  Examples of Common Service Models

   Each carrier may define its own service model based on its business
   strategy and environment. The following are three example service
   models that carriers may use.

4.2.1.  Provisioned Bandwidth Service (PBS)

   The PBS model provides enhanced leased/private line services
   provisioned via a service management interface (MI) using either
   the PC or SPC type of connection. The provisioning can be real-time
   or near real-time. It has the following characteristics:

   - Connection request goes through a well-defined management interface

   - Client/Server relationship between clients and optical network.

   - Clients have no optical network visibility and depend on network
   intelligence or operator for optical connection setup.

4.2.2.  Bandwidth on Demand Service (BDS)

   The BDS model provides bandwidth-on-demand dynamic connection
   services via a signaled user-network interface (UNI). The
   provisioning is real-time and uses the SC type of optical
   connection. It has the following characteristics:

   - Signaled connection request via UNI directly from the user or its
   proxy.

   - Customer has no or limited network visibility depending upon the
   control interconnection model used and network administrative policy.

   - Relies on network or client intelligence for connection set-up
   depending upon the control plane interconnection model used.

4.2.3.  Optical Virtual Private Network (OVPN)

   The OVPN model provides a virtual private network at the optical
   layer between a specified set of user sites. It has the following
   characteristics:

   - Customers contract for a specific set of network resources such
   as optical connection ports, wavelengths, etc.

   - The Closed User Group (CUG) concept is supported as in a normal
   VPN.

   - Optical connection can be of PC, SPC or SC type depending upon the
   provisioning method used.

   - An OVPN site can request dynamic reconfiguration of the connections
   between sites within the same CUG.

   - A customer may have limited or full visibility and control of the
   contracted network resources, up to the extent allowed by the
   customer service contract.

   At a minimum, the PBS, BDS and OVPN service models described above
   shall be supported by the control functions.
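
   As a non-normative summary, the mapping between the three example
   service models above and the connection types of Section 4.1 could
   be captured as follows; the structure and field names are a
   hypothetical illustration.

```python
# Hypothetical summary of the example service models described above,
# mapping each to its invocation interface and connection types.
SERVICE_MODELS = {
    "PBS": {  # Provisioned Bandwidth Service
        "invocation": "management interface",
        "connection_types": {"PC", "SPC"},
        "client_visibility": "none",
    },
    "BDS": {  # Bandwidth on Demand Service
        "invocation": "signaled UNI",
        "connection_types": {"SC"},
        "client_visibility": "none or limited",
    },
    "OVPN": {  # Optical Virtual Private Network
        "invocation": "management interface or UNI",
        "connection_types": {"PC", "SPC", "SC"},
        "client_visibility": "limited or full, per contract",
    },
}

def models_supporting(conn_type):
    """Return the example service models usable with a connection type."""
    return sorted(m for m, props in SERVICE_MODELS.items()
                  if conn_type in props["connection_types"])
```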

5.  Network Reference Model

   This section discusses major architectural and functional components
   of a generic carrier optical network, which will provide a reference
   model for describing the requirements for the control and management
   of carrier optical services.

5.1.  Optical Networks and Subnetworks

   As mentioned before, there are two main types of optical networks
   that are currently under consideration: SDH/SONET network as defined
   in ITU Rec. G.803, and OTN as defined in ITU Rec. G.872.

   We assume an OTN is composed of a set of optical cross-connects (OXC)
   and optical add-drop multiplexer (OADM) which are interconnected in a
   general mesh topology using DWDM optical line systems (OLS).

   For ease of discussion and description, it is often convenient to
   treat an optical network as a subnetwork cloud, in which the
   details of the network become less important; the focus is instead
   on the functions and the interfaces that the optical network
   provides. In general, a subnetwork can be defined as a set of
   access points on the network boundary and a set of point-to-point
   optical connections between those access points.

5.2.  Network Interfaces

   A generic carrier network reference model describes a multi-carrier
   network environment. Each individual carrier network can be further
   partitioned into domains or sub-networks for administrative,
   technological or architectural reasons. The demarcation between
   (sub)networks can be either logical or physical and consists of a
   set of reference points identifiable in the optical network. From
   the control plane perspective, these reference points define a set
   of control interfaces in terms of optical control and management
   functionality, as illustrated in Figure 5.1.

                        +---------------------------------------+
                        |         single carrier network        |
     +--------------+   |                                       |
     |              |   | +------------+        +------------+  |
     |   IP         |   | |  Optical   |        |            |  |
     |   Network    +-E-UNI+ Subnetwork +-I-UNI-+ Carrier IP |  |
     |              |   | | (Domain A) |        |  network   |  |
     +--------------+   | +------+-----+        +------------+  |
                        |        |                              |
                        |      I-NNI                            |
     +--------------+   |        |                              |
     |              |   | +------+-----+        +------------+  |
     |   IP         |   | |  Optical   |        |  Optical   |  |
     |   Network    +-E-UNI+ Subnetwork +-E-NNI-+ Subnetwork |  |
     |              |   | | (Domain A) |        | (Domain B) |  |
     +--------------+   | +------+-----+        +------+-----+  |
                        |        |                     |        |
                        +--------+---------------------+--------+
                                 |                     |
                               E-UNI                 E-NNI
                                 |                     |
                          +------+-------+     +------+---------+
                          |              |     |                |
                          | Other Client |     |  Other Carrier |
                          |   Network    |     |    Network     |
                          | (ATM/SONET)  |     |                |
                          +--------------+     +----------------+

              Figure 5.1 Generic Carrier Network Reference Model

   The network interfaces encompass two aspects of the networking
   functions: the user data plane interface and the control plane
   interface. The former concerns user data transmission across the
   physical network interface, while the latter concerns control
   message exchange across the network interface, such as signaling
   and routing. We call the former the physical interface (PI) and
   the latter the control plane interface. Unless otherwise stated,
   the control interface is assumed in the remainder of this
   document.

5.2.1.  Control Plane Interfaces

   A control interface defines a relationship between two connected
   network entities on both sides of the interface. For each control
   interface, we need to define the architectural function each side
   plays and a controlled set of information that can be exchanged
   across the interface. The information flowing over this logical
   interface may include, but is not limited to:

   - Endpoint name and address

   - Reachability/summarized network address information

   - Topology/routing information

   - Authentication and connection admission control information

   - Connection management signaling messages

   - Network resource control information

   Different types of interfaces can be defined for network control
   and architectural purposes, and can be used as network reference
   points in the control plane. In this document, the following set
   of interfaces is defined, as shown in Figure 5.1:

   The User-Network Interface (UNI) is a bi-directional signaling
   interface between service requester and service provider control
   entities. We further differentiate between the interior UNI
   (I-UNI) and the exterior UNI (E-UNI) as follows:

   - E-UNI: A UNI interface for which the service requester control
   entity resides outside the carrier network control domain.

   - I-UNI: A UNI interface for which the service requester control
   entity resides within the carrier network control domain.

   The reason for doing so is that we can differentiate a class of
   UNI where there is a trust relationship between the client
   equipment and the optical network. This private type of UNI may
   have functionality similar to the NNI, in that it may allow
   controlled routing information to cross the UNI. Specifics of the
   I-UNI are currently under study.

   The Network-Network Interface  (NNI) is a bi-directional signaling
   interface between two optical network elements or sub-networks.

   We differentiate between interior (I-NNI) and exterior (E-NNI) NNI as
   follows:

   - E-NNI: A NNI interface between two control plane entities belonging
   to different control domains.

   - I-NNI: A NNI interface between two control plane entities within
   the same control domain in the carrier network.

   It should be noted that it is quite common to use an E-NNI between
   two sub-networks within the same carrier network if they belong to
   different control domains. The different types of interface,
   interior vs. exterior, imply different trust relationships for
   security and access control purposes. A trust relationship is not
   binary; instead, a policy-based control mechanism needs to be in
   place to restrict the type and amount of information that can flow
   across each type of interface, depending on the carrier's service
   and business requirements. Generally, two networks have a trust
   relationship if they belong to the same administrative domain.

   An example of an interior interface is an I-NNI between two
   optical network elements in a single control domain, or an I-UNI
   interface between the optical transport network and an IP client
   network owned by the same carrier. Exterior interface examples
   include an E-NNI between two different carriers, or an E-UNI
   interface between a carrier optical network and its customers.

   The control plane shall support the UNI and NNI interfaces
   described above. The interfaces shall be configurable in terms of
   the type and amount of control information exchanged, and their
   behavior shall be consistent with that configuration (i.e.,
   exterior versus interior interfaces).
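   As an illustration of the configurable interface behavior above,
   the sketch below shows one way a policy table could restrict which
   categories of control information may flow across each interface
   type. This is not part of the requirements; the permitted-
   information sets are invented examples of a carrier policy.

```python
# Hypothetical policy table: which information categories may cross
# each control interface type of Figure 5.1.  The assignments below
# are illustrative carrier policy, not values mandated by this draft.

SIGNALING = "connection management signaling"
REACHABILITY = "reachability/summarized addresses"
TOPOLOGY = "topology/routing information"

POLICY = {
    # interior interfaces are trusted and may carry topology
    "I-NNI": {SIGNALING, REACHABILITY, TOPOLOGY},
    "I-UNI": {SIGNALING, REACHABILITY, TOPOLOGY},
    # exterior interfaces never carry topology (Section 5.4.2)
    "E-NNI": {SIGNALING, REACHABILITY},
    "E-UNI": {SIGNALING},
}

def may_flow(interface: str, info: str) -> bool:
    """Return True if policy allows 'info' across 'interface'."""
    return info in POLICY.get(interface, set())

assert may_flow("I-NNI", TOPOLOGY)
assert not may_flow("E-NNI", TOPOLOGY)
```

   A real control plane would consult such a policy check before
   advertising or forwarding any control message across a reference
   point.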

5.3. Intra-Carrier Network Model

   The intra-carrier network model concerns the network service
   control and management issues within networks owned by a single
   carrier.

5.3.1. Multiple Sub-networks

   Without loss of generality, the optical network owned by a carrier
   service operator can be depicted as consisting of one or more
   optical sub-networks interconnected by direct optical links. There
   may be many different reasons for having more than one optical
   sub-network: hierarchical layering; different technologies across
   the access, metro and long-haul networks (as discussed below);
   business mergers and acquisitions; or incremental optical network
   technology deployment by the carrier using different vendors or
   technologies.

   A sub-network may be a single vendor and single technology network.
   But in general, the carrier's optical network is heterogeneous in
   terms of equipment vendor and the technology utilized in each sub-
   network.

5.3.2.  Access, Metro and Long-haul networks

   Few carriers have end-to-end ownership of the optical networks.
   Even if they do, the access, metro and long-haul networks often
   belong to different administrative divisions as separate optical
   sub-networks. Therefore, inter-(sub)network interconnection is
   essential for supporting end-to-end optical service provisioning
   and management. The access, metro and long-haul networks may use
   different technologies and architectures, and as such may have
   different network properties.

   In general, end-to-end optical connectivity may easily cross
   multiple sub-networks, with the following possible scenarios:

   Access -- Metro -- Access
   Access -- Metro -- Long Haul -- Metro -- Access

5.3.3.  Implied Control Constraints

   The carrier's optical network is in general treated as a trusted
   domain, which is defined as a network under a single technical
   administration with implied trust relationship. Within a trusted
   domain, all the optical network elements and sub-networks are
   considered to be secure and trusted by each other at a defined level.
   In the intra-carrier model interior interfaces (I-NNI and I-UNI) are
   generally assumed.

   One business application for the interior UNI is the case where a
   carrier service operator offers data services such as IP, ATM and
   Frame Relay over its optical core network. Data services network
   elements such as routers and ATM switches are considered to be
   internal optical service client devices. The topology information for
   the carrier optical network may be shared with the internal client
   data networks.

5.4.  Inter-Carrier Network Model

   The inter-carrier model focuses on the service and control aspects
   between different carrier networks and describes the internetworking
   relationship between them.

5.4.1.  Carrier Network Interconnection

   Inter-carrier interconnection provides for connectivity between
   optical network operators. To provide global reach end-to-end
   optical services, optical service control and management between
   different carrier networks becomes essential. The connectivity
   between the carriers may include:

   Private Peering: Two carriers set up a dedicated connection
   between them via a private arrangement.

   Public Peering: Two carriers set up a point-to-point connection
   between them at a public optical network access point (ONAP).

   It is possible to support distributed peering within the IP client
   layer network, where the connection between two distant IP routers
   is achieved via an optical transport network.

5.4.2. Implied Control Constraints

   In the inter-carrier network model, each carrier's optical network
   is a separate administrative domain. Both the UNI interface
   between the user and the carrier network and the NNI interface
   between two carriers' networks cross a carrier's administrative
   boundary, and are therefore by definition exterior interfaces.

   In terms of control information exchange, topology information
   shall not be allowed to cross either the E-NNI or the E-UNI
   interfaces.

6.  Optical Service User Requirements

   This section describes the user requirements for optical services,
   which in turn impose the requirements on service control and
   management for the network operators. The user requirements reflect
   the perception of the optical service from a user's point of view.

6.1.  Common Optical Services

   The basic unit of an optical transport service is fixed-bandwidth
   optical connectivity between connected parties. However, different
   services are created based on the supported signal characteristics
   (format, bit rate, etc.), the service invocation methods, and
   possibly the associated Service Level Agreement (SLA) provided by
   the service provider.

   At present, the following are the major optical services provided in
   the industry:

   - SONET/SDH, with different degrees of transparency

   - Optical wavelength services: opaque or transparent services

   - Ethernet at 1 Gbps and 10 Gbps

   - Storage Area Networks (SANs) based on FICON, ESCON and Fiber
   Channel

   The services mentioned above shall be provided by the optical
   transport layer of the network and shall be provisioned using the
   same management, control and data planes.

   Opaque Optical Wavelength Service refers to transport services
   where signal framing is negotiated between the client and the
   network operator (framing and bit-rate dependent), and only the
   payload is carried transparently. SONET/SDH transport is most
   widely used for network-wide transmission.

   Transparent Service assumes protocol and rate independence.
   However, since any optical connection is associated with a signal
   bandwidth, for transparent optical services, knowledge of the
   maximum bandwidth is required.

   Ethernet services, specifically 1 Gb/s and 10 Gb/s Ethernet
   services, are gaining popularity due to the lower costs of the
   customer premises equipment and their simplified management
   requirements (compared to SONET or SDH).

   Ethernet services may be carried over either SONET/SDH (GFP
   mapping) or WDM networks. Ethernet service requests will require
   some service-specific parameters: priority class, VLAN Id/Tag, and
   traffic aggregation parameters.

   Storage Area Network (SAN) Services. ESCON and FICON are proprietary
   versions of the service, while Fiber Channel is the standard
   alternative. As is the case with Ethernet services, SAN services may
   be carried over either SONET/SDH (using GFP mapping) or WDM networks.

   Currently SAN services require only point-to-point connections, but
   it is envisioned that in the future they may also require multicast
   connections.

   The control plane shall provide the carrier with the capability to
   provision, control and manage all the services listed above.

6.2.  Bearer Interface Types

   All the bearer interfaces implemented in the ONE shall be supported
   by the control plane and associated signaling protocols.

   The following interface types shall be supported by the signaling
   protocol:
   - SDH/SONET
   - 1 Gb Ethernet, 10 Gb Ethernet (WAN mode)
   - 10 Gb Ethernet (LAN mode)
   - FC-N (N= 12, 50, 100, or 200) for Fiber Channel services
   - OTN (G.709)
   - PDH

6.3.  Optical Service Invocation

   As mentioned earlier, the methods of service invocation play an
   important role in defining different services.

6.3.1. Provider-Controlled Service Provisioning

   In this scenario, users forward their service request to the provider
   via a well-defined service management interface. All connection
   management operations, including set-up, release, query, or
   modification shall be invoked from the management plane.

6.3.2. User-Controlled Service Provisioning

   In this scenario, users forward their service request to the provider
   via a well-defined UNI interface in the control plane (including
   proxy signaling). All connection management operation requests,
   including set-up, release, query, or modification shall be invoked
   from directly connected user devices, or its signaling representative
   (such as a signaling proxy).

6.3.3. Call Set-up Requirements

   In summary the following requirements for the control plane have been
   identified:

   - The control plane shall support result codes as responses to any
   requests over the control interfaces.

   - The control plane shall support requests for call set-up,
   subject to policies in effect between the user and the network.

   - The control plane shall support the destination client device's
   decision to accept or reject call set-up requests from the source
   client device.

   - The control plane shall support requests for call set-up and
   deletion across multiple (sub)networks.

   - NNI signaling shall support requests for call set-up, subject to
   policies in effect between the (sub)networks.

   - Call set-up shall be supported for both uni-directional and
   bi-directional connections.

   - Upon call request initiation, the control plane shall generate a
   network-unique Call-ID associated with the connection, to be used
   for information retrieval or other activities related to that
   connection.
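   One way to satisfy the Call-ID requirement above, shown purely as
   an illustration, is to combine a node identifier with a local
   sequence number; the actual Call-ID format is not specified by
   this document.

```python
# Hypothetical Call-ID generator: node identifier plus a local
# monotonically increasing sequence number.  The format is an
# assumption of this sketch, not mandated by the draft.
import itertools

class CallIdGenerator:
    def __init__(self, node_id: str):
        self.node_id = node_id          # unique per control plane node
        self._seq = itertools.count(1)  # local sequence counter

    def next_call_id(self) -> str:
        return f"{self.node_id}-{next(self._seq):08d}"

gen = CallIdGenerator("oxc17")          # "oxc17" is an invented name
assert gen.next_call_id() == "oxc17-00000001"
assert gen.next_call_id() == "oxc17-00000002"
```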

   - CAC shall be provided as part of the call control functionality.
   It is the role of the CAC function to determine if there are
   sufficient free resources available downstream to allow a new
   connection.

   - When a connection request is received across the NNI, it is
   necessary to ensure that the resources exist within the downstream
   subnetwork to establish the connection.
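   The CAC behavior described in the two requirements above can be
   sketched as follows; the path and per-link bandwidth model is a
   simplifying assumption of this example, not a mandated data model.

```python
# Simplified CAC sketch: admit a call only if every downstream link
# on the selected path has enough free bandwidth; reserve on admit.

def cac_admit(path, free_bandwidth, requested_bw):
    """Return (True, None) if the call may proceed, otherwise
    (False, link) naming the first link lacking free resources."""
    for link in path:
        if free_bandwidth.get(link, 0.0) < requested_bw:
            return (False, link)   # deny: notify upstream originator
    for link in path:
        free_bandwidth[link] -= requested_bw   # reserve resources
    return (True, None)

free = {("A", "B"): 10.0, ("B", "C"): 2.5}    # free Gb/s per link
ok, _ = cac_admit([("A", "B"), ("B", "C")], free, 2.5)
assert ok and free[("B", "C")] == 0.0
ok, blocked = cac_admit([("A", "B"), ("B", "C")], free, 1.0)
assert not ok and blocked == ("B", "C")
```

   On denial, the returned link identifies where resources were
   insufficient, so the appropriate notification can be sent upstream
   towards the originator.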

   - If sufficient resources are available, the CAC may allow the
   call to proceed, based on resource availability and
   authentication.

   - If sufficient resources are not available, the CAC shall send an
   appropriate notification upstream towards the originator that the
   request has been denied.

   - Negotiation of multiple service level options during call set-up
   shall be supported.

   - The policy management system must determine what kinds of calls
   can be set up.

   - The control plane elements need the ability to rate limit (or pace)
   call setup attempts into the network.

   - The control plane shall report to the management plane the
   success or failure of a call request.

   Upon a connection request failure:

   - The control plane shall report to the management plane a cause
   code identifying the reason for the failure.

   - A negative acknowledgment shall be returned to the source.

   - All allocated resources shall be released.

   Upon a connection request success:

   - A positive acknowledgment shall be returned to the source when
   the connection has been successfully established, to inform both
   source and destination clients of when they may start transmitting
   data.

   - The control plane shall support requests for call release by
   Call-ID.

   - The control plane shall allow any end point, or any intermediate
   node, to initiate the call release procedures.

   - Upon call release completion, all resources associated with the
   call shall become available for new requests.

   - The management plane shall be able to release calls or
   connections established by the control plane, both gracefully and
   forcibly, on demand.

   - Partially deleted calls or connections shall not remain within the
   network.

   - End-to-end acknowledgments shall be used for connection deletion
   requests.

   - Connection deletion shall not result in either restoration or
   protection being initiated.

   Connection deletion shall, at a minimum, use a two-pass signaling
   process, removing the cross-connection only after the first
   signaling pass has completed.

   - The control plane shall support management plane and neighboring
   device (client or intermediate node) requests for connection
   attributes or status query.

   - The control plane shall support result code responses to any
   requests over the control interfaces.

   The management plane shall be able to query on demand the status
   of a connection.

   The UNI shall support initial registration and updates of the
   UNI-C with the network via the control plane.

6.4.  Optical Connection Granularity

   The service granularity is determined by the specific technology,
   framing and bit rate of the physical interface between the ONE and
   the client at the edge, and by the capabilities of the ONE. The
   control plane needs to support signaling and routing for all the
   services supported by the ONE. In general, there should not be a
   one-to-one correspondence imposed between the granularity of the
   service provided and the maximum capacity of the interface to the
   user.

   The control plane shall support the ITU Rec. G.709 connection
   granularity for the OTN network.

   The control plane shall support the SDH/SONET connection
   granularity.

   Sub-rate interfaces shall be supported by the optical control
   plane, such as VT/TU granularity (as low as 1.5 Mb/s).

   In addition, 1 Gb and 10 Gb granularity shall be supported for
   1 Gb/s and 10 Gb/s (WAN mode) Ethernet framing types, if
   implemented in the hardware.

   The following fiber channel interfaces shall be supported by the
   control plane if the given interfaces are available on the
   equipment:

   - FC-12
   - FC-50
   - FC-100
   - FC-200

   Encoding of service types in the protocols used shall be such that
   new service types can be added by adding new code point values or
   objects.
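   The extensibility requirement above can be illustrated with a
   hypothetical code-point registry; the numeric values and names
   below are invented for this sketch.

```python
# Hypothetical service-type code-point registry: a new service type
# is introduced by registering a new, unused code point value,
# leaving existing assignments untouched.

SERVICE_TYPE_REGISTRY = {
    1: "SDH/SONET",
    2: "1 Gb Ethernet",
    3: "10 Gb Ethernet (WAN mode)",
    4: "OTN (G.709)",
}

def register_service_type(code_point: int, name: str) -> None:
    """Add a new service type under an unused code point value."""
    if code_point in SERVICE_TYPE_REGISTRY:
        raise ValueError(f"code point {code_point} already assigned")
    SERVICE_TYPE_REGISTRY[code_point] = name

register_service_type(5, "FC-100")   # new service, new code point
assert SERVICE_TYPE_REGISTRY[5] == "FC-100"
```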

6.5.  Other Service Parameters and Requirements

6.5.1.  Classes of Service

   We use "service level" to describe priority-related
   characteristics of connections, such as holding priority, set-up
   priority, or restoration priority. The intent currently is to
   allow each carrier to define the actual service level in terms of
   priority, protection, and restoration options. Therefore,
   individual carriers will determine the mapping of individual
   service levels to a specific set of quality features.

   The control plane shall be capable of mapping individual service
   classes into specific protection and/or restoration options.

6.5.2.  Diverse Routing Attributes

   The ability to route service paths diversely is a highly desirable
   feature. Diverse routing is one of the connection parameters and
   is specified at the time of connection creation. The following
   provides a basic set of requirements for diverse routing support.

   The control plane routing algorithms shall be able to route a
   single demand diversely from N previously routed demands in terms
   of link disjoint path, node disjoint path and SRLG disjoint path.
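   The requirement above can be sketched as a pruning step followed
   by a path search: remove every link that would violate link, node
   or SRLG disjointness with an already-routed demand, then search
   the residual topology. The topology and SRLG assignments in the
   example are invented for illustration.

```python
# Sketch of SRLG-aware diverse routing: prune conflicting links,
# then breadth-first search the remaining topology.
from collections import deque

def disjoint_route(links, srlg, used_path, src, dst):
    """links: iterable of (u, v) edges; srlg: (u, v) -> set of SRLG
    ids; used_path: list of (u, v) edges of the prior demand.
    Returns a link-, node- and SRLG-disjoint path or None."""
    used_links = {frozenset(l) for l in used_path}
    used_nodes = {n for l in used_path for n in l} - {src, dst}
    used_srlgs = set().union(*(srlg.get(l, set()) for l in used_path))
    adj = {}
    for u, v in links:
        if (frozenset((u, v)) in used_links
                or u in used_nodes or v in used_nodes
                or srlg.get((u, v), set()) & used_srlgs):
            continue                      # link conflicts: prune it
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    prev, queue, seen = {}, deque([src]), {src}
    while queue:                          # plain BFS on pruned graph
        u = queue.popleft()
        if u == dst:                      # rebuild path via prev map
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    return None

topology = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
risk = {("A", "B"): {17}}                 # link A-B is in SRLG 17
assert disjoint_route(topology, risk,
                      [("A", "B"), ("B", "D")], "A", "D") == ["A", "C", "D"]
```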

7.  Optical Service Provider Requirements

   This section discusses specific service control and management
   requirements from the service provider's point of view.

7.1.  Access Methods to Optical Networks

   Multiple access methods shall be a one-to-one correspondence imposed between supported:

   - Cross-office access (User NE co-located with ONE)

   - Direct remote access (Dedicated links to the
   granularity user)

   - Remote access via access sub-network (via a
   multiplexing/distribution sub-network)

   All of the above access methods must be supported.

7.2.  Dual Homing and the maximum capacity Network Interconnections

   Dual homing is a special case of the access network. Client
   devices can be dual homed to the same or different hubs, the same
   or different access networks, the same or different core networks,
   and the same or different carriers. The different levels of dual
   homing connectivity result in many different combinations of
   configurations. The main objective of dual homing is enhanced
   survivability.

   Dual homing must be supported. Dual homing shall not require the
   use of multiple addresses for the same client device.

7.3.  Inter-domain connectivity

   A domain is a portion of a network, or an entire network, that is
   controlled by a single control entity. This section discusses the
   various requirements for connecting domains.

7.3.1.  Multi-Level Hierarchy

   Traditionally, transport networks are divided into core inter-city
   long-haul networks, regional intra-city metro networks and access
   networks. Due to the differences in transmission technologies,
   service, and multiplexing needs, the three types of networks are
   served by different types of network elements and often have
   different capabilities. The diagram below shows an example three-
   level hierarchical network.

                          +--------------+
                          |  Core Long   |
               +----------+   Haul       +---------+
               |          | Subnetwork   |         |
               |          +--------------+         |
       +-------+------+                    +-------+------+
       |              |                    |              |
       |  Regional    |                    |  Regional    |
       |  Subnetwork  |                    |  Subnetwork  |
       +-------+------+                    +-------+------+
               |                                   |
       +-------+------+                    +-------+------+
       |              |                    |              |
       | Metro/Access |                    | Metro/Access |
       |  Subnetwork  |                    |  Subnetwork  |
       +--------------+                    +--------------+

                    Figure 2 Multi-level hierarchy example

   Signaling for multi-level hierarchies shall be supported to allow
   carriers to configure their networks as needed.

7.3.2.  Network Interconnections

   Subnetworks may have multiple points of inter-connection. All
   relevant NNI functions, such as routing, reachability information
   exchanges, and inter-connection topology discovery, must recognize
   and support multiple points of inter-connection between
   subnetworks. Dual inter-connection is often used as a survivable
   architecture.

   The control plane shall support routing and signaling for
   subnetworks having multiple points of interconnection.

7.4.  Names and Address Management

7.4.1.  Address Space Separation

   To ensure the scalability of, and smooth migration toward, the
   optical switched network, the separation of three address spaces
   is required:

   - Internal transport network addresses: This is used for routing
   control plane messages within the transport network.

   - Transport Network Assigned (TNA) address: This is a routable
   address in the transport network.

   - Client addresses: This address has significance in the client
   layer.

7.4.2.  Directory Services

   Directory Services shall support address resolution and
   translation between various user edge device names and the
   corresponding optical network addresses. The UNI shall use the
   user naming schemes for connection requests.

7.4.3.  Network element Identification

   Each control plane routing algorithms domain and each network element within it shall be
   uniquely identifiable.

7.5.  Policy-Based Service Management Framework

   The IPO service must be supported by a robust policy-based management
   system to be able to route a single
   demand diversely from N previously routed demands in terms make important decisions.

   Examples of link
   disjoint path, node disjoint path and SRLG disjoint path.
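   The disjointness tests implied by these requirements can be
   sketched as follows. This is a non-normative illustration; the SRLG
   identifiers and helper names are invented for the example.

```python
def links_of(path):
    """A path is a node sequence; its links are the adjacent node pairs."""
    return {frozenset(pair) for pair in zip(path, path[1:])}

def srlgs_of(path, srlg_of_link):
    """Union of the SRLGs of every link on the path."""
    groups = set()
    for link in links_of(path):
        groups |= srlg_of_link.get(link, set())
    return groups

def is_diverse(candidate, routed_demands, srlg_of_link, level="srlg"):
    """Check a candidate path against N previously routed demands.

    level: 'link' (no shared links), 'node' (additionally no shared
    transit nodes), or 'srlg' (additionally no shared risk link groups).
    """
    for demand in routed_demands:
        if links_of(candidate) & links_of(demand):
            return False              # a shared link fails every level
        if level in ("node", "srlg"):
            # transit nodes only: shared endpoints can be legitimate
            if set(candidate[1:-1]) & set(demand[1:-1]):
                return False
        if level == "srlg":
            if srlgs_of(candidate, srlg_of_link) & \
               srlgs_of(demand, srlg_of_link):
                return False
    return True
```

   Note how two paths with no common link or node can still fail the
   SRLG test when their links run through the same conduit.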

7.  Optical Service Provider Requirements

   This section discusses specific service control and management
   requirements from the service provider's point of view.

7.1.  Access Methods to Optical Networks

   Multiple access methods shall be supported:

   - Cross-office access (User NE co-located with ONE)

   In this scenario the user edge device resides in the same office as
   the ONE and has one or more physical connections to the ONE. Some of
   these connections may be in use, while others may be idle pending a
   new connection request.

   - Direct remote access

   In this scenario the user edge device is remotely located from the
   ONE and has inter-location connections to the ONE over multiple
   fiber pairs or via a DWDM system. Some of these connections may be
   in use, while others may be idle pending a new connection request.

   - Remote access via access sub-network

   In this scenario remote user edge devices are connected to the ONE
   via a multiplexing/distribution sub-network. Several levels of
   multiplexing may be assumed in this case. This scenario is
   applicable to metro/access subnetworks that aggregate signals from
   multiple users, of which only a subset have connectivity to the ONE.

   All of the above access methods must be supported.

7.2.  Dual Homing flexible, and Network Interconnections

   Dual homing is a special case of the access network. Client devices
   can be dual homed to the same or different hubs, the same or
   different access networks, the same or different core networks, or
   the same or different carriers. The different levels of dual homing
   connectivity result in many different combinations of
   configurations. The main objective of dual homing is enhanced
   survivability.

   The different configurations of dual homing will have great impact
   on admission control, reachability information exchanges,
   authentication, and neighbor and service discovery across the
   interface.

   Dual homing must be supported.

7.3.  Inter-domain connectivity

   A domain is a portion of a network, or an entire network, that is
   controlled by a single control plane entity.  This section discusses
   the various requirements for connecting domains.

7.3.1.  Multi-Level Hierarchy

   Traditionally, transport networks are divided into core inter-city
   long-haul networks, regional intra-city metro networks and access
   networks. Due to the differences in transmission technologies,
   service, and multiplexing needs, the three types of networks are
   served by different types of network elements and often have
   different capabilities. The diagram below shows an example three-
   level hierarchical network.

                              +--------------+
                              |  Core Long   |
               + -------------+   Haul       +-------------+
               |              | Subnetwork   |             |
               |              +-------+------+             |
       +-------+------+                            +-------+------+
       |              |                            |              |
       |  Regional    |                            |  Regional    |
       |  Subnetwork  |                            |  Subnetwork  |
       +-------+------+                            +-------+------+
               |                                           |
       +-------+------+                            +-------+------+
       |              |                            |              |
       | Metro/Access |                            | Metro/Access |
       |  Subnetwork  |                            |  Subnetwork  |
       +--------------+                            +--------------+

                    Figure 2 Multi-level hierarchy example

   Functionally, we can often see a clear split among the three types
   of networks. The core long-haul network deals primarily with
   facilities transport and switching; SONET signals at STS-1 and
   higher rates constitute the units of transport. Regional networks
   will be more closely tied to service support, and VT-level signals
   need to be switched as well: as an example of interaction, a device
   switching DS1 signals interfaces to other such devices over the
   long-haul network via STS-1 links. Regional networks will also groom
   traffic across the metro networks, which generally have direct
   interfaces to clients and support a highly varied mix of services.
   It should be noted that, although not shown in Figure 2,
   metro/access subnetworks may have interfaces to the core network,
   without having to go through a regional network.

   Specific requirements for multi-level hierarchies shall be supported
   to allow carriers to configure their networks as needed.

7.3.2.  Network Interconnections

   Subnetworks may have multiple points of inter-connection. All
   relevant NNI functions, such as routing, reachability information
   exchange, signaling and inter-connection topology discovery, must
   recognize and support multiple points of inter-connection between
   subnetworks. Dual inter-connection is often used as a survivable
   architecture.

   Such an inter-connection is a special case of a mesh network,
   especially if these subnetworks are connected via an I-NNI, i.e.,
   they are within the same administrative domain.  In this case the
   control plane requirements described in Section 8 will also apply
   to the inter-connected subnetworks, and are therefore not discussed
   here.

   However, there are additional requirements if the interconnection
   is across different domains, via an E-NNI.  These additional
   requirements include the communication of failure handling,
   routing, load sharing, etc., while adhering to pre-negotiated
   agreements on these functions across the boundary nodes of the
   multiple domains.  Subnetwork interconnection may alternatively be
   achieved via a separate subnetwork.  In this case, the above
   requirements stay the same, but need to be communicated over the
   interconnecting subnetwork, similar to the E-NNI scenario described
   above.

7.4.  Bearer Interface Types

   All the bearer interfaces implemented in the ONE shall be supported
   by the control interfaces and associated signaling protocols.  The
   following interface types shall be supported by the signaling
   protocol:

   - SDH
   - SONET
   - 1 Gb Ethernet, 10 Gb Ethernet (WAN mode)
   - 10 Gb Ethernet (LAN mode)
   - FC-N (N = 12, 50, 100, or 200) for Fiber Channel services
   - OTN (G.709)
   - PDH
   - Transparent optical

7.5.  Names and Address Management

7.5.1.  Address Space Separation

   To ensure the scalability of, and a smooth migration toward, the
   optical switched network, the separation of three address spaces is
   required:

   - Internal transport network addresses: This address space is used
   for routing control plane messages within the transport network.

   - Transport Network Assigned (TNA) addresses: These are routable
   addresses in the transport network.

   - Client addresses: These addresses have significance in the client
   layer.

7.5.2.  Directory Services

   Directory Services shall be supported to enable an operator to
   query the optical network for the optical network address of a
   specified user.  Address resolution and translation between the
   various user edge device names and the corresponding optical
   network addresses shall be supported.  The UNI shall use the user
   naming schemes for connection requests.

7.5.3.  Network Element Identification

   Each network element within a single control domain shall be
   uniquely identifiable. The identifiers may be re-used across
   multiple domains. However, unique identification of a network
   element becomes possible by associating its local identity with the
   global identity of its domain.

7.6.  Policy-Based Service Management Framework

   The IPO service must be supported by a robust policy-based
   management system to be able to make important decisions.

   Examples of policy decisions include:

   - What types of connections can be set up for a given UNI?

   - What information can be shared, and what information must be
   restricted, in automatic discovery functions?

   - What are the security policies over signaling interfaces?

   - What border nodes should be used when routing? This may depend on
   factors including, but not limited to, source and destination
   address, border node loading, and the time of the connection
   request.

   Requirements:

   - Service and network policies related to configuration and
   provisioning, admission control, and support of Service Level
   Agreements (SLAs) must be flexible, and at the same time simple and
   scalable.

   - The policy-based management framework must be based on standards-
   based policy systems (e.g., IETF COPS).

   - In addition, the IPO service management system must support and
   be backwards compatible with legacy service management systems.
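   The policy decisions listed above could, for illustration only, be
   modeled as a rule table consulted at connection-request time. The
   attribute names and rule set below are invented for this
   non-normative sketch; a real deployment would use a standards-based
   policy system such as COPS, as required above.

```python
from dataclasses import dataclass

@dataclass
class ConnRequest:
    uni_id: str
    conn_type: str      # e.g. "STS-1", "10GbE-LAN" (illustrative)
    src: str
    dst: str

# Illustrative policy table: which connection types each UNI may set up.
ALLOWED_TYPES = {
    "uni-customer-1": {"STS-1", "STS-3c"},
    "uni-customer-2": {"10GbE-LAN"},
}

def admit(request):
    """Admit a request only if policy allows this type on this UNI."""
    allowed = ALLOWED_TYPES.get(request.uni_id, set())
    return request.conn_type in allowed
```

   An unknown UNI receives an empty rule set and is therefore refused
   by default, which is the conservative choice for a trust boundary.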

7.7.  Support of Hierarchical Routing and Signaling

   The routing protocol(s) shall support hierarchical routing
   information dissemination, including topology information
   aggregation and summarization.

   The routing protocol(s) shall minimize global information and keep
   information locally significant as much as possible.

   Over external interfaces only reachability information, next routing
   hop and service capability information should be exchanged. Any other
   network related information shall not leak out to other networks.
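   Topology aggregation and summarization of the kind required here
   can be illustrated by collapsing a domain into a single abstract-
   node advertisement that exposes reachability while withholding
   internal links. The data shapes and names below are non-normative
   assumptions made for the example.

```python
def summarize_domain(domain_id, internal_links, reachable_addresses):
    """Collapse a domain into an abstract-node advertisement.

    internal_links stay locally significant and are never advertised;
    only reachability and the domain identifier cross the boundary.
    """
    return {
        "abstract_node": domain_id,
        "reachability": sorted(reachable_addresses),
        # deliberately no 'links' key: internal topology must not leak
    }

# Illustrative domain with two internal links and two TNA prefixes.
adv = summarize_domain(
    "metro-east",
    internal_links=[("oxc1", "oxc2"), ("oxc2", "oxc3")],
    reachable_addresses={"tna:192.0.2.0/28", "tna:192.0.2.16/28"},
)
```

   The advertisement carries no trace of the internal link list, which
   is the property the leak-prevention requirement above demands.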

8.  Control Plane Functional Requirements for Optical Services

   This section addresses the requirements for the optical control
   plane in support of service provisioning.

   The scope of the control plane includes the control of the
   interfaces and network resources within an optical network, and of
   the interfaces between the optical network and its client networks.
   In other words, it includes both NNI and UNI aspects.

8.1.  Control Plane Capabilities and Functions

   The control capabilities are supported by the underlying control
   functions and protocols built into the control plane.

8.1.1.  Network Control Capabilities

   The following capabilities are required in the control plane to
   successfully deliver automated provisioning for optical services:

   - Neighbor, service and topology discovery

   - Address assignment and resolution

   - Routing information propagation and dissemination

   - Path calculation and selection

   - Connection management

   These capabilities may be supported by a combination of functions
   across the control and management planes.

8.1.2.  Control Plane Functions for Network Control

   The following are essential functions needed to support network
   control capabilities:

   - Signaling
   - Routing
   - Automatic resource, service and neighbor discovery

   Specific requirements for signaling, routing and discovery are
   addressed in Section 9.
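   As a non-normative illustration of the discovery function, a node
   can learn its neighbors by recording the identifiers carried in
   hello messages received on each local port. The message fields and
   names below are invented for the example.

```python
def process_hello(neighbor_table, local_port, hello):
    """Record which remote node/port a hello message arrived from.

    Repeated hellos refresh the entry, so a changed far end (e.g., a
    re-fibered port) is discovered automatically.
    """
    neighbor_table[local_port] = (hello["node_id"], hello["port_id"])
    return neighbor_table

# Illustrative exchange: a hello from node oxc-7 arrives on port-3.
table = {}
process_hello(table, "port-3", {"node_id": "oxc-7", "port_id": "port-12"})
```

   A real discovery protocol would additionally age out stale entries
   and verify the far end bidirectionally; this sketch shows only the
   table-building step.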

   The general requirements for control plane functions to support
   optical networking and service functions include:

   - The control plane must have the capability to establish, tear
   down and maintain the end-to-end connection, and the hop-by-hop
   connection segments between any two end-points.

   - The control plane must have the capability to support traffic-
   engineering requirements, including resource discovery and
   dissemination, constraint-based routing and path computation.

   - The control plane shall support network status or action result
   code responses to any requests over the control interfaces.

   - The control plane shall support call admission control on the UNI
   and connection admission control on the NNI.

   - The control plane shall support resource allocation on both UNI
   and NNI.

   - The control plane shall support graceful release of the network
   resources associated with a connection upon successful connection
   teardown or upon connection failure. Upon successful connection
   teardown, all resources associated with the connection shall become
   available for new requests.

   - The control plane shall support management plane requests for
   connection attributes/status queries.

   - The control plane must have the capability to support various
   protection and restoration schemes for optical channel
   establishment.

   - Control plane failures shall not affect active connections and
   shall not adversely impact the transport and data planes.

   - The control plane should allow separation of the major control
   function entities, including routing, signaling and discovery, and
   should allow different distributions of those functionalities,
   including centralized, distributed or hybrid.

   - The control plane should allow physical separation of the control
   plane from the transport plane, to support either tightly coupled
   or loosely coupled control plane solutions.

   - The control plane should support the routing and signaling proxy,
   allowing it to participate in the normal routing and signaling
   message exchange and processing.

   Security and resilience are crucial issues for the control plane
   and will be addressed in Sections 10 and 11 of this document.
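   The resource allocation and graceful-release requirements above can
   be sketched, non-normatively, as a resource pool whose channels
   return to the free set on teardown of either successful or failed
   connections; all names in this example are illustrative.

```python
class ResourcePool:
    def __init__(self, channels):
        self.free = set(channels)
        self.in_use = {}          # connection id -> set of channels

    def setup(self, conn_id, needed):
        """Admit a connection only if enough channels remain."""
        if needed > len(self.free):
            return None           # admission fails: not enough resources
        chans = {self.free.pop() for _ in range(needed)}
        self.in_use[conn_id] = chans
        return chans

    def teardown(self, conn_id):
        """Graceful release: works for successful or failed connections."""
        self.free |= self.in_use.pop(conn_id, set())
```

   After teardown the released channels are immediately available for
   new requests, matching the requirement stated above.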

8.2.  Signaling Network

   The signaling network consists of a set of signaling channels that
   interconnect to the nodes within control plane
   accordingly to trigger proper mitigation actions in the control
   plane. Therefore, the
   signaling network must

8.4.  Management Plane Interface to Data Plane

   The management plane shall be accessible by each of responsible for the communicating
   nodes (e.g., OXCs).

   - The signaling network must terminate at each of the nodes resource
   management in the
   transport data plane.

   - The signaling It should able to partition the network
   resources and control the allocation and the deallocation of the
   resource for the use by the control plane.

   Data plane shall not monitor and detect the failure and quality
   degradation of the signals and be assumed able to provide signal-failure and
   signal-degrade alarms plus associated detailed fault information to have
   the same
   topology as management plane to trigger and enable the data plane, nor management for fault
   location and repair.

   Management plane failures shall not affect the data plane normal operation of a
   configured and operational control plane traffic be assumed or data plane.

8.5.  Control Plane Interface to be congruently routed.  A signaling
   channel Management Plane

   The control plane is considered a managed entity within a network.
   Therefore, it is subject to management requirements just as other
   managed entities in the communication path for transporting control messages
   between network nodes, and over the UNI (i.e., between are subject to such requirements.

   Control plane should be able to service the UNI entity
   on requests from the user side (UNI-C)
   management plane for end-to-end connection provisioning (e.g. SPC
   connection) and control plane database information query (e.g.
   topology database)
   Control plane shall report all the UNI entity on control plane faults to the
   management plane with detailed fault information

   In general, the management plane shall have authority over the network side (UNI-
   N)). The
   control messages include plane. Management plane should be able to configure the
   routing, signaling messages, routing
   information messages, and other discovery control maintenance protocol messages parameters such as neighbor and service discovery. There are three different
   types hold-down
   timers, hello-interval, etc. to effect the behavior of signaling methods depending on the way control
   plane. In the signaling channel
   is constructed: - In-band signaling: The signaling messages are
   carried over a logical communication channel embedded in case of network failure, both the data-
   carrying optical link or channel. For example, using management plane and
   the overhead
   bytes in SONET data framing as a logical communication channel falls
   into control plane need fault information at the in-band signaling methods.

   - In fiber, Out-of-band signaling: same priority. The signaling messages are carried
   over a dedicated communication channel separate from the optical
   data-bearing channels, but within
   control plane shall be responsible for providing necessary statistic
   data such as call counts, traffic counts to the same fiber. For example, a
   dedicated wavelength or TDM channel may management plane.
   They should be used within available upon the same fiber
   as query from the data channels.

   - Out-of-fiber signaling: management plane.
   The signaling messages are carried over a
   dedicated communication channel or path within different fibers management plane shall be able to
   those used tear down connections
   established by the optical data-bearing channels. For example,
   dedicated optical fiber links or communication path via separate control plane both gracefully and
   independent IP-based network infrastructure forcibly on
   demand.

8.6.  Control Plane Interconnection

   When two (sub)networks are both classified as
   out-of-fiber signaling.

   In-band signaling may interconnected on transport plane level,
   so should be used over a UNI interface, where there are
   relatively few data channels. Proxy signaling is also important over
   the UNI interface, as it is useful to support users unable to signal
   to the optical two corresponding control network via a direct communication channel. In this
   situation a third party system containing the UNI-C entity will
   initiate and process the information exchange on behalf of at the user
   device. control plane.
   The UNI-C entities in this case reside outside of the user in
   separate signaling systems.

   In-fiber, out-of-band control plane interconnection model defines the way how two
   control networks can be interconnected in terms of controlling
   relationship and out-of-fiber signaling channel alternatives control information flow allowed between them.

8.6.1.  Interconnection Models

   There are usually used for NNI interfaces, which generally have significant
   numbers of channels per link. Signaling messages relating to all three basic types of control plane network interconnection
   models: overlay, peer and hybrid, which are defined by the different channels can then be aggregated over IETF IPO
   WG document [IPO_frame], as discussed in the Appendix.

   Choosing the level of coupling depends upon a single or small number of signaling channels.

   The signaling network forms the basis different
   factors, some of the transport network
   control plane. which are:

   - The signaling Variety of clients using the optical network shall support reliable
   message transfer.

   - The signaling Relationship between the client and optical network shall have its own OAM mechanisms.

   - The signaling network Operating model of the carrier

   Overlay model (UNI like model) shall use protocols that support congestion be supported for client to
   optical control mechanisms.

   In addition, the signaling network should support message priorities.
   Message prioritization allows time critical messages, such as those
   used plane interconnection.

   Other models are optional for restoration, client to have priority over other messages, such as
   other connection signaling messages and topology and resource
   discovery messages.

   The signaling network must optical control plane
   interconnection.

   For optical to optical control plane interconnection all three models
   shall be highly scalable, with minimal
   performance degradations as supported. In general, the number priority for support of nodes and node sizes
   increase.

   The signaling network shall
   interconnection models should be highly reliable overlay, hybrid and implement failure
   recovery.

   Security peer, in
   decreasing order.

9.  Requirements for Signaling, Routing and resilience are crucial issues Discovery

9.1.  Requirements for the signaling network
   will be addressed in Section 10 information sharing over UNI, I-NNI and 11 E-NNI

   Different types of this document.

8.3.  Control Plane Interface to Data Plane

   In the situation where the control plane and data plane are provided
   by interfaces shall impose different suppliers, this interface needs requirements and
   functionality due to their different trust relationships.
   Specifically:

   -  Topology information shall not be standardized.
   Requirements for a standard exchanged across E-NNI and UNI.

   -  The control -data plane interface are under
   study. Control plane interface to shall allow the data plane is outside carrier to configure the scope type
   and extent of this document.

8.4.  Management Plane Interface to Data Plane

   The management plane control information exchange across various interfaces.

   -  Address resolution exchange over UNI is responsible needed if an addressing
      directory service is not available.

9.2.  Signaling Functions

   Call and connection control and management signaling messages are
   used for identifying which network
   resources that the control plane may use establishment, modification, status query and release of
   an end-to-end optical connection.  Unless otherwise specified, the
   word "signaling" refers to carry out its control
   functions.  Additional resources may be allocated or existing
   resources deallocated over time.

   Resources both inter-domain and intra-domain
   signaling.

   - The inter-domain signaling protocol shall be able to be allocated agnostic to the control plane intra-
   domain signaling protocol for
   control plane functions include resources involved in setting up and
   tearing down calls and control plane specific resources.  Resources
   allocated to all the control plane for domains within the purpose of setting up network.

   - Signaling shall support both strict and
   tearing down calls include access loose routing.

   - Signaling shall support individual as well as groups (a set of access points), connection point groups (a set of connection points). Resources
   allocated to the control plane
   requests.

   - Signaling shall support fault notifications.

   - Inter-domain signaling shall support per connection, globally
   unique identifiers for the operation of the control plane
   itself may include protected all connection management primitives based on
   a well-defined naming scheme.

   - Inter-domain signaling shall support crank-back and protecting control channels.

   Resources allocated rerouting.

9.3.  Routing Functions

   Routing includes reachability information propagation, network
   topology/resource information dissemination and path computation.
   Network topology/resource information dissemination is to provide
   each node in the control plane by network with information about the management plane
   shall be carrier network
   such that a single node is able to support constraint-based path
   selection.  A mixture of hop-by-hop routing, explicit/source routing
   and hierarchical routing will likely be de-allocated from the control plane on management
   plane request.

   If resources are supporting an active connection used within future transport
   networks.

   All three mechanisms (Hop-by-hop routing, explicit / source-based
   routing and the resources
   are requested to hierarchical routing) must be de-allocated by management plane, the control
   plane shall reject the request.  The management plane supported.  Messages
   crossing untrusted boundaries must either
   wait until not contain information regarding
   the resources are no longer in use or tear down details of an internal network topology.

   Requirements for routing information dissemination:

   - The inter-domain routing protocol shall be agnostic to the
   connection before intra-
   domain routing protocol within any of the resources can be de-allocated from domains within the control
   plane. Management plane failures shall not affect active connections.

   Management plane failures shall not affect network.

   - The exchange of the normal operation following types of information shall be
   supported by inter-domain routing protocols:

      - Inter-domain topology
      - Per-domain topology abstraction
      - Per domain reachability information

   - Metrics for routing decisions supporting load sharing, a
   configured and operational control plane or data plane.

8.5.  Control Plane Interface to Management Plane

   The control plane is considered a managed entity within a network.
   Therefore, it is subject to management requirements just as other
   managed entities in the network are subject to such requirements.

8.5.1.  Soft Permanent Connections (Point-and click provisioning)

   In the case of SPCs, the management plane requests the control plane
   to set up / tear down a connection, just as can be done over a UNI.

   The management plane shall be able to query on demand the status of
   the connection request.

   The control plane shall report to the management plane the success
   or failure of a connection request.  Upon a connection request
   failure, the control plane shall report to the management plane a
   cause code identifying the reason for the failure.
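   The request/status-query exchange above can be sketched as follows.
   This is purely illustrative: the cause codes, class names and
   capacity model are invented here, not defined by the draft.

```python
# Illustrative sketch only: an SPC set-up request whose status the
# management plane can query on demand, with a cause code on failure.

CAUSE_OK = 0
CAUSE_UNREACHABLE = 1
CAUSE_NO_RESOURCES = 2

class ControlPlane:
    def __init__(self, reachable, free_capacity):
        self.reachable = reachable          # set of reachable endpoints
        self.free_capacity = free_capacity  # available bandwidth units
        self.requests = {}                  # request id -> (state, cause)

    def setup(self, req_id, dst, bw):
        """Management-plane SPC set-up request, as over a UNI."""
        if dst not in self.reachable:
            self.requests[req_id] = ("failed", CAUSE_UNREACHABLE)
        elif bw > self.free_capacity:
            self.requests[req_id] = ("failed", CAUSE_NO_RESOURCES)
        else:
            self.free_capacity -= bw
            self.requests[req_id] = ("established", CAUSE_OK)

    def status(self, req_id):
        """On-demand status query from the management plane."""
        return self.requests[req_id]

cp = ControlPlane(reachable={"B"}, free_capacity=10)
cp.setup("req-1", dst="B", bw=4)
cp.setup("req-2", dst="Z", bw=1)
print(cp.status("req-1"))   # ('established', 0)
print(cp.status("req-2"))   # ('failed', 1) -- cause identifies the reason
```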

8.5.2.  Resource Contention Resolution

   Since resources are allocated to the control plane for use, there
   should not be contention between the management plane and the
   control plane for connection set-up.  Only the control plane can
   establish connections for allocated resources.  However, in general,
   the management plane shall have authority over the control plane.

   The control plane shall not assume authority over management plane
   provisioning functions.

   In the case of network failure, both the management plane and the
   control plane need fault information at the same priority.

   The control plane needs fault information in order to perform its
   restoration function (in the event that the control plane is
   providing this function).  However, the control plane needs less
   granular information than that required by the management plane.
   For example, the control plane only needs to know whether a resource
   is good or bad.  The management plane would additionally need to
   know if a resource was degraded or failed, the reason for the
   failure, the time the failure occurred, and so on.

   The control plane shall not assume authority over the management
   plane for its management functions (FCAPS).

   The control plane shall be responsible for providing necessary
   statistical data, such as call counts and traffic counts, to the
   management plane.  These data should be available upon query from
   the management plane.

   The control plane shall support a policy-based CAC function either
   within the control plane or through an interface to a policy server
   outside the network.

   Topological information learned in the discovery process shall be
   able to be queried on demand from the management plane.

   The management plane shall be able to tear down connections
   established by the control plane, both gracefully and forcibly, on
   demand.
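   The statistics-on-query requirement can be sketched minimally.  The
   counter names and interface below are invented for illustration; the
   point is only that the control plane records and reports, while
   FCAPS authority stays with the management plane.

```python
# Sketch only (invented interface): the control plane keeps call and
# traffic counters and answers on-demand queries from the management
# plane.

class ControlPlaneStats:
    def __init__(self):
        self._counters = {"call_attempts": 0,
                          "calls_established": 0,
                          "traffic_units": 0}

    def record_call(self, established, traffic_units=0):
        # Updated by the control plane as calls are processed.
        self._counters["call_attempts"] += 1
        if established:
            self._counters["calls_established"] += 1
            self._counters["traffic_units"] += traffic_units

    def query(self, name):
        """Management-plane query; the control plane only reports, and
        does not take over FCAPS functions from the management plane."""
        return self._counters[name]

stats = ControlPlaneStats()
stats.record_call(established=True, traffic_units=4)
stats.record_call(established=False)
print(stats.query("call_attempts"))       # 2
print(stats.query("calls_established"))   # 1
```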

8.6.  Control Plane Interconnection

   When two (sub)networks are interconnected at the transport plane
   level, so should be the two corresponding control networks at the
   control plane level.

   The control plane interconnection model defines the way in which two
   control networks can be interconnected, in terms of the controlling
   relationship and the control information flow allowed between them.

8.6.1.  Interconnection Models

   There are three basic types of control plane network interconnection
   models: overlay, peer and hybrid, which are defined by the IETF IPO
   WG document [IPO_frame].

   Choosing the level of coupling depends upon a number of different
   factors, some of which are:

   - Variety of clients using the optical network

   - Relationship of the client and optical network

   - Operating model of the carrier

   The overlay model (UNI-like model) shall be supported for client to
   optical control plane interconnection.

   Other models are optional for client to optical control plane
   interconnection.

   For optical to optical control plane interconnection, all three
   models shall be supported.

9.  Requirements for Signaling, Routing and Discovery

9.1.  Requirements for information sharing over UNI, I-NNI and E-NNI

   There are three types of interfaces over which routing information
   dissemination may occur: UNI, I-NNI and E-NNI.  The different types
   of interfaces impose different requirements and functionality due to
   their different trust relationships.  Over the UNI, the user network
   and the transport network form a client-server relationship.
   Therefore, the transport network topology shall not be disseminated
   from the transport network to the user network.

   Information flows expected over the UNI shall support the following:

   - Call Control
   - Resource Discovery
   - Connection Control
   - Connection Selection

   Address resolution exchange over the UNI is needed if an addressing
   directory service is not available.

   Information flows over the I-NNI shall support the following:

   - Resource Discovery
   - Connection Control
   - Connection Selection
   - Connection Routing

   Information flows over the E-NNI shall support the following:

   - Call Control
   - Resource Discovery
   - Connection Control
   - Connection Selection
   - Connection Routing

9.2.  Signaling Functions

   Call and connection control and management signaling messages are
   used for the establishment, modification, status query and release
   of an end-to-end optical connection.
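   The per-interface information flows above amount to a simple policy
   table.  The sketch below is illustrative only (the flow names are
   lower-cased strings, and the lookup function is invented), but it
   shows the intended check: topology-revealing flows never cross the
   UNI.

```python
# A sketch (names invented) of the per-interface information-flow
# policy described above: each interface type permits only certain
# flows.

ALLOWED_FLOWS = {
    "UNI":   {"call control", "resource discovery",
              "connection control", "connection selection"},
    "I-NNI": {"resource discovery", "connection control",
              "connection selection", "connection routing"},
    "E-NNI": {"call control", "resource discovery",
              "connection control", "connection selection",
              "connection routing"},
}

def flow_permitted(interface, flow):
    """True if the given information flow may cross the interface."""
    return flow in ALLOWED_FLOWS[interface]

# The transport network is a server to the UNI client and does not
# disseminate its internal topology or routing across the UNI.
assert not flow_permitted("UNI", "connection routing")
assert flow_permitted("E-NNI", "connection routing")
```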

9.2.1.  Call and connection control

   To support enhanced optical services, such as scheduled bandwidth on
   demand and bundled connections, a call model based on the separation
   of call control and connection control is essential.

   Call control is responsible for end-to-end session negotiation, call
   admission control and call state maintenance, while connection
   control is responsible for setting up the connections associated
   with a call.  A call can correspond to zero, one or more connections
   depending upon the number of connections needed to support the call.

   This call model has the advantage of reducing redundant call control
   information at intermediate (relay) connection control nodes,
   thereby removing the burden of decoding and interpreting the entire
   message and its parameters.  Since call control is provided at the
   ingress to the network or at gateways and network boundaries, the
   relay bearer needs only provide the procedures to support switching
   connections.

   Call control is a signaling association between one or more user
   applications and the network to control the set-up, release,
   modification and maintenance of sets of connections.  Call control
   is used to maintain the association between parties, and a call may
   embody any number of underlying connections, including zero, at any
   instance of time.

   Call control may be realized by one of the following methods:

   - Separation of the call-specific mechanisms and control information
   into parameters carried by a single call/connection protocol

   - Separation of the state machines for call control and connection
   control, whilst signaling information is carried in a single
   call/connection protocol

   - Separation of information and state machines by providing separate
   signaling protocols for call control and connection control

9.5.  Automatic Discovery Functions

   Automatic discovery functions include neighbor, resource and service
   discovery.

9.5.1.  Neighbor discovery

   Neighbor discovery can be described as an instance of auto-discovery
   that is used for associating two network entities within a layer
   network based on a specified adjacency relation.

   The control plane shall support the following neighbor discovery
   capabilities, as described in [ITU-g7714]:

   - Physical media adjacency, which detects and verifies the physical
   layer connectivity between two connected network element ports.

   - Logical network adjacency, which detects and verifies the logical
   network layer connection above the physical layer between two
   network layer specific ports.

   - Control adjacency, which detects and verifies the logical
   neighboring relation between two control entities associated with
   data plane network elements that form either a physical or logical
   adjacency.

   The control plane shall support manual neighbor adjacency
   configuration to either overwrite or supplement the automatic
   neighbor discovery function.

9.5.2.  Resource Discovery

   Resource discovery is concerned with the ability to verify physical
   connectivity between two ports on adjacent network elements, improve
   inventory management of network resources, detect configuration
   mismatches between adjacent ports, associate port characteristics of
   adjacent network elements, etc.  Resource discovery shall be
   supported.

   Resource discovery can be achieved through either manual
   provisioning or automated procedures.  The procedures are generic,
   while the specific mechanisms and control parameters can be
   technology dependent.

   After neighbor discovery, resource verification and monitoring must
   be performed periodically to verify physical attributes to ensure
   compatibility.

9.5.3.  Service Discovery

   Service discovery can be described as an instance of auto-discovery
   that is used for verifying and exchanging the service capabilities
   of a network.  Service discovery can only happen after neighbor
   discovery.  Since the service capabilities of a network can change
   dynamically, service discovery may need to be repeated.

   Service discovery is required for all the optical services
   supported.
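   The ordering constraint above (service discovery only after neighbor
   discovery, repeated as capabilities change) can be sketched simply.
   All names below are invented for illustration.

```python
# Hypothetical sketch of the discovery ordering described above:
# service discovery may only run on a verified adjacency, and may be
# repeated because service capabilities change dynamically.

class DiscoveryAgent:
    def __init__(self):
        self.neighbors = set()   # adjacencies verified by neighbor discovery
        self.services = {}       # neighbor -> last advertised capabilities

    def neighbor_discovered(self, neighbor):
        """Record an adjacency verified by neighbor discovery."""
        self.neighbors.add(neighbor)

    def service_discovery(self, neighbor, capabilities):
        """Exchange service capabilities; valid only on a known
        adjacency, and re-runnable at any time."""
        if neighbor not in self.neighbors:
            raise RuntimeError("service discovery before neighbor discovery")
        self.services[neighbor] = set(capabilities)

agent = DiscoveryAgent()
agent.neighbor_discovered("oxc-2")
agent.service_discovery("oxc-2", {"OC-48", "OC-192"})
agent.service_discovery("oxc-2", {"OC-48"})   # repeated after a change
```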

   Call admission control is a policy function invoked by an
   originating role in a network and may involve cooperation with the
   terminating role in the network.  Note that a call being allowed to
   proceed only indicates that the call may proceed to request one or
   more connections.  It does not imply that any of those connection
   requests will succeed.  Call admission control may also be invoked
   at other network boundaries.

   Connection control is responsible for the overall control of
   individual connections.  Connection control may also be considered
   to be associated with link control.  The overall control of a
   connection is performed by the protocol undertaking the set-up and
   release procedures associated with a connection, and the maintenance
   of the state of the connection.

   Connection admission control is essentially a process that
   determines if there are sufficient resources to admit a connection
   (or re-negotiates resources during a call).  This is usually
   performed on a link-by-link basis, based on local conditions and
   policy.  Connection admission control may refuse the connection
   request.

   The control plane shall support the separation of call control and
   connection control.

   The control plane shall support proxy signaling.

   Inter-domain signaling shall comply with G.8080 and G.7713 (ITU).
   The inter-domain signaling protocol shall be agnostic to the intra-
   domain signaling protocol within any of the domains.

   Inter-domain signaling shall support both strict and loose routing.

   Inter-domain signaling shall not be assumed necessarily congruent
   with routing.  It should not be assumed that the same exact nodes
   are handling both signaling and routing in all situations.

   Inter-domain signaling shall support all call management primitives:

   - Per individual connection

   - Per group of connections

   Inter-domain signaling shall support inter-domain notifications.

   Inter-domain signaling shall support a per-connection global
   connection identifier for all connection management primitives.

   Inter-domain signaling shall support both positive and negative
   responses for all requests, including the cause, when applicable.

   Inter-domain signaling shall support all the connection attributes
   representative of the connection characteristics of the individual
   connections in scope.

   Inter-domain signaling shall support crank-back and rerouting.

   Inter-domain signaling shall also support graceful deletion of
   connections, including of failed connections, if needed.

9.3.  Routing Functions

   Routing includes reachability information propagation, network
   topology/resource information dissemination, and path computation.
   In an optical network, each connection involves two user endpoints.
   When user endpoint A requests a connection to user endpoint B, the
   optical network needs the reachability information to select a path
   for the connection.  If a user endpoint is unreachable, a connection
   request to that user endpoint shall be rejected.  Network
   topology/resource information dissemination provides each node in
   the network with stabilized and consistent information about the
   network, such that a single node is able to support constraint-based
   path selection.

   A mixture of hop-by-hop routing, explicit/source routing and
   hierarchical routing will likely be used within future transport
   networks.  Using hop-by-hop message routing, each node within a
   network makes routing decisions based on the message destination,
   the network topology/resource information, or the local routing
   tables if available.  However, achieving efficient load balancing
   and establishing diverse connections are impractical using
   hop-by-hop routing.  Instead, explicit (or source) routing may be
   used to send signaling messages along a route calculated by the
   source.  This route, described using a set of nodes/links, is
   carried within the signaling message and used in forwarding the
   message.

   Hierarchical routing supports signaling across NNIs.  It allows
   conveying summarized information across I-NNIs, and avoids conveying
   topology information across trust boundaries.  Each signaling
   message contains a list of the domains traversed, and potentially
   details of the route within the domain being traversed.

   All three mechanisms (hop-by-hop routing, explicit/source-based
   routing and hierarchical routing) must be supported.  Messages
   crossing trust boundaries must not contain information regarding the
   details of an internal network topology.  This is particularly
   important in traversing E-UNIs and E-NNIs.  Connection routes and
   identifiers encoded using topology information (e.g., node
   identifiers) must also not be conveyed over these boundaries.

   Requirements for routing information dissemination:

   Routing protocols must propagate the appropriate information
   efficiently to network nodes.  The following requirements apply:

   - The inter-domain routing protocol shall comply with G.8080 (ITU).

   - The inter-domain routing protocol shall be agnostic to the intra-
   domain routing protocol within any of the domains within the
   network.

   - The inter-domain routing protocol shall not impede any of the
   following routing paradigms within individual domains:

      - Hierarchical routing
      - Step-by-step routing
      - Source routing

   - The exchange of the following types of information shall be
   supported by inter-domain routing protocols:

      - Inter-domain topology
      - Per-domain topology abstraction
      - Per-domain reachability information
      - Metrics for routing decisions supporting load sharing, a range
      of service granularity and service types, restoration
      capabilities, diversity, and policy

   - Inter-domain routing protocols shall support per-domain topology
   and resource information abstraction.

   - Inter-domain routing protocols shall support reachability
   information aggregation.

   A major concern for routing protocol performance is scalability and
   stability, which impose the following requirements on the routing
   protocols:

   - The routing protocol performance shall not largely depend on the
   scale of the network (e.g., the number of nodes, the number of
   links, the number of end users, etc.).  The routing protocol design
   shall keep the network size effect as small as possible.

   - The routing protocols shall support the following scalability
   techniques:

   1. The routing protocol shall support hierarchical routing
   information dissemination, including topology information
   aggregation and summarization.

   2. The routing protocol shall be able to minimize global information
   and keep information locally significant as much as possible (e.g.,
   information local to a node, a sub-network, a domain, etc.).  For
   example, a single optical node may have thousands of ports.  The
   ports with common characteristics need not be advertised
   individually.

   3. Over external interfaces, only reachability information, next
   routing hop and service capability information should be exchanged.
   Any other network-related information shall not leak out to other
   networks.

   4. The routing protocol shall distinguish static routing information
   from dynamic routing information.  Static routing information does
   not change due to connection operations, such as neighbor
   relationships, link attributes, total link bandwidth, etc.  On the
   other hand, dynamic routing information updates due to connection
   operations, such as link bandwidth availability, link multiplexing
   fragmentation, etc.

   5. The routing protocol operation shall update dynamic and static
   routing information differently.  Only dynamic routing information
   shall be updated in real time.

   6. The routing protocol shall be able to control the dynamic
   information updating frequency through different types of
   thresholds.  Two types of thresholds could be defined: absolute
   threshold and relative threshold.  The dynamic routing information
   will not be disseminated if its difference is still inside the
   threshold.  When an update has not been sent for a specific time
   (this time shall be configurable by the carrier), an update is
   automatically sent.  The default time could be 30 minutes.

   7. The routing protocol shall support trigger-based and timeout-
   based information update.

   8. The inter-domain routing protocol shall support policy-based
   routing information exchange.

   9. The routing protocol shall be able to support different levels of
   protection/restoration and other resiliency requirements.  These are
   discussed in Section 10.

   All the scalability techniques will impact the accuracy of the
   network resource representation.  The tradeoff between the accuracy
   of the routing information and the routing protocol scalability
   should be well studied.  A routing protocol shall allow the network
   operators to adjust the balance according to their networks'
   specific characteristics.

10.  Requirements for service and control plane resiliency

   Resiliency is a network capability to continue its operations under
   the condition of failures within the network.  The automatic
   switched optical network assumes the separation of control plane and
   data plane.  Therefore, the failures in the network can be divided
   into those affecting the data plane and those affecting the control
   plane.  To provide enhanced optical services, resiliency measures in
   both the data plane and the control plane should be implemented.
   The following failure handling principles shall be supported.

   The control plane shall provide optical service failure detection
   and recovery functions, such that failures in the data plane within
   the control plane coverage can be quickly mitigated.

   The failure of the control plane shall not in any way adversely
   affect the normal functioning of existing optical connections in the
   data plane.

   In general, there shall be no single point of failure for all major
   control plane functions, including signaling, routing, etc.  The
   control plane shall provide reliable transfer of signaling messages
   and flow control mechanisms for easing any congestion within the
   control plane.

10.1.  Service resiliency

   In circuit-switched transport networks, the quality and reliability
   of established optical connections in the transport plane can be
   enhanced by the protection and restoration mechanisms provided by
   the control plane functions.  Rapid recovery is required by
   transport network providers to protect service, and also to support
   stringent Service Level Agreements (SLAs) that dictate high
   reliability and availability for customer connectivity.

   Protection and restoration are closely related techniques for
   repairing network node and link failures.  Protection is a
   collection of failure recovery techniques meant to rehabilitate
   failed connections by pre-provisioning dedicated protection
   connections and switching to the protection circuit once the failure
   is detected.  Restoration is a collection of reactive techniques
   used to rehabilitate failed connections by dynamically rerouting the
   failed connection around the failures using the carrier's shared
   network resources.

   Protection switching is characterized by a shorter recovery time at
   the cost of dedicated network resources, while dynamic restoration
   is characterized by a longer recovery time with efficient resource
   sharing.  Furthermore, protection and restoration can be performed
   either on a per link/span basis or on an end-to-end connection path
   basis.  The former is called local repair, initiated at a node
   closest to the failure, and the latter is called global repair,
   initiated from the ingress node.

   The failure and signal degradation in the transport plane are
   usually technology specific, and therefore shall be monitored and
   detected by the transport plane.

   The transport plane shall report both physical level failure and
   signal degradation to the control plane, in the form of the signal
   failure alarm and the signal degrade alarm.

   The control plane shall support both alarm-triggered and hold-down
   timer based protection switching and dynamic restoration for failure
   recovery.

   Clients will have different requirements for connection
   availability.  These requirements can be expressed in terms of the
   "service level", which can be mapped to different restoration and
   protection options and priority-related connection characteristics,
   such as holding priority (e.g., pre-emptable or not), set-up
   priority, or restoration priority.  However, the mapping of
   individual service levels to a specific set of
   protection/restoration options and connection priorities will be
   determined by individual carriers.

   In order for the network to support multiple grades of service, the
   control plane must support differing protection and restoration
   options on a per-connection basis.

   In order for the network to support multiple grades of service, the
   control plane must support setup priority, restoration priority and
   holding priority on a per-connection basis.

   In general, the following protection schemes shall be considered for
   all protection cases within the network:

   - Dedicated protection: 1+1 and 1:1
   - Shared protection: 1:N and M:N
   - Unprotected

   The control plane shall support the "extra-traffic" capability,
   which allows unprotected traffic to be transmitted on the protection
   circuit.

   The control plane shall support both trunk-side and drop-side
   protection switching.

   The following restoration schemes should be supported:

   - Restorable
   - Un-restorable

   Protection and restoration can be done on an end-to-end basis per
   connection.  They can also be done on a per span or link basis
   between two adjacent network nodes.  These schemes should be
   supported.

   The protection and restoration actions are usually triggered by a
   failure in the network.  However, during network maintenance
   affecting protected connections, a network operator needs to
   proactively force the traffic on the protected connections to switch
   to their protection connections.  Therefore, in order to support
   easy network maintenance, it is required that management-initiated
   protection and restoration be supported.

   Protection and restoration configuration should be based on software
   only.

   The control plane shall allow the modification of protection and
   restoration attributes on a per-connection basis.

   The control plane shall support mechanisms for reserving bandwidth
   resources for restoration.

   The control plane shall support mechanisms for normalizing
   connection routing (reversion) after failure repair.

   Normal connection management operations (e.g., connection deletion)
   shall not result in protection/restoration being initiated.

10.2.  Control plane resiliency

   The control plane may be affected by failures in signaling network
   connectivity, and by software failures (e.g., signaling, topology
   and resource discovery modules).

   The signaling control plane should implement signaling message
   priorities to ensure that restoration messages receive preferential
   treatment, resulting in faster restoration.

   The optical control plane signaling network shall support protection
   and restoration options to enable self-healing in case of failures
   within the control plane.

   Control network failure detection mechanisms shall be able to
   distinguish between control channel and software process failures.

   A control plane failure shall only impact the capability to
   provision new services.

   Fault localization techniques for the isolation of failed control
   resources shall be supported.

   Recovery from control plane failures shall result in complete
   recovery and re-synchronization of the network.

9.4.  Requirements for path selection

   The path selection algorithm must be able to compute a path that
   satisfies a list of service parameter requirements, such as service
   type requirements, bandwidth requirements, protection requirements,
   diversity requirements, bit error rate requirements, latency
   requirements, and including/excluding area requirements.  The
   characteristics of a path are those of its weakest link.  For
   example, if one of the links does not have link protection
   capability, the whole path should be declared as having no
   link-based protection.

   The following are functional requirements on path selection:

   - Path selection shall support shortest path as well as constraint-
   based routing.

   - Various constraints may be required for constraint-based path
   selection, including but not limited to:

      - Cost
      - Load sharing / link utilization
      - Diversity
      - Service Class

   - Path selection shall be able to include/exclude some specific
   locations or network resources, based on policy.

   - Path selection shall be able to support protection/restoration
   capability.  Section 10 discusses this subject in more detail.

   - Path selection shall be able to support different levels of
   diversity, including node, link, SRLG and SRG diversity, as well as
   diversity routing and protection/restoration diversity.

   - Path selection algorithms shall provide carriers the ability to
   support a wide range of services and multiple levels of service
   classes.  Parameters such as service type, transparency, bandwidth,
   latency, bit error rate, etc. may be relevant.

   - Path selection algorithms shall support a set of requested routing
   constraints, and constraints of the networks.  Some of the network
   constraints are technology specific, such as the constraints in all-
   optical networks addressed in [John_Angela_IPO_draft].  The
   requested constraints may include bandwidth requirements, diversity
   requirements, path-specific requirements, as well as restoration
   requirements.

11.  Security Considerations

   In this section, security considerations and requirements for
   optical services, and the associated control plane requirements, are
   described.

11.1.  Optical Network Security Concerns

   Since optical service is directly related to the physical network,
   which is fundamental to a telecommunications infrastructure,
   stringent security assurance mechanisms should be implemented in
   optical networks.

   In terms of security, an optical connection consists of two aspects.
   One is the security of the data plane to which the optical
   connection itself belongs, and the other is the security of the
   control plane.

11.1.1.  Data Plane Security

   Misconnection shall be avoided in order to keep the user's data
   confidential.  For enhancing the integrity and confidentiality of
   data, it may be helpful to support scrambling of data at layer 2 or
   encryption of data at a higher layer.

11.1.2.  Control Plane Security

   It is desirable to decouple the control plane from the data plane
   physically.

   Restoration shall not result in miss-connections (connections
   established to a destination other than that intended), even for
   short periods of time (e.g., during contention resolution).  For
   example, signaling messages used to restore connectivity after
   failure should not be forwarded by a node before contention has been
   resolved.

   Additional security mechanisms should be provided to guard against
   intrusions on the signaling network.  Some of these may be done with
   the help of the management plane.

   - Network information shall not be advertised across exterior
   interfaces (UNI or E-NNI).  The advertisement of network information
   across the E-NNI shall be controlled and limited in a configurable,
   policy-based fashion.  The advertisement of network information
   shall be isolated and managed separately by each administration.

   - The signaling network itself shall be secure, blocking all
   unauthorized access.  The signaling network topology and addresses
   shall not be advertised outside a carrier's domain of trust.

   - Identification, authentication and access control shall be
   rigorously used by network operators for providing access to the
   control plane.

   - Discovery information, including neighbor discovery, service
   discovery, resource discovery and reachability information, should
   be exchanged in a secure way.

   - Information on security-relevant events occurring in the control
   plane, or security-relevant operations performed or attempted in the
   control plane, shall be logged in the management plane.

   - The management plane shall be able to analyze and exploit logged
   data in order to check whether they violate or threaten the security
   of the control plane.

   - The control plane shall be able to generate alarm notifications
   about security-related events to the management plane in an
   adjustable and selectable fashion.

   - The control plane shall support recovery from successful and
   attempted intrusion attacks.

11.2.  Service Access Control

   From a security perspective, network resources should be
   supported.
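
   For illustration only, the association step above can be sketched as
   a hello exchange in which each end of a link advertises its (node,
   port) pair and the peer records the adjacency. The message format,
   names and procedure here are hypothetical; the actual mechanisms are
   specified in G.7713/G.7714.

```python
from dataclasses import dataclass, field

@dataclass
class Hello:
    node_id: str   # sender's node identifier
    port_id: str   # sender's local port on this link

@dataclass
class Node:
    node_id: str
    # local port -> (remote node, remote port) learned via discovery
    adjacencies: dict = field(default_factory=dict)

    def on_hello(self, local_port: str, msg: Hello) -> None:
        # Associate the two subnetwork points terminating this link.
        self.adjacencies[local_port] = (msg.node_id, msg.port_id)

def run_discovery(a: Node, a_port: str, b: Node, b_port: str) -> None:
    # Each end sends a hello over the link; the peer records it.
    b.on_hello(b_port, Hello(a.node_id, a_port))
    a.on_hello(a_port, Hello(b.node_id, b_port))

if __name__ == "__main__":
    oxc1, oxc2 = Node("OXC-1"), Node("OXC-2")
    run_discovery(oxc1, "port-3", oxc2, "port-7")
    print(oxc1.adjacencies)  # {'port-3': ('OXC-2', 'port-7')}
```

   The adjacency remains valid only while the underlying link can carry
   traffic, so in practice such hellos are repeated periodically.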

9.5.2. Resource Discovery

   Resource discovery can be described as an instance of auto-discovery
   that is used for verifying the physical connectivity between two
   ports on adjacent network elements in the network.  Resource
   discovery is also concerned with the ability to improve inventory
   management of network resources, detect configuration mismatches
   between adjacent ports, associate port characteristics of adjacent
   network elements, etc.

   Resource discovery happens between neighbors. A mechanism designed
   for a technology domain can be applied to any pair of NEs
   interconnected through interfaces of the same technology.  However,
   because resource discovery means certain information disclosure
   between two business domains, it is under the service providers'
   security and policy control. In certain network scenarios, a service
   provider who owns the transport network may not be willing to
   disclose any internal addressing scheme to its clients, so a client
   NE may not have the neighbor NE address and port ID in its NE
   resource table.

   Interface ports and their characteristics define the network element
   resources. Each network element can store its resources in a local
   table that could include the switching granularity supported by the
   network element, the ability to support concatenated services, the
   range of bandwidths supported by adaptation, and physical attributes
   such as signal format, transmission bit rate, optics type,
   multiplexing structure, wavelength, and the direction of the flow of
   information. Resource discovery can be achieved through either manual
   provisioning or automated procedures. The procedures are generic
   while the specific control information can be technology dependent.

   Resource discovery can be achieved by several methods. One method is
   self-resource discovery, by which the NE populates its resource table
   with its own physical attributes and resources. Neighbor discovery is
   another method, by which the NE discovers the adjacencies in the
   transport plane and their port associations and populates the
   neighbor NE entries. After neighbor discovery, resource verification
   and monitoring must be performed to verify physical attributes and
   ensure compatibility. Resource monitoring must be performed
   periodically since neighbor discovery and port association are
   repeated periodically.  Further information can be found in
   [GMPLS-ARCH].

   Resource discovery shall be supported.
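
   As an illustrative sketch only (the table layout and attribute names
   are hypothetical), the post-discovery resource verification described
   above amounts to comparing the attributes of two adjacent ports and
   flagging any configuration mismatches:

```python
# Hypothetical per-NE resource table: (node, port) -> port attributes.
PORT_TABLE = {
    ("OXC-1", "port-3"): {"signal": "SDH", "rate": "STM-64", "optics": "LR"},
    ("OXC-2", "port-7"): {"signal": "SDH", "rate": "STM-16", "optics": "LR"},
}

def verify_resources(local, remote, keys=("signal", "rate")):
    """Return attribute mismatches between two adjacent ports."""
    a, b = PORT_TABLE[local], PORT_TABLE[remote]
    return {k: (a[k], b[k]) for k in keys if a[k] != b[k]}

mismatches = verify_resources(("OXC-1", "port-3"), ("OXC-2", "port-7"))
print(mismatches)  # {'rate': ('STM-64', 'STM-16')} -> config mismatch
```

   Because neighbor discovery and port association are repeated
   periodically, such a check would be re-run on the same schedule.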

9.5.3. Service Discovery

   Service discovery can be described as an instance of auto-discovery
   that is used for verifying and exchanging the service capabilities
   that are supported by a particular link connection or trail.  It is
   assumed that service discovery would take place after two subnetwork
   points within the layer network are associated through neighbor
   discovery.  However, since service capabilities of a link connection
   or trail can dynamically change, service discovery can take place at
   any time after neighbor discovery and any number of times as may be
   deemed necessary.

   Service discovery is required for all the optical services supported.
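
   For illustration only (the capability names below are invented),
   service discovery over an established adjacency can be sketched as
   exchanging the capability sets of the two ends and intersecting
   them; re-running the exchange picks up dynamic capability changes:

```python
# Sketch: the services usable over a link connection or trail are
# those supported by both ends; capability names are illustrative.
def service_discovery(local_caps: set, remote_caps: set) -> set:
    return local_caps & remote_caps

link_services = service_discovery(
    {"STM-16", "STM-64", "1+1-protection"},
    {"STM-16", "shared-restoration"},
)
print(link_services)  # {'STM-16'}
```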

10.  Requirements for service and control plane resiliency

   Resiliency is a network capability to continue its operations under
   the condition of failures within the network.

   The automatically switched optical network assumes the separation of
   the control plane and data plane. Therefore the failures in the
   network can be divided into those affecting the data plane and those
   affecting the control plane.  To provide enhanced optical services,
   resiliency measures in both the data plane and control plane should
   be implemented. The following failure handling principles shall be
   supported.

   The control plane shall provide the failure detection and recovery
   functions such that failures in the data plane within the control
   plane coverage can be quickly mitigated.

   The failure of the control plane shall not in any way adversely
   affect the normal functioning of existing optical connections in the
   data plane.

10.1.  Service resiliency

   In circuit-switched transport networks, the quality and reliability
   of the established optical connections in the transport plane can be
   enhanced by the protection and restoration mechanisms provided by the
   control plane functions.  Rapid recovery is required by transport
   network providers to protect service and also to support stringent
   Service Level Agreements (SLAs) that dictate high reliability and
   availability for customer connectivity.

   The choice of a protection/restoration mechanism is a tradeoff
   between network resource utilization (cost) and service interruption
   time. Clearly, minimizing service interruption time is desirable, but
   schemes achieving this usually do so at the expense of network
   resources, resulting in increased cost to the provider. Different
   protection/restoration schemes differ in the spare capacity
   requirements and service interruption time.

   In light of these tradeoffs, transport providers are expected to
   support a range of different levels of service offerings,
   characterized by the recovery speed in the event of network failures.
   For example, a provider's highest offered service level would
   generally ensure the most rapid recovery from network failures.
   However, such schemes (e.g., 1+1, 1:1 protection) generally use a
   large amount of spare restoration capacity, and are thus not cost
   effective for most customer applications. Significant reductions in
   spare capacity can be achieved by protection and restoration using
   shared network resources.

   Clients will have different requirements for connection availability.
   These requirements can be expressed in terms of the "service level",
   which can be mapped to different restoration and protection options
   and priority-related connection characteristics, such as holding
   priority (e.g. pre-emptable or not), set-up priority, or restoration
   priority. However, the mapping of individual service levels to a
   specific set of protection/restoration options and connection
   priorities will be determined by individual carriers.

   In order for the network to support multiple grades of service, the
   control plane must support differing protection and restoration
   options on a per connection basis.

   In order for the network to support multiple grades of service, the
   control plane must support setup priority, restoration priority and
   holding priority on a per connection basis.

   In general, the following protection schemes shall be considered for
   all protection cases within the network:
   - Dedicated protection: 1+1 and 1:1
   - Shared protection: 1:N and M:N.
   - Unprotected

   In general, the following restoration schemes should be considered
   for all restoration cases within the network:
   - Shared restoration capacity.
   - Un-restorable
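
   As one hypothetical illustration of mapping service levels onto
   these options (the level names and assignments below are purely
   illustrative; actual mappings are determined by individual
   carriers):

```python
# Hypothetical carrier policy mapping service levels to the protection
# and restoration options listed above, together with a per-connection
# setup priority (lower value = higher priority).
SERVICE_LEVELS = {
    "platinum": {"protection": "1+1", "restoration": None,     "setup_prio": 0},
    "gold":     {"protection": "1:N", "restoration": "shared", "setup_prio": 1},
    "silver":   {"protection": None,  "restoration": "shared", "setup_prio": 2},
    "bronze":   {"protection": None,  "restoration": None,     "setup_prio": 3},
}

def connection_options(service_level: str) -> dict:
    """Return the protection/restoration options for a requested level."""
    return SERVICE_LEVELS[service_level]

print(connection_options("gold"))
# {'protection': '1:N', 'restoration': 'shared', 'setup_prio': 1}
```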

   Protection and restoration can be done on an end-to-end basis per
   connection. It can also be done on a per span or link basis between
   two adjacent network nodes. Specifically, the link can be a network
   link between two nodes within the network where the P&R scheme
   operates across a NNI interface or a drop-side link between the edge
   device and a switch node where the P&R scheme operates across a UNI
   interface. End-to-end Path protection and restoration schemes operate
   between access points across all NNI and UNI interfaces supporting
   the connection.

   In order for the network to support multiple grades of service, the
   control plane must support differing protection and restoration
   options on a per link or span basis within the network.

   In order for the network to support multiple grades of service, the
   control plane must support differing protection and restoration
   options on a per link or span basis for dropped customer connections.

   The protection and restoration actions are usually triggered by the
   failure in the networks. However, during the network maintenance
   affecting the protected connections, a network operator needs to
   proactively force the traffic on the protected connections to switch
   to their protection connections. Therefore, in order to support easy
   network maintenance, it is required that management-initiated
   protection and restoration be supported.

   To support the protection/restoration options: The control plane
   shall support configurable protection and restoration options via
   software commands (as opposed to needing hardware reconfigurations)
   to change the protection/restoration mode.

   The control plane shall support mechanisms to establish primary and
   protection paths.

   The control plane shall support mechanisms to modify protection
   assignments, subject to service protection constraints.

   The control plane shall support methods for fault notification to the
   nodes responsible for triggering restoration / protection (note that
   the transport plane is designed to provide the needed information
   between termination points.  This information is expected to be
   utilized as appropriate.)

   The control plane shall support mechanisms for signaling rapid re-
   establishment of connection connectivity after failure.

   The control plane shall support mechanisms for reserving bandwidth
   resources for restoration.

   The control plane shall support mechanisms for normalizing connection
   routing (reversion) after failure repair.

   The signaling control plane should implement signaling message
   priorities to ensure that restoration messages receive preferential
   treatment, resulting in faster restoration.

   Normal connection management operations (e.g., connection deletion)
   shall not result in protection/restoration being initiated.

   Restoration shall not result in mis-connections (connections
   established to a destination other than that intended), even for
   short periods of time (e.g., during contention resolution). For
   example, signaling messages, used to restore connectivity after
   failure, should not be forwarded by a node before contention has been
   resolved.

   In the event of there being insufficient bandwidth available to
   restore all connections, restoration priorities / pre-emption should
   be used to determine which connections should be allocated the
   available capacity.
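
   A minimal sketch, under the assumption of a single restoration path
   with a known amount of spare capacity, of how restoration priorities
   can decide which failed connections are allocated that capacity
   (field names are illustrative):

```python
# Failed connections are restored in priority order (lower value =
# higher priority) until the spare capacity is exhausted; the rest
# remain candidates for pre-emption or stay down.
def allocate_restoration(failed, spare_capacity):
    restored, unrestored = [], []
    for conn in sorted(failed, key=lambda c: c["priority"]):
        if conn["bandwidth"] <= spare_capacity:
            spare_capacity -= conn["bandwidth"]
            restored.append(conn["id"])
        else:
            unrestored.append(conn["id"])
    return restored, unrestored

failed = [
    {"id": "c1", "priority": 2, "bandwidth": 10},
    {"id": "c2", "priority": 0, "bandwidth": 10},
    {"id": "c3", "priority": 1, "bandwidth": 10},
]
print(allocate_restoration(failed, 20))  # (['c2', 'c3'], ['c1'])
```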

   The amount of restoration capacity reserved on the restoration paths
   determines the robustness of the restoration scheme to failures. For
   example, a network operator may choose to reserve sufficient capacity
   to ensure that all shared restorable connections can be recovered in
   the event of any single failure event (e.g., a conduit being cut). A
   network operator may instead reserve more or less capacity than
   required to handle any single failure event, or may alternatively
   choose to reserve only a fixed pool independent of the number of
   connections requiring this capacity (i.e., not reserve capacity for
   each individual connection).
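
   For instance, the single-failure reservation policy described above
   can be sketched, per link, as reserving the maximum bandwidth that
   any one failure event would re-route onto that link (the event names
   and figures below are invented for illustration):

```python
# 'demands' maps each single-failure event (e.g. a conduit cut or node
# failure) to the bandwidth that would be re-routed over this link if
# that event occurred.  Reserving the maximum covers any one event
# without dedicating capacity per connection.
def shared_reservation(demands: dict) -> int:
    return max(demands.values(), default=0)

demands = {"conduit-A": 30, "conduit-B": 45, "node-N1": 25}
print(shared_reservation(demands))  # 45 -> covers any single failure
```

   Reserving more than this hedges against multiple failures; reserving
   a smaller fixed pool trades robustness for capacity, as the text
   above notes.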

10.2.  Control plane resiliency

   The control plane may be affected by failures in signaling network
   connectivity and by software failures (e.g., signaling, topology and
   resource discovery modules).

   Fast detection and recovery from failures in the control plane are
   important to allow normal network operation to continue in the event
   of signaling channel failures.

   The optical control plane signaling network shall support protection
   and restoration options to enable it to be self-healing in case of
   failures
   within the control plane.  The control plane shall support the
   necessary options to ensure that no service-affecting module of the
   control plane (software modules or control plane communications) is a
   single point of failure.  The control plane shall provide reliable
   transfer of signaling messages and flow control mechanisms for easing
   any congestion within the control plane.  Control plane failures
   shall not cause failure of established data plane connections.
   Control network failure detection mechanisms shall distinguish
   between control channel and software process failures.

   When there are multiple channels (optical fibers or multiple
   wavelengths) between network elements and / or client devices,
   failure of the control channel will have a much bigger impact on
   service availability than in the single-channel case. It is therefore
   recommended to support a certain level of protection of the control
   channel. Control channel failures may be recovered by either using
   dedicated protection of control channels, or by re-routing control
   traffic within the control plane (e.g., using the self-healing
   properties of IP). To achieve this requires rapid failure detection
   and recovery mechanisms. For dedicated control channel protection,
   signaling traffic may be switched onto a backup control channel
   between the same adjacent pairs of nodes. Such mechanisms protect
   against control channel failure, but not against node failure.

   If a dedicated backup control channel is not available between
   adjacent nodes, or if a node failure has occurred, then signaling
   messages should be re-routed around the failed link / node.
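
   A minimal sketch of the control channel failover logic described
   above, assuming a hello-based liveness check; the timer value and
   channel names are hypothetical:

```python
# If no hello has arrived within the dead interval, the primary
# control channel is declared failed: switch to a dedicated backup
# channel if one exists, otherwise re-route signaling traffic through
# the control network (e.g. relying on IP self-healing).
HELLO_DEAD_INTERVAL = 3.0  # seconds; illustrative value

def select_control_channel(last_hello: float, now: float,
                           backup_available: bool) -> str:
    if now - last_hello < HELLO_DEAD_INTERVAL:
        return "primary"
    return "backup" if backup_available else "reroute-via-control-network"

print(select_control_channel(last_hello=0.0, now=5.0,
                             backup_available=True))  # backup
```

   Note that, as the text states, a dedicated backup channel between
   the same pair of nodes protects against channel failure but not
   against node failure; only re-routing handles the latter.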

   Fault localization techniques for the isolation of failed control
   resources shall be supported.

   Recovery from signaling process failures can be achieved by switching
   to a standby module, or by re-launching the failed signaling module.

   Recovery from software failures shall result in complete recovery of
   network state.

   Control channel failures may occur during connection establishment,
   modification or deletion. If this occurs, then the control channel
   failure must not result in partially established connections being
   left dangling within the network. Connections affected by a control
   channel failure during the establishment process must be removed from
   the network, re-routed (cranked back) or continued once the failure
   has been resolved. In the case of connection deletion requests
   affected by control channel failures, the connection deletion process
   must be completed once the signaling network connectivity is
   recovered.

   Connections shall not be left partially established as a result of a
   control plane failure.  Connections affected by a control channel
   failure during the establishment process must be removed from the
   network, re-routed (cranked back) or continued once the failure has
   been resolved.  Partial connection creations and deletions must be
   completed once the control plane connectivity is recovered.
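
   The recovery rule above can be sketched as a simple per-connection
   decision taken once control plane connectivity returns; the state
   and policy names are illustrative:

```python
# A connection caught mid-establishment by a control channel failure
# must not be left dangling: on recovery it is removed, cranked back
# (re-routed) or continued.  Pending deletions must run to completion;
# established connections are unaffected.
def recover_connection(state: str, policy: str = "remove") -> str:
    if state == "establishing":
        return {"remove": "removed",
                "crankback": "re-routing",
                "continue": "established"}[policy]
    if state == "deleting":
        return "deleted"   # deletion completes after recovery
    return state           # e.g. "established" stays as-is

print(recover_connection("establishing", "crankback"))  # re-routing
print(recover_connection("deleting"))                   # deleted
```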

11.  Security Considerations

   In this section, security considerations and requirements for
   optical services and the associated control plane are described.

11.1.  Optical Network Security Concerns

   Since optical service is directly related to the physical network,
   which is fundamental to a telecommunications infrastructure,
   stringent security assurance mechanisms should be implemented in
   optical networks. When designing equipment, protocols, NMS, and OSS
   that participate in optical service, every security aspect should be
   considered carefully in order to avoid any security holes that could
   endanger an entire network through, for example, Denial of Service
   (DoS) attacks, unauthorized access, or masquerading.

   In terms of security, an optical connection consists of two aspects.
   One is security of the data plane where an optical connection itself
   belongs, and the other is security of the control plane.

11.1.1.  Data Plane Security

   - Misconnection shall be avoided in order to keep the user's data
   confidential.  For enhancing integrity and confidentiality of data,
   it may be helpful to support scrambling of data at layer 2 or
   encryption of data at a higher layer.

11.1.2.  Control Plane Security

   It is desirable to decouple the control plane from the data plane
   physically.

   Additional security mechanisms should be provided to guard against
   intrusions on the signaling network. Some of these may be done with
   the help of the management plane.

   - Network information shall not be advertised across exterior
   interfaces (E-UNI or E-NNI). The advertisement of network information
   across the E-NNI shall be controlled and limited in a configurable
   policy based fashion. The advertisement of network information shall
   be isolated and managed separately by each administration.

   - The signaling network itself shall be secure, blocking all
   unauthorized access.  The signaling network topology and addresses
   shall not be advertised outside a carrier's domain of trust.

   - Identification, authentication and access control shall be
   rigorously used for providing access to the control plane.

   - Discovery information, including neighbor discovery, service
   discovery, resource discovery and reachability information should be
   exchanged in a secure way.  This is an optional NNI requirement.

   - UNI shall support ongoing identification and authentication of the
   UNI-C entity (i.e., each user request shall be authenticated).

   - The UNI and NNI should provide optional mechanisms to ensure origin
   authentication and message integrity for connection management
   requests such as set-up, tear-down and modify and connection
   signaling messages. This is important in order to prevent Denial of
   Service attacks. The NNI (especially E-NNI) should also include
   mechanisms to ensure non-repudiation of connection management
   messages.

   - Information on security-relevant events occurring in the control
   plane or security-relevant operations performed or attempted in the
   control plane shall be logged in the management plane.

   - The management plane shall be able to analyze and exploit logged
   data in order to check whether they violate or threaten the security
   of the control plane.

   - The control plane shall be able to generate alarm notifications
   about security related events to the management plane in an
   adjustable and selectable fashion.

   - The control plane shall support recovery from successful and
   attempted intrusion attacks.

   - The desired level of security depends on the type of interfaces and
   accounting relation between the two adjacent sub-networks or domains.
   Typically, in-band control channels are perceived as more secure than
   out-of-band, out-of-fiber channels, which may be partly co-located
   with a public network.

11.2.  Service Access Control

   From a security perspective, network resources should be protected
   from unauthorized accesses and should not be used by unauthorized
   entities. Service Access Control is the mechanism that limits and
   controls entities trying to access network resources. Especially on
   the public UNI, Connection Admission Control (CAC) functions should
   also support the following security features:

   - CAC should be applied to any entity that tries to access network
   resources through the public UNI (or E-UNI). CAC should include an
   authentication function of an entity in order to prevent masquerade
   (spoofing). Masquerade is fraudulent use of network resources by
   pretending to be a different entity. An authenticated entity should
   be given a service access level in a configurable policy basis.

   - Each entity should be authorized to use network resources according
   to the service level given.

   - With the help of CAC, usage-based billing should be realized. CAC
   and usage-based billing should be stringent enough to avoid any
   repudiation. Repudiation means that an entity involved in a
   communication exchange subsequently denies that fact.
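
   For illustration only, a CAC sketch combining entity authentication
   with a policy-configured service access level; the HMAC check below
   merely stands in for whatever authentication scheme a carrier
   actually deploys, and all names and keys are hypothetical:

```python
import hmac
import hashlib

# Hypothetical provisioning data for the public UNI.
ENTITY_KEYS = {"client-42": b"shared-secret"}   # per-entity credentials
ACCESS_POLICY = {"client-42": "gold"}           # configured access level

def admit(entity: str, request: bytes, mac: str):
    """Authenticate a UNI request; return its access level or None."""
    key = ENTITY_KEYS.get(entity)
    if key is None:
        return None                              # unknown entity
    expected = hmac.new(key, request, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        return None                              # failed authentication
    return ACCESS_POLICY.get(entity, "default")  # service access level

req = b"SETUP bw=STM-16"
good_mac = hmac.new(b"shared-secret", req, hashlib.sha256).hexdigest()
print(admit("client-42", req, good_mac))  # gold
print(admit("client-42", req, "forged"))  # None -> request rejected
```

   Tying each admitted request to an authenticated identity in this way
   is also what makes usage-based billing, and hence non-repudiation,
   workable.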

12.  Acknowledgements
   The authors of this document would like to acknowledge the
   valuable inputs from John Strand, Yangguang Xu,
   Deborah Brunhard, Daniel Awduche, Jim Luciani, Lynn Neir, Wesam
   Alanqar, Tammy Ferris, Mark Jones and Gerry Ash.

13.  References

   [carrier-framework]  Y. Xue et al., "Carrier Optical Services
   Framework and Associated UNI requirements", draft-many-carrier-
   framework-uni-00.txt, IETF, Nov. 2001.

   [G.807]  ITU-T Recommendation G.807 (2001), "Requirements for the
   Automatic Switched Transport Network (ASTN)".

   [G.dcm]  ITU-T New Recommendation G.dcm, "Distributed Connection
   Management (DCM)".

   [G.8080] ITU-T New Recommendation G.8080, "Architecture for the
   Automatically Switched Optical Network (ASON)".

   [oif2001.196.0]  M. Lazer, "High Level Requirements on Optical
   Network Addressing", oif2001.196.0.

   [oif2001.046.2]  J. Strand and Y. Xue, "Routing For Optical Networks
   With Multiple Routing Domains", oif2001.046.2.

   [ipo-impairements]  J. Strand et al.,  "Impairments and Other
   Constraints on Optical Layer Routing", draft-ietf-ipo-
   impairments-00.txt, work in progress.

   [ccamp-gmpls] Y. Xu et al., "A Framework for Generalized Multi-
   Protocol Label Switching (GMPLS)", draft-many-ccamp-gmpls-
   framework-00.txt, July 2001.

   [mesh-restoration] G. Li et al., "RSVP-TE extensions for shared mesh
   restoration in transport networks", draft-li-shared-mesh-
   restoration-00.txt, July 2001.

   [sis-framework]  Yves T'Joens et al., "Service Level
      Specification and Usage Framework",
      draft-manyfolks-sls-framework-00.txt, IETF, Oct. 2000.

   [control-frmwrk] G. Bernstein et al., "Framework for MPLS-based
   control of Optical SDH/SONET Networks", draft-bms-optical-sdhsonet-
   mpls-control-frmwrk-00.txt, IETF, Nov. 2000.

   [ccamp-req]    J. Jiang et al.,  "Common Control and Measurement
   Plane Framework and Requirements",  draft-walker-ccamp-req-00.txt,
   CCAMP, August, 2001.

   [tewg-measure]  W. S. Lai et al., "A Framework for Internet Traffic
   Engineering Measurement", draft-wlai-tewg-measure-01.txt, IETF, May,
   2001.

   [ccamp-g.709]   A. Bellato, "G.709 Optical Transport Networks GMPLS
   Control Framework", draft-bellato-ccamp-g709-framework-00.txt, CCAMP,
   June, 2001.

   [onni-frame]  D. Papadimitriou, "Optical Network-to-Network Interface
   Framework and Signaling Requirements", draft-papadimitriou-onni-
   frame-01.txt, IETF, Nov. 2000.

   [oif2001.188.0]  R. Graveman et al., "OIF Security requirement",
   oif2001.188.0.

Authors' Addresses

   Yong Xue
   UUNET/WorldCom
   22001 Loudoun County Parkway
   Ashburn, VA 20147
   Phone: +1 (703) 886-5358
   Email: yong.xue@wcom.com

   Monica Lazer
   AT&T
   900 ROUTE 202/206N PO BX 752
   BEDMINSTER, NJ  07921-0000
   mlazer@att.com

   Jennifer Yates,
   AT&T Labs
   180 PARK AVE, P.O. BOX 971
   FLORHAM PARK, NJ  07932-0000
   jyates@research.att.com

   Dongmei Wang
   AT&T Labs
   Room B180, Building 103
   180 Park Avenue
   Florham Park, NJ 07932
   mei@research.att.com

   Ananth Nagarajan
   Sprint
   9300 Metcalf Ave
   Overland Park, KS 66212, USA
   ananth.nagarajan@mail.sprint.com

   Hirokazu Ishimatsu
   Japan Telecom Co., LTD
   2-9-1 Hatchobori, Chuo-ku,
   Tokyo 104-0032 Japan
   Phone: +81 3 5540 8493
   Fax: +81 3 5540 8485
   EMail: hirokazu@japan-telecom.co.jp

   Olga Aparicio
   Cable & Wireless Global
   11700 Plaza America Drive
   Reston, VA 20191
   Phone: 703-292-2022
   Email: olga.aparicio@cwusa.com

   Steven Wright
   Science & Technology
   BellSouth Telecommunications
   41G70 BSC
   675 West Peachtree St. NE.
   Atlanta, GA 30375
   Phone +1 (404) 332-2194
   Email: steven.wright@snt.bellsouth.com

Appendix A Commonly Required Signal Rate

   The table below outlines the different signal rates and granularities
   for the SONET and SDH signals.
           SDH        SONET        Transported signal
           name       name
           RS64       STS-192      STM-64 (STS-192) signal without
                       Section      termination of any OH.
           RS16       STS-48       STM-16 (STS-48) signal without
                       Section      termination of any OH.
           MS64       STS-192      STM-64 (STS-192); termination of
                       Line         RSOH (section OH) possible.
           MS16       STS-48       STM-16 (STS-48); termination of
                       Line         RSOH (section OH) possible.
           VC-4-      STS-192c-    VC-4-64c (STS-192c-SPE);
           64c        SPE          termination of RSOH (section OH),
                                     MSOH (line OH) and VC-4-64c TCM OH
                                     possible.
           VC-4-      STS-48c-     VC-4-16c (STS-48c-SPE);
           16c        SPE          termination of RSOH (section OH),
                                     MSOH (line OH) and VC-4-16c  TCM
                                     OH possible.
           VC-4-4c    STS-12c-     VC-4-4c (STS-12c-SPE); termination
                       SPE          of RSOH (section OH), MSOH (line
                                     OH) and VC-4-4c TCM OH possible.
           VC-4       STS-3c-      VC-4 (STS-3c-SPE); termination of
                       SPE          RSOH (section OH), MSOH (line OH)
                                     and VC-4 TCM OH possible.
           VC-3       STS-1-SPE    VC-3 (STS-1-SPE); termination of
                                     RSOH (section OH), MSOH (line OH)
                                     and VC-3 TCM OH possible.
                                     Note: In SDH it could be a higher
                                     order or lower order VC-3, this is
                                     identified by the sub-addressing
                                     scheme. In case of a lower order
                                     VC-3 the higher order VC-4 OH can
                                     be terminated.
           VC-2       VT6-SPE      VC-2 (VT6-SPE); termination of
                                     RSOH (section OH), MSOH (line OH),
                                     higher order VC-3/4 (STS-1-SPE) OH
                                     and VC-2 TCM OH possible.
           -          VT3-SPE      VT3-SPE; termination of section
                                     OH, line OH, higher order STS-1-
                                     SPE OH and VC3-SPE TCM OH
                                     possible.
           VC-12      VT2-SPE      VC-12 (VT2-SPE); termination of
                                     RSOH (section OH), MSOH (line OH),
                                     higher order VC-3/4 (STS-1-SPE) OH
                                     and VC-12 TCM OH possible.
           VC-11      VT1.5-SPE    VC-11 (VT1.5-SPE); termination of
                                     RSOH (section OH), MSOH (line OH),
                                     higher order VC-3/4 (STS-1-SPE) OH
                                     and VC-11 TCM OH possible.
   The tables below outline the different signals, rates and
   granularities that have been defined for the OTN in G.709.

   OTU type         OTU nominal bit rate        OTU bit rate tolerance
   OTU1             255/238 * 2 488 320 kbit/s       20 ppm
   OTU2             255/237 * 9 953 280 kbit/s
   OTU3             255/236 * 39 813 120 kbit/s

   NOTE - The nominal OTUk rates are approximately: 2,666,057.143 kbit/s
   (OTU1), 10,709,225.316 kbit/s (OTU2) and 43,018,413.559 kbit/s
   (OTU3).

   ODU type         ODU nominal bit rate       ODU bit rate tolerance
   ODU1             239/238 * 2 488 320 kbit/s      20 ppm
   ODU2             239/237 * 9 953 280 kbit/s
   ODU3             239/236 * 39 813 120 kbit/s

   NOTE - The nominal ODUk rates are approximately: 2,498,775.126 kbit/s
   (ODU1), 10,037,273.924 kbit/s (ODU2) and 40,319,218.983 kbit/s
   (ODU3).

   ODU Type and Capacity (G.709)

   OPU type   OPU Payload nominal bit rate   OPU Payload bit rate
                                             tolerance
   OPU1       2 488 320 kbit/s                   20 ppm
   OPU2       238/237 * 9 953 280 kbit/s
   OPU3       238/236 * 39 813 120 kbit/s
   NOTE - The nominal OPUk Payload rates are approximately:
   2,488,320.000 kbit/s (OPU1 Payload), 9,995,276.962 kbit/s (OPU2
   Payload) and 40,150,519.322 kbit/s (OPU3 Payload).
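
   The nominal rates quoted in the notes above follow directly from the
   G.709 multiplier formulas in the tables; a short arithmetic check
   (all rates in kbit/s):

```python
# Base client rates (STM-16/64/256 equivalents) and the per-layer
# rate multipliers taken from the tables above.
BASE = {1: 2_488_320, 2: 9_953_280, 3: 39_813_120}
MULT = {
    "OTU": {1: (255, 238), 2: (255, 237), 3: (255, 236)},
    "ODU": {1: (239, 238), 2: (239, 237), 3: (239, 236)},
    "OPU": {1: (1, 1),     2: (238, 237), 3: (238, 236)},
}

def nominal_rate(kind: str, k: int) -> float:
    num, den = MULT[kind][k]
    return BASE[k] * num / den

print(round(nominal_rate("OTU", 1), 3))  # 2666057.143
print(round(nominal_rate("ODU", 2), 3))  # 10037273.924
print(round(nominal_rate("OPU", 3), 3))  # 40150519.322
```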

Appendix B:  Protection and Restoration Schemes

   For the purposes of this discussion, the following
   protection/restoration definitions have been provided:

   Reactive Protection: This is a function performed by equipment
   management functions and/or the transport plane (depending on whether
   it is equipment protection, facility protection, and so on) in
   response to failures or degraded conditions. Thus if the control
   plane and/or management plane is disabled, the reactive protection
   function can still be performed. Reactive protection requires that
   protecting resources be configured and reserved (i.e. they cannot be
   used for other services). The time to exercise the protection is
   technology specific and designed to protect from service
   interruption.

   Proactive Protection: In this form of protection, protection events
   are initiated in response to planned engineering works (often from a
   centralized operations center). Protection events may be triggered
   manually via operator request or based on a schedule supported by a
   soft scheduling function. This soft scheduling function may be
   performed by either the management plane or the control plane but
   could also be part of the equipment management functions. If the
   control plane and/or management plane is disabled and this is where
   the soft scheduling function is performed, the proactive protection
   function cannot be performed. [Note that in the case of a
   hierarchical model of subnetworks, some protection may remain
   available under partial failure (i.e. failure of a single subnetwork
   control plane or management plane controller), since such a failure
   relates only to the entities below the failed subnetwork controller,
   not to its parents or peers.] Proactive protection requires that protecting
   resources be configured and reserved (i.e. they cannot be used for
   other services) prior to the protection exercise. The time to
   exercise the protection is technology specific and designed to
   protect from service interruption.
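
   Both protection variants switch to the same pre-reserved resource;
   only the trigger differs (failure detection versus an operator
   request or schedule). A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class ProtectedService:
    working: str      # working facility carrying the service
    protecting: str   # reserved in advance; unusable by other services
    active: str = ""

    def __post_init__(self):
        self.active = self.active or self.working

    def switch(self):
        """Move traffic onto the reserved protecting resource."""
        self.active = self.protecting

# Reactive protection: the transport plane (or equipment management
# functions) performs the switch when a failure or degradation is detected.
def on_failure_detected(svc: ProtectedService):
    svc.switch()

# Proactive protection: the same switch, but triggered by an operator
# request or a soft scheduling function ahead of planned works.
def on_scheduled_works(svc: ProtectedService):
    svc.switch()
```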

   Reactive Restoration: This is a function performed by either the
   management plane or the control plane. Thus if the control plane
   and/or management plane is disabled, the restoration function cannot
   be performed. [Note that in the case of a hierarchical model of
   subnetworks, some restoration may remain available under partial
   failure (i.e. failure of a single subnetwork control plane or
   management plane controller), since such a failure relates only to
   the entities below the failed subnetwork controller, not to its
   parents or peers.]
   Restoration capacity may be shared among multiple demands. A
   restoration path is created after detecting the failure.  Path
   selection could be done either off-line or on-line. The path
   selection algorithms may also be executed in real-time or non-real
   time depending upon their computational complexity, implementation,
   and specific network context.

   - Off-line computation may be facilitated by simulation and/or
   network planning tools. Off-line computation can help provide
   guidance to subsequent real-time computations.

   - On-line computation may be done whenever a connection request is
   received.
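
   These two computation modes are often combined: pre-computed off-
   line paths are consulted first, with on-line computation as the
   fallback. A minimal sketch (the graph model and names are
   illustrative, and Dijkstra stands in for whatever on-line algorithm
   an implementation actually uses):

```python
import heapq

def online_shortest_path(graph, src, dst):
    """On-line computation: run at connection-request time (Dijkstra)."""
    dist, prev, done = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def select_path(graph, src, dst, offline_paths):
    """Prefer an off-line pre-computed path whose links still exist;
    otherwise fall back to on-line computation."""
    for path in offline_paths.get((src, dst), []):
        if all(b in graph.get(a, {}) for a, b in zip(path, path[1:])):
            return path
    return online_shortest_path(graph, src, dst)
```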

   Off-line and on-line path selection may be used together to make
   network operation more efficient. Operators could use on-line
   computation to handle a subset of path selection decisions and use
   off-line computation for complicated traffic engineering and policy
   related issues such as demand planning, service scheduling, cost
   modeling and global optimization.

   Proactive Restoration: This is a function performed by either the
   management plane or the control plane. Thus if the control plane
   and/or management plane is disabled, the restoration function cannot
   be performed. [Note that in the case of a hierarchical model of
   subnetworks, some restoration may remain available under partial
   failure (i.e. failure of a single subnetwork control plane or
   management plane controller), since such a failure relates only to
   the entities below the failed subnetwork controller, not to its
   parents or peers.] Restoration capacity may be shared among multiple
   demands. Part or all of the restoration path is created before
   detecting the failure, depending on the algorithms used, the types
   of restoration options supported (e.g. shared restoration/connection
   pool, dedicated restoration pool), whether the end-to-end call is
   protected or just the UNI or NNI part, available resources, and so
   on. In the event that the restoration path is fully pre-allocated, a
   protection switch must occur upon failure, similarly to the reactive
   protection switch. The main difference between the options in this
   case is that the switch occurs through actions of the control plane
   rather than the transport plane. Path selection could be done either
   off-line or on-line. The path selection algorithms may also be
   executed in real-time or non-real time depending upon their
   computational complexity, implementation, and specific network
   context.

   - Off-line computation may be facilitated by simulation and/or
   network planning tools. Off-line computation can help provide
   guidance to subsequent real-time computations.

   - On-line computation may be done whenever a connection request is
   received.

   Off-line and on-line path selection may be used together to make
   network operation more efficient. Operators could use on-line
   computation to handle a subset of path selection decisions and use
   off-line computation for complicated traffic engineering and policy
   related issues such as demand planning, service scheduling, cost
   modeling and global optimization.

References

   [ccamp-gmpls]    Y. Xu et al., "A Framework for Generalized Multi-
   Protocol Label Switching (GMPLS)", Work in Progress, IETF.

   [mesh-restoration]  G. Li et al., "RSVP-TE extensions for shared
   mesh restoration in transport networks", Work in Progress, IETF.

   [sls-framework]  Yves T'Joens et al., "Service Level Specification
   and Usage Framework", Work in Progress, IETF.

   [control-frmwrk] G. Bernstein et al., "Framework for MPLS-based
   control of Optical SDH/SONET Networks", Work in Progress, IETF.

   [ccamp-req]    J. Jiang et al., "Common Control and Measurement
   Plane Framework and Requirements", Work in Progress, IETF.

   [tewg-measure]  W. S. Lai et al., "A Framework for Internet Traffic
   Engineering Measurement", Work in Progress, IETF.

   [ccamp-g.709]   A. Bellato, "G.709 Optical Transport Networks GMPLS
   Control Framework", Work in Progress, IETF.

   [onni-frame]  D. Papadimitriou, "Optical Network-to-Network
   Interface Framework and Signaling Requirements", Work in Progress,
   IETF.

   [oif2001.188.0]  R. Graveman et al., "OIF Security Requirements",
   oif2001.188.0.a.

   [ASTN] ITU-T Rec. G.8070/Y.1301 (2001), Requirements for Automatic
   Switched Transport Network (ASTN).

   [ASON] ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the
   Automatic Switched Optical Network (ASON).

   [DCM] ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and
   Connection Management (DCM).

   [ASONROUTING] ITU-T Draft Rec. G.7715/Y.1706 (2002), Routing
   Architecture and Requirements for ASON Networks (work in progress).

   [DISC] ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic
   Discovery.

   [DCN] ITU-T Rec. G.7712/Y.1703 (2001), Architecture and
   Specification of Data Communication Network.

Authors' Addresses

   Yong Xue
   UUNET/WorldCom
   22001 Loudoun County Parkway
   Ashburn, VA 20147
   Email: yong.xue@wcom.com

   Monica Lazer
   AT&T
   900 ROUTE 202/206N PO BX 752
   BEDMINSTER, NJ  07921-0000
   mlazer@att.com

   Jennifer Yates
   AT&T Labs
   180 PARK AVE, P.O. BOX 971
   FLORHAM PARK, NJ  07932-0000
   jyates@research.att.com

   Dongmei Wang
   AT&T Labs
   Room B180, Building 103
   180 Park Avenue
   Florham Park, NJ 07932
   mei@research.att.com

   Ananth Nagarajan
   Sprint
   9300 Metcalf Ave
   Overland Park, KS 66212, USA
   ananth.nagarajan@mail.sprint.com

   Hirokazu Ishimatsu
   Japan Telecom Co., LTD
   2-9-1 Hatchobori, Chuo-ku,
   Tokyo 104-0032 Japan
   Phone: +81 3 5540 8493
   Fax: +81 3 5540 8485
   EMail: hirokazu@japan-telecom.co.jp

   Olga Aparicio
   Cable & Wireless Global
   11700 Plaza America Drive
   Reston, VA 20191
   Phone: 703-292-2022
   Email: olga.aparicio@cwusa.com

   Steven Wright
   Science & Technology
   BellSouth Telecommunications
   41G70 BSC
   675 West Peachtree St. NE.
   Atlanta, GA 30375
   Phone +1 (404) 332-2194
   Email: steven.wright@snt.bellsouth.com

Appendix C: Interconnection of Control Planes

   The interconnection of the IP router (client) and optical control
   planes can be realized in a number of ways depending on the required
   level of coupling.  The control planes can be loosely or tightly
   coupled.  Loose coupling is generally referred to as the overlay
   model and tight coupling is referred to as the peer model.
   Additionally there is the augmented model that is somewhat in between
   the other two models but more akin to the peer model.  The model
   selected determines the following:

   - The details of the topology, resource and reachability information
   advertised between the client and optical networks

   - The level of control IP routers can exercise in selecting paths
   across the optical network

   The next three sections discuss these models in more details and the
   last section describes the coupling requirements from a carrier's
   perspective.

C.1. Peer Model (I-NNI like model)

   Under the peer model, the IP router clients act as peers of the
   optical transport network, such that a single routing protocol
   instance runs over both the IP and optical domains.  In this regard the
   optical network elements are treated just like any other router as
   far as the control plane is concerned. The peer model, although not
   strictly an internal NNI, behaves like an I-NNI in the sense that
   there is sharing of resource and topology information.

   Presumably a common IGP such as OSPF or IS-IS, with appropriate
   extensions, will be used to distribute topology information.  One
   tacit assumption here is that a common addressing scheme will also be
   used for the optical and IP networks.  A common address space can be
   trivially realized by using IP addresses in both IP and optical
   domains.  Thus, the optical network elements become IP addressable
   entities.

   The obvious advantage of the peer model is the seamless
   interconnection between the client and optical transport networks.
   The tradeoff is the tight integration and the optical-specific
   routing information that must be known to the IP clients.

   The discussion above has focused on the client to optical control
   plane inter-connection.  The discussion applies equally well to
   inter-connecting two optical control planes.

C.2. Overlay Model (UNI-like model)

   Under the overlay model, the IP client routing, topology
   distribution, and signaling protocols are independent of the routing,
   topology distribution, and signaling protocols at the optical layer.
   This model is conceptually similar to the classical IP over ATM
   model, but applied to an optical sub-network directly.

   Though the overlay model dictates that the client and optical network
   are independent, this still allows the optical network to re-use IP
   layer protocols to perform the routing and signaling functions.

   In addition to the protocols being independent, the addressing scheme
   used between the client and optical network must be independent in
   the overlay model.  That is, the use of IP layer addressing in the
   clients must not place any specific requirement upon the addressing
   used within the optical control plane.

   The overlay model would provide a UNI to the client networks through
   which the clients could request to add, delete or modify optical
   connections.  The optical network would additionally provide
   reachability information to the clients but no topology information
   would be provided across the UNI.

C.3. Augmented Model (E-NNI like model)

   Under the augmented model, there are actually separate routing
   instances in the IP and optical domains, but information from one
   routing instance is passed through the other routing instance.  For
   example, external IP addresses could be carried within the optical
   routing protocols to allow reachability information to be passed to
   IP clients.  A typical implementation would use BGP between the IP
   client and optical network.

   The augmented model, although not strictly an external NNI, behaves
   like an E-NNI in that there is limited sharing of information.
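
   The operational differences among the three models reduce to what
   information crosses the client/optical interface. A small sketch
   encoding the distinctions above (the encoding itself is
   illustrative, not from this draft):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InterconnectionModel:
    name: str
    nni_analogue: str          # the NNI flavor the model behaves like
    shares_topology: bool      # topology/resource info crosses the interface
    shares_reachability: bool  # reachability info crosses the interface
    common_addressing: bool    # one address space spans both networks

PEER = InterconnectionModel("peer", "I-NNI", True, True, True)
OVERLAY = InterconnectionModel("overlay", "UNI", False, True, False)
# Augmented: separate routing instances, but reachability leaks between
# them (e.g. via BGP); a common address space is not required.
AUGMENTED = InterconnectionModel("augmented", "E-NNI", False, True, False)
```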

   Generally in a carrier environment there will be more than just IP
   routers connected to the optical network.  Some other examples of
   clients could be ATM switches or SONET ADM equipment.  This may drive
   the decision towards loose coupling to prevent undue burdens upon
   non-IP router clients.  Also, loose coupling would ensure that future
   clients are not hampered by legacy technologies.

   Additionally, a carrier may for business reasons want a separation
   between the client and optical networks.  For example, the ISP
   business unit may not want to be tightly coupled with the optical
   network business unit.  Another reason for separation might be just
   pure politics that play out in a large carrier.  That is, it would
   seem unlikely to force the optical transport network to run the same
   set of protocols as the IP router networks.  Also, by forcing the
   same set of protocols in both networks the evolution of the networks
   is directly tied together.  That is, it would seem you could not
   upgrade the optical transport network protocols without taking into
   consideration the impact on the IP router network (and vice versa).

   Operating models also play a role in deciding the level of coupling.
   [Freeland] gives four main operating models envisioned for an optical
   transport network:

   - ISP owning all of its own infrastructure (i.e., including fiber
   and duct to the customer premises)

   - ISP leasing some or all of its capacity from a third party

   - Carriers carrier providing layer 1 services

   - Service provider offering multiple layer 1, 2, and 3 services over
   a common infrastructure

   Although relatively few, if any, ISPs fall into category 1, it would
   seem the most likely of the four to use the peer model.  The other
   operating models lend themselves more to an overlay model.  Most
   carriers would fall into category 4 and thus would most likely choose
   an overlay model architecture.

Full Copyright Statement

   Copyright (C) The Internet Society (2002).  All Rights Reserved.

   This document and translations of it may be copied and furnished to
   others, and derivative works that comment on or otherwise explain it
   or assist in its implementation may be prepared, copied, published
   and distributed, in whole or in part, without restriction of any
   kind, provided that the above copyright notice and this paragraph are
   included on all such copies and derivative works.  However, this
   document itself may not be modified in any way, such as by removing
   the copyright notice or references to the Internet Society or other
   Internet organizations, except as needed for the purpose of
   developing Internet standards in which case the procedures for
   copyrights defined in the Internet Standards process must be
   followed, or as required to translate it into languages other than
   English.

   The limited permissions granted above are perpetual and will not be
   revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on an
   "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
   TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
   BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
   HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.