Network Working Group                                 E. Crawley, Editor
Request for Comments: 2382                                Argon Networks
Category: Informational                                        L. Berger
                                                            Fore Systems
                                                               S. Berson
                                                                     ISI
                                                                F. Baker
                                                           Cisco Systems
                                                               M. Borden
                                                            Bay Networks
                                                             J. Krawczyk
                                               ArrowPoint Communications
                                                             August 1998

         A Framework for Integrated Services and RSVP over ATM

Status of this Memo

   This memo provides information for the Internet community.  It does
   not specify an Internet standard of any kind.  Distribution of this
   memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (1998).  All Rights Reserved.

Abstract

   This document outlines the issues and framework related to providing
   IP Integrated Services with RSVP over ATM.  It provides an overall
   approach to the problem(s) and related issues.  These issues and
   problems are to be addressed in further documents from the ISATM
   subgroup of the ISSLL working group.

1. Introduction

   The Internet currently has one class of service normally referred to
   as "best effort."  This service is typified by first-come, first-
   serve scheduling at each hop in the network.  Best effort service
   has worked well for electronic mail, World Wide Web (WWW) access,
   file transfer (e.g. ftp), etc.  For real-time traffic such as voice
   and video, the current Internet has performed well only across
   unloaded portions of the network.  In order to provide quality
   real-time traffic, new classes of service and a QoS signalling
   protocol are being introduced in the Internet [1,6,7], while
   retaining the existing best effort service.  The QoS signalling
   protocol is RSVP [1], the Resource ReSerVation Protocol, and the
   service models are described in [6,7].

   One of the important features of ATM technology is the ability to
   request a point-to-point Virtual Circuit (VC) with a specified
   Quality of Service (QoS).  An additional feature of ATM technology
   is the ability to request point-to-multipoint VCs with a specified
   QoS.  Point-to-multipoint VCs allow leaf nodes to be added and
   removed from the VC dynamically, and so provide a mechanism for
   supporting IP multicast.  It is only natural that RSVP and the
   Internet Integrated Services (IIS) model would like to utilize the
   QoS properties of any underlying link layer including ATM, and this
   memo concentrates on ATM.

   Classical IP over ATM [10] has solved part of this problem,
   supporting IP unicast best effort traffic over ATM.  Classical IP
   over ATM is based on a Logical IP Subnetwork (LIS), which is a
   separately administered IP subnetwork.  Hosts within an LIS
   communicate using the ATM network, while hosts from different
   subnets communicate only by going through an IP router (even though
   it may be possible to open a direct VC between the two hosts over
   the ATM network).  Classical IP over ATM provides an Address
   Resolution Protocol (ATMARP) for ATM edge devices to resolve IP
   addresses to native ATM addresses.  For any pair of IP/ATM edge
   devices (i.e. hosts or routers), a single VC is created on demand
   and shared for all traffic between the two devices.  A second part
   of the RSVP and IIS over ATM problem, IP multicast, is being solved
   with MARS [5], the Multicast Address Resolution Server.

   MARS complements ATMARP by allowing an IP address to resolve into a
   list of native ATM addresses, rather than just a single address.

   The ATM Forum's LAN Emulation (LANE) [17, 20] and Multiprotocol Over
   ATM (MPOA) [18] also address the support of IP best effort traffic
   over ATM through similar means.

   A key remaining issue for IP in an ATM environment is the
   integration of RSVP signalling and ATM signalling in support of the
   Internet Integrated Services (IIS) model.  There are two main areas
   involved in supporting the IIS model: QoS translation and VC
   management.  QoS translation concerns mapping a QoS from the IIS
   model to a proper ATM QoS, while VC management concentrates on how
   many VCs are needed and which traffic flows are routed over which
   VCs.

1.1 Structure and Related Documents

   This document provides a guide to the issues for IIS over ATM.  It
   is intended to frame the problems that are to be addressed in
   further documents.  In this document, the modes and models for RSVP
   operation over ATM will be discussed, followed by a discussion of
   management of ATM VCs for RSVP data and control.  Lastly, the topic
   of encapsulations will be discussed in relation to the models
   presented.

   This document is part of a group of documents from the ISATM
   subgroup of the ISSLL working group related to the operation of
   IntServ and RSVP over ATM.  [14] discusses the mapping of the
   IntServ Controlled Load and Guaranteed Service models to ATM.  [15]
   and [16] discuss detailed implementation requirements and
   guidelines for RSVP over ATM, respectively.  While these documents
   may not address all the issues raised in this document, they should
   provide enough information for development of solutions for IntServ
   and RSVP over ATM.

1.2 Terms

   Several terms used in this document are used in many contexts,
   often with different meanings.  These terms are used in this
   document with the following meanings:

   - Sender is used in this document to mean the ingress point to the
     ATM network or "cloud".

   - Receiver is used in this document to refer to the egress point
     from the ATM network or "cloud".

   - Reservation is used in this document to refer to an RSVP initiated
     request for resources.  RSVP initiates requests for resources
     based on RESV message processing.  RESV messages that simply
     refresh state do not trigger resource requests.  Resource requests
     may be made based on RSVP sessions and RSVP reservation styles.
     RSVP styles dictate whether the reserved resources are used by one
     sender or shared by multiple senders.  See [1] for details of
     each.  Each new request is referred to in this document as an RSVP
     reservation, or simply reservation.

   - Flow is used to refer to the data traffic associated with a
     particular reservation.  The specific meaning of flow is RSVP
     style dependent.  For shared style reservations, there is one flow
     per session.  For distinct style reservations, there is one flow
     per sender (per session).
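
   Purely as an illustration of the flow definition above (this sketch
   is not part of RSVP or of this framework; the function name and the
   state representation are hypothetical, while the style
   abbreviations are those of [1]), per-flow state could be keyed as
   follows, depending on the reservation style:

      # Illustrative sketch only; names are hypothetical.

      def flow_key(session, style, sender=None):
          """Key under which per-flow state might be stored.

          For shared styles (WF, SE) there is one flow per session;
          for the distinct style (FF) there is one flow per sender
          within the session."""
          if style in ("WF", "SE"):
              return (session, None)
          return (session, sender)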

2. Issues Regarding the Operation of RSVP and IntServ over ATM

   The issues related to RSVP and IntServ over ATM fall into several
   general classes:

   - How to make RSVP run over ATM now and in the future
   - When to set up a virtual circuit (VC) for a specific Quality of
     Service (QoS) related to RSVP
   - How to map the IntServ models to ATM QoS models
   - How to know that an ATM network is providing the QoS necessary
     for a flow
   - How to handle the many-to-many connectionless features of IP
     multicast and RSVP in the one-to-many connection-oriented world
     of ATM

2.1 Modes/Models for RSVP and IntServ over ATM

   [3] discusses several different models for running IP over ATM
   networks.  [17, 18, and 20] also provide models for IP in ATM
   environments.  Any one of these models would work as long as the
   RSVP control packets (IP protocol 46) and data packets can follow
   the same IP path through the network.  It is important that the
   RSVP PATH messages follow the same IP path as the data such that
   appropriate PATH state may be installed in the routers along the
   path.  For an ATM subnetwork, this means the ingress and egress
   points must be the same in both directions for the RSVP control and
   data messages.  Note that the RSVP protocol does not require
   symmetric routing.  The PATH state installed by RSVP allows the
   RESV messages to "retrace" the hops that the PATH message crossed.
   Within each of the models for IP over ATM, there are decisions
   about using different types of data distribution in ATM as well as
   different connection initiation.  The following sections look at
   some of the different ways QoS connections can be set up for RSVP.

2.1.1 UNI 3.x and 4.0

   In the User Network Interface (UNI) 3.0 and 3.1 specifications [8,9]
   and the 4.0 specification, both permanent and switched virtual
   circuits (PVC and SVC) may be established with a specified service
   category (CBR, VBR, and UBR for UNI 3.x, and VBR-rt and ABR for 4.0)
   and specific traffic descriptors in point-to-point and point-to-
   multipoint configurations.  Additional QoS parameters are not
   available in UNI 3.x and those that are available are vendor-
   specific.  Consequently, the level of QoS control available in
   standard UNI 3.x networks is somewhat limited.  However, using these
   building blocks, it is possible to use RSVP and the IntServ models.
   ATM 4.0 with the Traffic Management (TM) 4.0 specification [21]
   allows much greater control of QoS.  [14] provides the details of
   mapping the IntServ models to UNI 3.x and 4.0 service categories and
   traffic parameters.

2.1.1.1 Permanent Virtual Circuits (PVCs)

   PVCs emulate dedicated point-to-point lines in a network, so the
   operation of RSVP can be identical to the operation over any point-
   to-point network.  The QoS of the PVC must be consistent with and
   equivalent to the type of traffic and service model used.  The
   devices on either end of the PVC have to provide traffic control
   services in order to multiplex multiple flows over the same PVC.
   With PVCs, there is no issue of when or how long it takes to set up
   VCs, since they are made in advance, but the resources of the PVC
   are limited to what has been pre-allocated.  PVCs that are not
   fully utilized can tie up ATM network resources that could be used
   for SVCs.

   An additional issue for using PVCs is one of network engineering.
   Frequently, multiple PVCs are set up such that if all the PVCs were
   running at full capacity, the link would be over-subscribed.  This
   frequently used "statistical multiplexing gain" makes providing IIS
   over PVCs very difficult and unreliable.  Any application of IIS
   over PVCs has to be assured that the PVCs are able to receive all
   the requested QoS.

2.1.1.2 Switched Virtual Circuits (SVCs)

   SVCs allow paths in the ATM network to be set up "on demand".  This
   allows flexibility in the use of RSVP over ATM, along with some
   complexity.  Parallel VCs can be set up to allow best-effort and
   better service class paths through the network, as shown in Figure
   1.  The cost and time to set up SVCs can impact their use.  For
   example, it may be better to initially route QoS traffic over
   existing VCs until an SVC with the desired QoS can be set up for
   the flow.  Scaling issues can come into play if a single RSVP flow
   is used per VC, as will be discussed in Section 4.3.1.1.  The
   number of VCs in any ATM device may also be limited, so the number
   of RSVP flows that can be supported by a device can be strictly
   limited to the number of VCs available, if we assume one flow per
   VC.  Section 4 discusses the topic of VC management for RSVP in
   greater detail.

      Data Flow ==========>

          +-----+
          |     | -------------->  +----+
          | Src | -------------->  | R1 |
          |    *| -------------->  +----+
          +-----+     QoS VCs
              /\
              ||
          VC  ||
         Initiator

                  Figure 1: Data Flow VC Initiation

   While RSVP is receiver oriented, ATM is sender oriented.  This
   might seem like a problem, but the sender or ingress point receives
   RSVP RESV messages and can determine whether a new VC has to be set
   up to the destination or egress point.
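
   The following sketch is purely illustrative and is not drawn from
   any specification; the helper callables it takes (qos_satisfies,
   map_flowspec_to_atm, atm_setup_vc) and the object fields it assumes
   are hypothetical.  It shows the kind of decision an ingress point
   might make when a non-refresh RESV arrives: reuse an existing VC
   whose QoS already covers the request, or signal a new QoS SVC
   toward the egress point.

      # Illustrative sketch only; all helpers are hypothetical.

      def handle_resv(resv, egress_atm_addr, vc_table,
                      qos_satisfies, map_flowspec_to_atm,
                      atm_setup_vc):
          """Invoked at the ATM ingress point for a non-refresh RESV.
          Reuses an existing QoS VC to the egress point if one already
          satisfies the request; otherwise signals a new SVC (the VC
          is always initiated by the subnet sender, per Figure 1)."""
          for vc in vc_table.get(egress_atm_addr, []):
              if qos_satisfies(vc.qos, resv.flowspec):
                  vc.attach_flow(resv.flow_id)   # reuse existing VC
                  return vc
          # No suitable VC: translate the IntServ flowspec to ATM
          # traffic parameters and signal a new QoS SVC.
          atm_params = map_flowspec_to_atm(resv.flowspec)
          new_vc = atm_setup_vc(egress_atm_addr, atm_params)
          new_vc.attach_flow(resv.flow_id)
          vc_table.setdefault(egress_atm_addr, []).append(new_vc)
          return new_vc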

2.1.1.3 Point to MultiPoint

   In order to provide QoS for IP multicast, an important feature of
   RSVP, data flows must be distributed to multiple destinations from
   a given source.  Point-to-multipoint VCs provide such a mechanism.
   It is important to map the actions of IP multicasting and RSVP
   (e.g. IGMP JOIN/LEAVE and RSVP RESV/RESV TEAR) to add party and
   drop party functions for ATM.  Point-to-multipoint VCs as defined
   in UNI 3.x and UNI 4.0 have a single service class for all
   destinations.  This is contrary to the RSVP "heterogeneous
   receiver" concept.  It is possible to set up a different VC to each
   receiver requesting a different QoS, as shown in Figure 2.  This
   again can run into scaling and resource problems when managing
   multiple VCs on the same interface to different destinations.

                                     +----+
                            +------> | R1 |
                            |        +----+
                            |
                            |        +----+
      +-----+ --------------+------> | R2 |
      |     |                        +----+    Receiver Request Types:
      | Src |                                    ----> QoS 1 and QoS 2
      |     | ..............+        +----+      ....> Best-Effort
      +-----+               +......> | R3 |
         /\                 :        +----+
         ||                 :
         ||                 :        +----+
         ||                 +......> | R4 |
         ||                          +----+
       Single
    IP Multicast
        Group

               Figure 2: Types of Multicast Receivers
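
   Purely as an illustrative sketch (the grouping function and the VC
   setup callable are hypothetical, and QoS descriptors are assumed to
   be comparable and hashable), one way an ingress point could realize
   the arrangement of Figure 2 is to group receivers by their
   requested service and open one point-to-multipoint VC per group,
   since a UNI point-to-multipoint VC carries a single service class:

      # Illustrative sketch only; open_p2mp_vc is a hypothetical
      # callable that signals a point-to-multipoint VC to the given
      # leaves with the given QoS.

      from collections import defaultdict

      def build_p2mp_vcs(receivers, open_p2mp_vc):
          """receivers: iterable of (atm_address, requested_qos),
          where requested_qos may be a flowspec or "best-effort".
          Returns a mapping of requested QoS -> p2mp VC."""
          groups = defaultdict(list)
          for atm_addr, qos in receivers:
              groups[qos].append(atm_addr)

          vcs = {}
          for qos, leaves in groups.items():
              # One VC per distinct QoS; leaves are added as parties.
              vcs[qos] = open_p2mp_vc(leaves, qos)
          return vcs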

   RSVP sends messages both up and down the multicast distribution
   tree.  In the case of a large ATM cloud, this could result in an
   RSVP message implosion at an ATM ingress point with many receivers.

   ATM 4.0 expands on the point-to-multipoint VCs by adding a Leaf
   Initiated Join (LIJ) capability.  LIJ allows an ATM end point to
   join into an existing point-to-multipoint VC without necessarily
   contacting the source of the VC.  This can reduce the burden on the
   ATM source point for setting up new branches and more closely
   matches the receiver-based model of RSVP and IP multicast.
   However, many of the same scaling issues exist, and the new
   branches added to a point-to-multipoint VC must use the same QoS as
   existing branches.

2.1.1.4 Multicast Servers

   IP-over-ATM has the concept of a multicast server or reflector that
   can accept cells from multiple senders and send them via a point-
   to-multipoint VC to a set of receivers.  This moves the VC scaling
   issues noted previously for point-to-multipoint VCs to the
   multicast server.  Additionally, the multicast server will need to
   know how to interpret RSVP packets or receive instruction from
   another node so it will be able to provide VCs of the appropriate
   QoS for the RSVP flows.

2.1.2 Hop-by-Hop vs. Short Cut

   If the ATM "cloud" is made up of a number of logical IP subnets
   (LISs), then it is possible to use "short cuts" from a node on one
   LIS directly to a node on another LIS, avoiding router hops between
   the LISs.  NHRP [4] is one mechanism for determining the ATM
   address of the egress point on the ATM network given a destination
   IP address.  It is a topic for further study to determine if
   significant benefit is achieved from short cut routes vs. the extra
   state required.

2.1.3 Future Models

   ATM is constantly evolving.  If we assume that RSVP and IntServ
   applications are going to be widespread, it makes sense to consider
   changes to ATM that would improve the operation of RSVP and IntServ
   over ATM.  Similarly, the RSVP protocol and IntServ models will
   continue to evolve, and changes that affect them should also be
   considered.  The following are a few ideas that have been discussed
   that would make the integration of the IntServ models and RSVP
   easier or more complete.  They are presented here to encourage
   continued development and discussion of ideas that can aid in the
   integration of RSVP, IntServ, and ATM.

2.1.3.1 Heterogeneous Point-to-MultiPoint

   The IntServ models and RSVP support the idea of "heterogeneous
   receivers"; e.g., not all receivers of a particular multicast flow
   are required to ask for the same QoS from the network, as shown in
   Figure 2.

   The most important scenario that can utilize this feature occurs
   when some receivers in an RSVP session ask for a specific QoS while
   others receive the flow with a best-effort service.  In some cases
   where there are multiple senders on a shared-reservation flow
   (e.g., an audio conference), an individual receiver only needs to
   reserve enough resources to receive one sender at a time.  However,
   other receivers may elect to reserve more resources, perhaps to
   allow for some amount of "over-speaking" or in order to record the
   conference (post processing during playback can separate the
   senders by their source addresses).
The term "variegated VC" has been coined to describe a point-to- In order to prevent denial-of-service attacks via reservations, the
multipoint VC that allows a different QoS on each branch. This approach service models do not allow the service elements to simply drop non-
seems to match the spirit of the Integrated Service and RSVP models, conforming packets. For example, Controlled Load service model [7]
but some thought has to be put into the cell drop strategy when assigns non-conformant packets to best-effort status (which may
traversing from a "bigger" branch to a "smaller" one. The "best-effort result in packet drops if there is congestion).
for non-conforming packets" behavior must also be retained. Early
Packet Discard (EPD) schemes must be used so that all the cells for a
given packet can be discarded at the same time rather than discarding
only a few cells from several packets making all the packets useless to
the receivers.
2.1.3.2 Lightweight Signalling Emulating these behaviors over an ATM network is problematic and
needs to be studied. If a single maximum QoS is used over a point-
to-multipoint VC, resources could be wasted if cells are sent over
certain links where the reassembled packets will eventually be
dropped. In addition, the "maximum QoS" may actually cause a
degradation in service to the best-effort branches.

   The term "variegated VC" has been coined to describe a point-to-
   multipoint VC that allows a different QoS on each branch.  This
   approach seems to match the spirit of the Integrated Services and
   RSVP models, but some thought has to be put into the cell drop
   strategy when traversing from a "bigger" branch to a "smaller" one.
   The "best-effort for non-conforming packets" behavior must also be
   retained.  Early Packet Discard (EPD) schemes must be used so that
   all the cells for a given packet can be discarded at the same time,
   rather than discarding only a few cells from several packets,
   making all the packets useless to the receivers.

2.1.3.2 Lightweight Signalling

   Q.2931 signalling is very complete and carries with it a
   significant burden for signalling in all possible public and
   private connections.  It might be worth investigating a lighter
   weight signalling mechanism for faster connection setup in private
   networks.

2.1.3.3 QoS Renegotiation

   Another change that would help RSVP over ATM is the ability to
   request a different QoS for an active VC.  This would eliminate the
   need to set up and tear down VCs as the QoS changed.  RSVP allows
   receivers to change their reservations and senders to change their
   traffic descriptors dynamically.  This, along with the merging of
   reservations, can create a situation where the QoS needs of a VC
   can change.  Allowing changes to the QoS of an existing VC would
   allow these features to work without creating a new VC.  In the
   ITU-T ATM specifications [24,25], some cell rates can be
   renegotiated or changed.  Specifically, the Peak Cell Rate (PCR) of
   an existing VC can be changed and, in some cases, QoS parameters
   may be renegotiated during the call setup phase.  It is unclear if
   this is sufficient for the QoS renegotiation needs of the IntServ
   models.

2.1.3.4 Group Addressing

   The model of one-to-many communications provided by point-to-
   multipoint VCs does not really match the many-to-many
   communications provided by IP multicasting.  A scalable mapping
   from IP multicast addresses to an ATM "group address" can address
   this problem.

2.1.3.5 Label Switching

   The MultiProtocol Label Switching (MPLS) working group is
   discussing methods for optimizing the use of ATM and other switched
   networks for IP by encapsulating the data with a header that is
   used by the interior switches to achieve faster forwarding lookups.
   [22] discusses a framework for this work.  It is unclear how this
   work will affect IntServ and RSVP over label switched networks, but
   there may be some interactions.

2.1.4 QoS Routing

   RSVP is explicitly not a routing protocol.  However, since it
   conveys QoS information, it may prove to be a valuable input to a
   routing protocol that can make path determinations based on QoS and
   network load information.  In other words, instead of asking for
   just the IP next hop for a given destination address, it might be
   worthwhile for RSVP to provide information on the QoS needs of the
   flow if routing has the ability to use this information in order to
   determine a route.  Other forms of QoS routing have existed in the
   past, such as using the IP TOS and Precedence bits to select a path
   through the network.  Some have discussed using these same bits to
   select one of a set of parallel ATM VCs as a form of QoS routing.
   ATM routing has also considered the problem of QoS routing through
   the Private Network-to-Network Interface (PNNI) [26] routing
   protocol for routing ATM VCs on a path that can support their
   needs.  The work in this area is just starting and there are
   numerous issues to consider.  [23], as part of the work of the QoSR
   working group, frames the issues for QoS Routing in the Internet.

2.2 Reliance on Unicast and Multicast Routing

   RSVP was designed to support both unicast and IP multicast
   applications.  This means that RSVP needs to work closely with
   multicast and unicast routing.  Unicast routing over ATM has been
   addressed in [10] and [11].  MARS [5] provides multicast address
   resolution for IP over ATM networks, an important part of the
   solution for multicast, but it still relies on multicast routing
   protocols to connect multicast senders and receivers on different
   subnets.

2.3 Aggregation of Flows

   Some of the scaling issues noted in previous sections can be
   addressed by aggregating several RSVP flows over a single VC if the
   destinations of the VC match for all the flows being aggregated.
   However, this causes considerable complexity in the management of
   VCs and in the scheduling of packets within each VC at the root
   point of the VC.  Note that the rescheduling of flows within a VC
   is not possible in the switches in the core of the ATM network.
   Virtual Paths (VPs) can be used for aggregating multiple VCs.  This
   topic is discussed in greater detail as it applies to multicast
   data distribution in section 4.2.3.4.

2.4 Mapping QoS Parameters

   The mapping of QoS parameters from the IntServ models to the ATM
   service classes is an important issue in making RSVP and IntServ
   work over ATM.  [14] addresses these issues very completely for the
   Controlled Load and Guaranteed Service models.  An additional issue
   is that while some guidelines can be developed for mapping the
   parameters of a given service model to the traffic descriptors of
   an ATM traffic class, implementation variables, policy, and cost
   factors can make strict mapping problematic.  So, a set of workable
   mappings that can be applied to different network requirements and
   scenarios is needed, as long as the mappings can satisfy the needs
   of the service model(s).
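
   Purely as an illustrative aid, and not as the normative mapping
   (which is defined in [14] and may be overridden by the
   implementation variables, policy, and cost factors noted above),
   the following sketch shows one plausible way an implementation
   might table-drive the choice of ATM service category and estimate a
   Peak Cell Rate from an IntServ peak rate.  The candidate category
   lists and the cell arithmetic shown are assumptions of this sketch.

      # Illustrative sketch only; not the mapping defined in [14].
      import math

      # One plausible, non-normative preference order of ATM service
      # categories for each IntServ service model.
      CANDIDATE_CATEGORIES = {
          "guaranteed":      ["CBR", "rtVBR"],
          "controlled-load": ["nrtVBR", "ABR", "UBR"],
      }

      ATM_CELL_PAYLOAD = 48     # payload bytes per 53-byte cell

      def peak_cell_rate(peak_rate_bytes_per_sec, max_packet_size):
          """Crude PCR estimate: convert an IntServ peak rate
          (bytes/s) into cells/s, padding each packet to a whole
          number of cells.  Ignores AAL5 trailer and SAR details,
          which a real mapping must account for."""
          cells_per_packet = math.ceil(max_packet_size /
                                       ATM_CELL_PAYLOAD)
          packets_per_sec = peak_rate_bytes_per_sec / max_packet_size
          return math.ceil(packets_per_sec * cells_per_packet)

      def choose_category(service, available_categories):
          """Pick the first preferred category the network offers."""
          for category in CANDIDATE_CATEGORIES[service]:
              if category in available_categories:
                  return category
          return None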

2.5 Directly Connected ATM Hosts

   It is obvious that the needs of hosts that are directly connected
   to ATM networks must be considered for RSVP and IntServ over ATM.
   Functionality for RSVP over ATM must not assume that an ATM host
   has all the functionality of a router, but such things as MARS and
   NHRP clients would be worthwhile features.  A host must manage VCs
   just like any other ATM sender or receiver, as described later in
   section 4.

2.6 Accounting and Policy Issues

   Since RSVP and IntServ create classes of preferential service, some
   form of administrative control and/or cost allocation is needed to
   control access.  There are certain types of policies specific to
   ATM and IP over ATM that need to be studied to determine how they
   interoperate with the IP and IntServ policies being developed.  A
   typical IP policy would be that only certain users are allowed to
   make reservations.  This policy would translate well to IP over ATM
   due to the similarity to the mechanisms used for Call Admission
   Control (CAC).

   There may be a need for policies specific to IP over ATM.  For
   example, since signalling costs in ATM are high relative to IP, an
   IP over ATM specific policy might restrict the ability to change
   the prevailing QoS in a VC.  If VCs are relatively scarce, there
   also might be specific accounting costs in creating a new VC.  The
   work so far has been preliminary, and much work remains to be done.
   The policy mechanisms outlined in [12] and [13] provide the basic
   mechanisms for implementing policies for RSVP and IntServ over any
   media, not just ATM.

3. Framework for IntServ and RSVP over ATM

   Now that we have defined some of the issues for IntServ and RSVP
   over ATM, we can formulate a framework for solutions.  The problem
   breaks down into two very distinct areas: the mapping of IntServ
   models to ATM service categories and QoS parameters, and the
   operation of RSVP over ATM.

   Mapping IntServ models to ATM service categories and QoS parameters
   is a matter of determining which categories can support the goals
   of the service models and matching up the parameters and variables
   between the IntServ description and the ATM description(s).  Since
   ATM has such a wide variety of service categories and parameters,
   more than one ATM service category should be able to support each
   of the two IntServ models.  This will provide a good bit of
   flexibility in configuration and deployment.  [14] examines this
   topic completely.

   The operation of RSVP over ATM requires careful management of VCs
   in order to match the dynamics of the RSVP protocol.  VCs need to
   be managed for both the RSVP QoS data and the RSVP signalling
   messages.  The remainder of this document will discuss several
   approaches to managing VCs for RSVP, and [15] and [16] discuss
   their application for implementations in terms of interoperability
   requirements and implementation guidelines.

4. RSVP VC Management

   This section provides more detail on the issues related to the
   management of SVCs for RSVP and IntServ.

4.1 VC Initiation

   As discussed in section 2.1.1.2, there is an apparent mismatch
   between RSVP and ATM.  Specifically, RSVP control is receiver
   oriented and ATM control is sender oriented.  This initially may
   seem like a major issue, but really is not.  While RSVP reservation
   (RESV) requests are generated at the receiver, actual allocation of
   resources takes place at the subnet sender.  For data flows, this
   means that subnet senders will establish all QoS VCs and the subnet
   receiver must be able to accept incoming QoS VCs, as illustrated in
   Figure 1.  These restrictions are consistent with RSVP version 1
   processing rules and allow senders to use different flow to VC
   mappings and even different QoS renegotiation techniques without
   interoperability problems.

   The use of the reverse path provided by point-to-point VCs by
   receivers is for further study.  There are two related issues.  The
   first is that use of the reverse path requires the VC initiator to
   set appropriate reverse path QoS parameters.  The second issue is
   that reverse paths are not available with point-to-multipoint VCs,
   so reverse paths could only be used to support unicast RSVP
   reservations.

4.2 Data VC Management

   Any RSVP over ATM implementation must map RSVP and RSVP associated
   data flows to ATM Virtual Circuits (VCs).  LAN Emulation [17],
   Classical IP [10] and, more recently, NHRP [4] discuss mapping IP
   traffic onto ATM SVCs, but they only cover a single QoS class,
   i.e., best effort traffic.  When QoS is introduced, VC mapping must
   be revisited.  For RSVP controlled QoS flows, one issue is which
   VCs to use for QoS data flows.

   In the Classical IP over ATM and current NHRP models, a single
   point-to-point VC is used for all traffic between two ATM attached
   hosts (routers and end-stations).  It is likely that such a single
   VC will not be adequate or optimal when supporting data flows with
   multiple QoS types.  RSVP's basic purpose is to install support for
   flows with multiple QoS types, so it is essential for any RSVP over
   ATM solution to address VC usage for QoS data flows, as shown in
   Figure 1.

   RSVP reservation styles must also be taken into account in any VC
   usage strategy.

   This section describes issues and methods for management of VCs
   associated with QoS data flows.  When establishing and maintaining
   VCs, the subnet sender will need to deal with several complicating
   factors including multiple QoS reservations, requests for QoS
   changes, ATM short-cuts, and several multicast specific issues.
   The multicast specific issues result from the nature of ATM
   connections.  The key multicast related issues are heterogeneity,
   data distribution, receiver transitions, and end-point
   identification.
The aggregation model can be used with point-to-point and point-to-
multipoint VCs. The problem with the aggregation model is that the
choice of what QoS to use for the VCs may be difficult without
knowledge of the likely reservation types and sizes, but the choice
is made easier by the fact that the VCs can be changed as needed.

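As a rough illustration of the two mapping strategies, the following
sketch (Python, purely expository) contrasts per-reservation VCs with
an aggregate VC. The open_vc() call and the QoS strings are
hypothetical placeholders for ATM signalling, not part of any defined
API.

   # Illustrative sketch only: open_vc() stands in for ATM signalling.

   class PerReservationMapper:
       """Each RSVP reservation gets its own dedicated VC."""
       def __init__(self, open_vc):
           self.open_vc = open_vc
           self.vcs = {}                      # reservation id -> VC handle

       def reserve(self, rsvp_id, qos):
           # Every reservation triggers its own VC setup: simple to
           # implement, but more VCs and more setup latency.
           self.vcs[rsvp_id] = self.open_vc(qos)
           return self.vcs[rsvp_id]

   class AggregateMapper:
       """Multiple RSVP reservations share one pre-established large VC."""
       def __init__(self, open_vc, aggregate_qos):
           # Sizing this VC is the hard part noted in the text; it can
           # be replaced later as reservations come and go.
           self.vc = open_vc(aggregate_qos)

       def reserve(self, rsvp_id, qos):
           # No per-reservation signalling latency; the shared VC is reused.
           return self.vc

   if __name__ == "__main__":
       opened = []
       def open_vc(qos):
           opened.append(qos)
           return "vc-%d" % len(opened)

       per_flow = PerReservationMapper(open_vc)
       shared = AggregateMapper(open_vc, aggregate_qos="10 Mb/s aggregate")
       per_flow.reserve("sess-1", "1 Mb/s")
       per_flow.reserve("sess-2", "2 Mb/s")
       shared.reserve("sess-3", "1 Mb/s")
       print(len(opened), "VCs opened")       # prints: 3 VCs opened
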
4.2.2 Unicast Data VC Management

Unicast data VC management is much simpler than multicast data VC
management, but there are still some similar issues. If one considers
unicast to be a degenerate case of multicast, then implementing the
multicast solutions will cover unicast. However, some may want to
consider unicast-only implementations. In these situations, the
choice between using a single flow per VC and aggregating flows onto
a single VC remains, but the problem of heterogeneity discussed in
the following section is removed.

4.2.3 Multicast Heterogeneity

As mentioned in section 2.1.3.1 and shown in Figure 2, multicast
heterogeneity occurs when receivers request different qualities of
service within a single session. This means that the amount of
requested resources differs on a per-next-hop basis. A related type
of heterogeneity occurs due to best-effort receivers. In any IP
multicast group, it is possible that some receivers will request QoS
(via RSVP) and some receivers will not. In shared-media networks such
as Ethernet, receivers that have not requested resources can
typically be given service identical to those that have, without
complications. This is not the case with ATM. In ATM networks, any
additional end-points of a VC must be explicitly added. There may be
costs associated with adding a best-effort receiver, and there might
not be adequate resources. An RSVP over ATM solution will need to
support heterogeneous receivers even though ATM does not currently
provide such support directly.

RSVP heterogeneity is supported over ATM in the way RSVP reservations
are mapped into ATM VCs. There are four alternative approaches to
this mapping, each a different model for supporting RSVP
heterogeneity over ATM. Section 4.2.3.1 examines the multiple-VCs-
per-RSVP-reservation (or "full heterogeneity") model, where a single
reservation can be forwarded onto several VCs, each with a different
QoS. Section 4.2.3.2 presents the limited heterogeneity model, where
exactly one QoS VC is used along with a best effort VC. Section
4.2.3.3 examines the VC-per-RSVP-reservation (or homogeneous) model,
where each RSVP reservation is mapped to a single ATM VC. Section
4.2.3.4 describes the aggregation model, which allows aggregation of
multiple RSVP reservations into a single VC.

4.2.3.1 Full Heterogeneity Model

RSVP supports heterogeneous QoS, meaning that different receivers of
the same multicast group can request a different QoS. Importantly,
some receivers might have no reservation at all and want to receive
the traffic on a best effort basis. The IP model allows receivers to
join a multicast group at any time on a best effort basis, and it is
important that ATM, as part of the Internet, continue to provide this
service. We define the "full heterogeneity" model as providing a
separate VC for each distinct QoS for a multicast session, including
best effort and one or more qualities of service.

Note that while full heterogeneity gives users exactly what they
request, it requires more network resources than the other possible
approaches. The exact amount of bandwidth used for duplicate traffic
depends on the network topology and group membership.

4.2.3.2 Limited Heterogeneity Model

We define the "limited heterogeneity" model as the case where the RSVP supports heterogeneous QoS, meaning that different receivers of
receivers of a multicast session are limited to use either best effort the same multicast group can request a different QoS. But
service or a single alternate quality of service. The alternate QoS importantly, some receivers might have no reservation at all and want
can be chosen either by higher level protocols or by dynamic to receive the traffic on a best effort service basis. The IP model
renegotiation of QoS as described below. allows receivers to join a multicast group at any time on a best
effort basis, and it is important that ATM as part of the Internet
continue to provide this service. We define the "full heterogeneity"
model as providing a separate VC for each distinct QoS for a
multicast session including best effort and one or more qualities of
service.
In order to support limited heterogeneity, each ATM edge device
participating in a session would need at most two VCs. One VC would
be a point-to-multipoint best effort service VC and would serve all
best effort service IP destinations for this RSVP session. The other
VC would be a point-to-multipoint VC with QoS and would serve all IP
destinations for this RSVP session that have an RSVP reservation
established.

As with full heterogeneity, a disadvantage of the limited
heterogeneity scheme is that each packet will need to be duplicated
at the network layer and one copy sent into each of the two VCs.
Again, the exact amount of excess traffic will depend on the network
topology and group membership. If any of the existing QoS VC end-
points cannot upgrade to the new QoS, then the new reservation fails
even though the resources exist for the new receiver.

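For illustration only, the behavior of an ATM edge device under
limited heterogeneity can be sketched as follows (Python,
expository). The StubVC class and its add_party()/send() operations
are assumed stand-ins for real ATM signalling and transmission, not a
defined interface.

   class StubVC:
       """Placeholder for a point-to-multipoint VC."""
       def __init__(self, label):
           self.label, self.parties = label, []
       def add_party(self, receiver):
           self.parties.append(receiver)
       def send(self, packet):
           return "%s sent on %s to %d parties" % (
               packet, self.label, len(self.parties))

   class LimitedHeterogeneitySession:
       """At most two VCs per session: one QoS VC, one best effort VC."""
       def __init__(self, qos_vc, best_effort_vc):
           self.qos_vc = qos_vc
           self.best_effort_vc = best_effort_vc

       def add_receiver(self, receiver, has_reservation):
           vc = self.qos_vc if has_reservation else self.best_effort_vc
           vc.add_party(receiver)

       def forward(self, packet):
           # Each packet is duplicated at the network layer, one copy
           # per VC -- the excess traffic noted above.
           return [self.qos_vc.send(packet),
                   self.best_effort_vc.send(packet)]

   session = LimitedHeterogeneitySession(StubVC("qos"), StubVC("best effort"))
   session.add_receiver("rcvr-1", has_reservation=True)
   session.add_receiver("rcvr-2", has_reservation=False)
   print(session.forward("pkt"))
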
4.2.3.3 Homogeneous and Modified Homogeneous Models

We define the "homogeneous" model as the case where all receivers of a The other VC would be a point to multipoint VC with QoS and would
multicast session use a single quality of service VC. Best-effort serve all IP destinations for this RSVP session that have an RSVP
receivers also use the single RSVP triggered QoS VC. The single VC can reservation established.
be a point-to-point or point-to-multipoint as appropriate. The QoS VC
is sized to provide the maximum resources requested by all RSVP next-
hops.
This model matches the way the current RSVP specification addresses
heterogeneous requests. The current processing rules and traffic
control interface describe a model where the largest requested
reservation for a specific outgoing interface is used in resource
allocation, and traffic is transmitted at the higher rate to all
next-hops. This approach would be the simplest method for RSVP over
ATM implementations.

While this approach is simple to implement, providing better than
best-effort service may actually be the opposite of what the user
desires. There may be charges incurred or resources wrongfully
allocated. There are two specific problems. The first problem is that
a user making a small or no reservation would share the QoS VC's
resources without making (and perhaps paying for) an RSVP
reservation. The second problem is that a receiver may not receive
any data. This may occur when there are insufficient resources to add
a receiver. The rejected user would not be added to the single VC and
would not even receive traffic on a best effort basis.

Not sending data traffic to best-effort receivers because of another
receiver's RSVP request is clearly unacceptable. The previously
described limited heterogeneity model ensures that data is always
sent to both QoS and best-effort receivers, but it does so by
requiring replication of data at the sender in all cases. It is
possible to extend the homogeneous model to ensure that data is
always sent to best-effort receivers while also avoiding replication
in the normal case. This extension adds special handling for the case
where a best-effort receiver cannot be added to the QoS VC. In this
case, a best effort VC can be established to any receivers that could
not be added to the QoS VC. Only in this special error case would
senders be required to replicate data. We define this approach as the
"modified homogeneous" model.

4.2.3.4 Aggregation

The last scheme is the multiple RSVP reservations per VC (or
aggregation) model. With this model, large VCs could be set up
between IP routers and hosts in an ATM network. These VCs could be
managed much like IP Integrated Services (IIS) point-to-point links
(e.g. T-1, DS-3) are managed now. Traffic from multiple sources over
multiple RSVP sessions might be multiplexed on the same VC. As noted
in section 4.2.1, this approach has a number of advantages: there is
typically no signalling latency, since the VCs would already be in
existence when the traffic starts flowing, and both the heterogeneity
problem and the dynamic QoS problem over ATM are reduced to solved
problems. This approach can be used with point-to-point and point-to-
multipoint VCs. The problem with the aggregation approach is that the
choice of what QoS to use for which of the VCs is difficult, but it
is made easier if the VCs can be changed as needed.

4.2.4 Multicast End-Point Identification

Implementations must be able to identify the ATM end-points
participating in an IP multicast group. The ATM end-points will be IP
multicast receivers and/or next-hops. Both QoS and best-effort end-
points must be identified. RSVP next-hop information will provide QoS
end-points, but not best-effort end-points. Another issue is
identifying end-points of multicast traffic handled by non-RSVP-
capable next-hops. In this case a PATH message travels through a non-
RSVP egress router on the way to the next-hop RSVP node. When the
next-hop RSVP node sends a RESV message, it may arrive at the source
over a different route than the data is using. The source will get
the RESV message, but will not know which egress router needs the
QoS. For unicast sessions, there is no problem since the ATM end-
point will be the IP next-hop router. Unfortunately, multicast
routing may not be able to uniquely identify the IP next-hop router,
so it is possible that a multicast end-point cannot be identified.

In the most common case, MARS will be used to identify all end-points
of a multicast group. In the router-to-router case, a multicast
routing protocol may provide all next-hops for a particular multicast
group. In either case, RSVP over ATM implementations must obtain a
full list of end-points, both QoS and non-QoS, using the appropriate
mechanisms. The full list can be compared against the RSVP-identified
end-points to determine the list of best-effort receivers. There is
no straightforward solution to uniquely identifying end-points of
multicast traffic handled by non-RSVP next-hops. The preferred
solution is to use multicast routing protocols that support unique
end-point identification. In cases where such routing protocols are
unavailable, all IP routers that will be used to support RSVP over
ATM should support RSVP. To ensure proper behavior, implementations
should, by default, only establish RSVP-initiated VCs to RSVP-capable
end-points.

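The comparison described above is essentially a set difference. A
minimal sketch under stated assumptions (Python, expository; the ATM
address strings and the sources of the two lists, e.g. MARS
membership and RSVP next-hop state, are illustrative):

   def best_effort_end_points(group_members, rsvp_qos_next_hops):
       """End-points that must be reached best effort: the full
       membership list minus the RSVP-identified QoS end-points."""
       return set(group_members) - set(rsvp_qos_next_hops)

   # Example: MARS reports four leaves; RSVP knows reservations for two.
   members = {"atm-a", "atm-b", "atm-c", "atm-d"}
   qos_leaves = {"atm-a", "atm-c"}
   assert best_effort_end_points(members, qos_leaves) == {"atm-b", "atm-d"}
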
4.2.5 Multicast Data Distribution

Two models are planned for IP multicast data distribution over ATM.
In one model, senders establish point-to-multipoint VCs to all ATM-
attached destinations, and data is then sent over these VCs. This
model is often called "multicast mesh" or "VC mesh" mode
distribution. In the second model, senders send data over point-to-
point VCs to a central point, and the central point relays the data
onto point-to-multipoint VCs that have been established to all
receivers of the IP multicast group. This model is often referred to
as "multicast server" mode distribution. RSVP over ATM solutions must
ensure that IP multicast data is distributed with appropriate QoS.

In the Classical IP context, multicast server support is provided via
MARS [5]. MARS does not currently provide a way to communicate QoS
requirements to a MARS multicast server. Therefore, RSVP over ATM
implementations must, by default, support "mesh-mode" distribution
for RSVP controlled multicast flows. When using multicast servers
that do not support QoS requests, a sender must set the service, not
global, break bit(s).

4.2.6 Receiver Transitions

When setting up a point-to-multipoint VC for a multicast RSVP
session, there will be a time when some receivers have been added to
the QoS VC and some have not. During such transition times it is
possible to start sending data on the newly established VC. The issue
is when to start sending data on the new VC. If data is sent on both
the new VC and the old VC, then data will be delivered with the
proper QoS to some receivers and with the old QoS to all receivers.
This means the QoS receivers can get duplicate data. If data is sent
just on the new QoS VC, the receivers that have not yet been added
will lose information. So, the issue comes down to whether to send to
both the old and new VCs, or to send to just one of the VCs. In one
case duplicate information will be received; in the other, some
information may not be received.

This issue needs to be considered for three cases:

- When establishing the first QoS VC
- When establishing a VC to support a QoS change
- When adding a new end-point to an already established QoS VC

The first two cases are very similar. In both, it is possible to send
data on the partially completed new VC, and the issue of duplicate
versus lost information is the same. The last case is when an end-
point must be added to an existing QoS VC. In this case the end-point
must be both added to the QoS VC and dropped from a best-effort VC.
The issue is which to do first. If the add is requested first, then
the end-point may get duplicate information. If the drop is requested
first, then the end-point may lose information.

In order to ensure predictable behavior and delivery of data to all
receivers, data can only be sent on a new VC once all parties have
been added. This will ensure that all data is delivered only once to
all receivers. This approach does not quite apply to the last case.
In the last case, the add operation should be completed first, then
the drop operation. This means that receivers must be prepared to
receive some duplicate packets at times of QoS setup.

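The two rules above -- send on a new VC only after every party has
been added, and complete the add before the drop when moving a
receiver -- can be sketched as follows (Python, expository; open_vc(),
add_party(), and drop_party() are hypothetical stand-ins for ATM
signalling operations):

   def establish_qos_vc(open_vc, qos, receivers):
       """First two cases: no data is sent on the new VC until all
       parties have been added, so no receiver misses data."""
       vc = open_vc(qos)
       for receiver in receivers:
           vc.add_party(receiver)
       return vc                      # only now is it safe to send

   def move_receiver_to_qos_vc(receiver, qos_vc, best_effort_vc):
       """Third case: complete the add first, then the drop.  The
       receiver may briefly see duplicate packets but loses nothing."""
       qos_vc.add_party(receiver)
       best_effort_vc.drop_party(receiver)
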
4.2.7 Dynamic QoS

RSVP provides dynamic quality of service (QoS) in that the resources
that are requested may change at any time. There are several common
reasons for a change of reservation QoS:

1. An existing receiver can request a new larger (or smaller) QoS.

2. A sender may change its traffic specification (TSpec), which can
   trigger a change in the reservation requests of the receivers.

3. A new sender can start sending to a multicast group with a larger
   traffic specification than existing senders, triggering larger
   reservations.

4. A new receiver can make a reservation that is larger than existing
   reservations.

If the limited heterogeneity model is being used and the merge node
for the larger reservation is an ATM edge device, a new larger
reservation must be set up across the ATM network. Since ATM service,
as currently defined in UNI 3.x and UNI 4.0, does not allow
renegotiating the QoS of a VC, dynamically changing the reservation
means creating a new VC with the new QoS and tearing down the
established VC. Tearing down a VC and setting up a new VC in ATM are
complex operations that involve a non-trivial amount of processing
time and may have a substantial latency. There are several options
for dealing with this mismatch in service. A specific approach will
need to be a part of any RSVP over ATM solution.

The default method for supporting changes in RSVP reservations is to
attempt to replace an existing VC with a new, appropriately sized VC.
During setup of the replacement VC, the old VC must be left in place
unmodified. The old VC is left unmodified to minimize interruption of
QoS data delivery. Once the replacement VC is established, data
transmission is shifted to the new VC, and the old VC is then closed.
If setup of the replacement VC fails, then the old QoS VC should
continue to be used. When the new reservation is greater than the old
reservation, the reservation request should be answered with an
error. When the new reservation is less than the old reservation, the
request should be treated as if the modification was successful.
While leaving the larger allocation in place is suboptimal, it
maximizes delivery of service to the user. Implementations should
retry replacing the too-large VC after some appropriate elapsed time.

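The replacement procedure amounts to a make-before-break operation.
The sketch below (Python, expository) assumes a hypothetical
open_vc() that raises SetupFailure on admission failure, numeric
reservation sizes, and illustrative shift/close operations; it is not
a definitive implementation.

   class SetupFailure(Exception):
       """Raised when the replacement VC cannot be established."""

   def replace_qos_vc(old_vc, old_size, new_size, open_vc):
       """Attempt to replace old_vc with a VC sized for new_size.
       The old VC is left untouched until the replacement is up."""
       try:
           new_vc = open_vc(new_size)
       except SetupFailure:
           # Keep using the old VC.  A larger request is reported as a
           # reservation error; a smaller one is treated as successful
           # and the replacement retried later.
           status = "resv-error" if new_size > old_size else "ok, retry later"
           return old_vc, status
       new_vc.take_over_traffic_from(old_vc)   # shift data to the new VC
       old_vc.close()                          # then close the old one
       return new_vc, "ok"
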
One additional issue is that only one QoS change can be processed at
a time per reservation. If the (RSVP) requested QoS is changed while
the first replacement VC is still being set up, then the replacement
VC is released and the whole VC replacement process is restarted. To
limit the number of changes and to avoid excessive signalling load,
implementations may limit the number of changes that will be
processed in a given period. One implementation approach would have
each ATM edge device configured with a time parameter T (which can
change over time) that gives the minimum amount of time the edge
device will wait between successive changes of the QoS of a
particular VC. Thus, if the QoS of a VC is changed at time t, all
messages that would change the QoS of that VC that arrive before time
t+T would be queued. If several messages changing the QoS of a VC
arrive during the interval, redundant messages can be discarded. At
time t+T, the remaining change(s) of QoS, if any, can be executed.
This timer approach would apply more generally to any network
structure, and might be worthwhile to incorporate into RSVP.

The sequence of events for a single VC would be:

- Wait if timer is active
- Establish VC with new QoS
- Remap data traffic to new VC
- Tear down old VC
- Activate timer

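A minimal sketch of the timer mechanism and the sequence above
(Python, expository; apply_change() stands in for the VC replacement
step described earlier and times are plain seconds). Requests
arriving within T of the last change are queued, redundant requests
for the same VC overwrite one another, and the surviving request is
executed when the timer expires.

   class QosChangeLimiter:
       """At most one QoS change per VC every T seconds."""
       def __init__(self, T, apply_change):
           self.T = T
           self.apply_change = apply_change    # e.g. replace the VC
           self.last_change = {}               # vc id -> time of last change
           self.pending = {}                   # vc id -> latest queued request

       def request(self, vc_id, new_qos, now):
           if now - self.last_change.get(vc_id, float("-inf")) < self.T:
               self.pending[vc_id] = new_qos   # later requests overwrite
               return "queued"                 # earlier (redundant) ones
           self.apply_change(vc_id, new_qos)
           self.last_change[vc_id] = now
           return "applied"

       def timer_expired(self, vc_id, now):
           if vc_id in self.pending:
               self.apply_change(vc_id, self.pending.pop(vc_id))
               self.last_change[vc_id] = now
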
There is an interesting interaction between heterogeneous
reservations and dynamic QoS. In the case where a RESV message is
received from a new next-hop and the requested resources are larger
than any existing reservation, both dynamic QoS and heterogeneity
need to be addressed. A key issue is whether to first add the new
next-hop or to change to the new QoS. This is a fairly
straightforward special case. Since the older, smaller reservation
does not support the new next-hop, the dynamic QoS process should be
initiated first. Since the new QoS is only needed by the new next-
hop, it should be the first end-point of the new VC. This way
signalling is minimized when the setup to the new next-hop fails.

4.2.8 Short-Cuts

Short-cuts [4] allow ATM-attached routers and hosts to directly
establish point-to-point VCs across LIS boundaries, i.e., the VC end-
points are on different IP subnets. The ability for short-cuts and
RSVP to interoperate has been raised as a general question. An area
of concern is the ability to handle asymmetric short-cuts,
specifically how RSVP can handle the case where a downstream short-
cut may not have a matching upstream short-cut. In this case, PATH
and RESV messages follow different paths.

Examination of RSVP shows that the protocol already includes
mechanisms that will support short-cuts. The mechanism is the same
one used to support RESV messages arriving at the wrong router and
the wrong interface. The key aspect of this mechanism is that RSVP
only processes messages that arrive at the proper interface and
forwards messages that arrive on the wrong interface. The proper
interface is indicated in the NHOP object of the message. So,
existing RSVP mechanisms will support asymmetric short-cuts. The
short-cut model of VC establishment still poses several issues when
running with RSVP. The major issues are dealing with established
best-effort short-cuts, when to establish short-cuts, and QoS-only
short-cuts. These issues will need to be addressed by RSVP
implementations.

The key issue to be addressed by any RSVP over ATM solution is when
to establish a short-cut for a QoS data flow. The default behavior is
to simply follow best-effort traffic. When a short-cut has been
established for best-effort traffic to a destination or next-hop,
that same end-point should be used when setting up RSVP-triggered VCs
for QoS traffic to the same destination or next-hop. This will happen
naturally when PATH messages are forwarded over the best-effort
short-cut. Note that with this approach, if best-effort short-cuts
are never established, RSVP-triggered QoS short-cuts will also never
be established. More study is expected in this area.

4.2.9 VC Teardown

RSVP can identify, from either explicit messages or timeouts, when a
data VC is no longer needed. Therefore, data VCs set up to support
RSVP-controlled flows should only be released at the direction of
RSVP. VCs must not be timed out due to inactivity by either the VC
initiator or the VC receiver. This conflicts with the VC timeout
behavior described in RFC 1755 [11], section 3.4 on VC Teardown. RFC
1755 recommends tearing down a VC that is inactive for a certain
length of time; twenty minutes is recommended. This timeout is
typically implemented at both the VC initiator and the VC receiver.
However, section 3.1 of the update to RFC 1755 [11] states that
inactivity timers must not be used at the VC receiver.

When this timeout occurs for an RSVP-initiated VC, a valid VC with
QoS will be torn down unexpectedly. While this behavior is acceptable
for best-effort traffic, it is important that RSVP-controlled VCs not
be torn down. If there is no choice about the VC being torn down, the
RSVP daemon must be notified so that a reservation failure message
can be sent.

For VCs initiated at the request of RSVP, the configurable inactivity
timer mentioned in [11] must be set to "infinite". Setting the
inactivity timer value at the VC initiator should not be problematic,
since the proper value can be relayed internally at the originator.
Setting the inactivity timer at the VC receiver is more difficult and
would require some mechanism to signal that an incoming VC was RSVP-
initiated. To avoid this complexity and to conform to [11],
implementations must not use an inactivity timer to clear received
connections.

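The teardown rules above reduce to a small guard at the ATM
interface. A sketch under stated assumptions (Python, expository; the
VC record fields and the notify_rsvp callback are illustrative, not a
defined interface):

   def may_clear_on_inactivity(vc):
       """RSVP-initiated VCs are exempt from inactivity teardown; their
       inactivity timer is effectively infinite."""
       return not vc.get("rsvp_initiated", False)

   def handle_forced_teardown(vc, notify_rsvp):
       """If the VC is torn down anyway, tell the RSVP daemon so it can
       send a reservation failure message."""
       if vc.get("rsvp_initiated", False):
           notify_rsvp(vc["id"], "reservation failure")
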
4.3 RSVP Control Management

One last important issue is providing a data path for the RSVP
messages themselves. There are two main types of messages in RSVP,
PATH and RESV. PATH messages are sent to unicast or multicast
addresses, while RESV messages are sent only to unicast addresses.
Other RSVP messages are handled similarly to either PATH or RESV,
although this may be more complicated for RERR messages. So, ATM VCs
used for RSVP signalling messages need to provide both unicast and
multicast functionality. There are several different approaches for
how to assign VCs to use for RSVP signalling messages.

The main approaches are:

- use same VC as data
- single VC per session
- single point-to-multipoint VC multiplexed among sessions
- multiple point-to-point VCs multiplexed among sessions

There are several different issues that affect the choice of how to
assign VCs for RSVP signalling. One issue is the number of additional
VCs needed for RSVP signalling. Related to this issue is the degree
of multiplexing on the RSVP VCs; in general, more multiplexing means
fewer VCs. An additional issue is the latency in dynamically setting
up new RSVP signalling VCs. A final issue is the complexity of
implementation. The remainder of this section discusses the issues
and tradeoffs among these different approaches and suggests
guidelines for when to use which alternative.

4.3.1 Mixed data and control traffic

In this scheme, RSVP signalling messages are sent on the same VCs as
the data traffic. The main advantage of this scheme is that no
additional VCs are needed beyond what is needed for the data traffic.
An additional advantage is that there is no ATM signalling latency
for PATH messages (which follow the same routing as the data
messages). However, there can be a major problem when data traffic on
a VC is nonconforming. With nonconforming traffic, RSVP signalling
messages may be dropped. While RSVP is resilient to a moderate level
of dropped messages, excessive drops would lead to repeated tearing
down and re-establishing of QoS VCs, a very undesirable behavior for
ATM. Due to these problems, this may not be a good choice for
carrying RSVP signalling messages, even though the number of VCs
needed for this scheme is minimized. One variation of this scheme is
to use the best effort data path for signalling traffic. In this
variation, there is no issue with nonconforming traffic, but there is
an issue with congestion in the ATM network. RSVP provides some
resiliency to message loss due to congestion, but RSVP control
messages should be offered a preferred class of service. A related
variation of this scheme that is promising but requires further study
is to have a packet scheduling algorithm (before entering the ATM
network) that gives priority to the RSVP signalling traffic. This can
be difficult to do at the IP layer.

4.3.1.1 Single RSVP VC per RSVP Reservation

In this scheme, there is a parallel RSVP signalling VC for each RSVP
reservation. This scheme results in twice the number of VCs, but
means that RSVP signalling messages have the advantage of a separate
VC. This separate VC means that RSVP signalling messages have their
own traffic contract, and compliant signalling messages are not
subject to dropping due to other noncompliant traffic (as can happen
with the scheme in section 4.3.1). The advantage of this scheme is
its simplicity: whenever a data VC is created, a separate RSVP
signalling VC is created. The disadvantage of the extra VC is that
extra ATM signalling needs to be done. Additionally, this scheme
requires twice the minimum number of VCs and adds latency, but it is
quite simple.

4.3.1.2 Multiplexed point-to-multipoint RSVP VCs

In this scheme, there is a single point-to-multipoint RSVP signalling
VC for each unique ingress router and unique set of egress routers.
This scheme allows multiplexing of RSVP signalling traffic that
shares the same ingress router and the same egress routers. This can
save on the number of VCs, by multiplexing, but there are problems
when the destinations of the multiplexed point-to-multipoint VCs
change. Several alternatives exist in these cases, each with
applicability in different situations. First, when the egress routers
change, the ingress router can check whether it already has a point-
to-multipoint RSVP signalling VC for the new list of egress routers.
If the RSVP signalling VC already exists, then the RSVP signalling
traffic can be switched to this existing VC. If no such VC exists,
one approach would be to create a new VC with the new list of egress
routers. Other approaches include modifying the existing VC to add an
egress router, or using a separate new VC for the new egress routers.
When a destination drops out of a group, an alternative would be to
keep sending to the existing VC even though some traffic is wasted.
The number of VCs used in this scheme is a function of traffic
patterns across the ATM network, but is always less than the number
used with the single RSVP VC per data VC scheme. In addition,
existing best effort data VCs could be used for RSVP signalling.
Reusing best effort VCs saves on the number of VCs at the cost of a
higher probability of RSVP signalling packet loss. One possible place
where this scheme will work well is in the core of the network, where
there is the most opportunity to take advantage of the savings due to
multiplexing. The exact savings depend on the patterns of traffic and
the topology of the ATM network.

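The reuse check described above -- whether a point-to-multipoint
signalling VC already exists for exactly this set of egress routers
-- can be sketched with a table keyed by the egress set (Python,
expository; open_p2mp_vc() is a hypothetical setup call, and only the
"create a new VC" alternative is shown):

   class SignallingVcPool:
       """One point-to-multipoint RSVP signalling VC per unique set of
       egress routers (at a given ingress router)."""
       def __init__(self, open_p2mp_vc):
           self.open_p2mp_vc = open_p2mp_vc
           self.by_egress_set = {}             # frozenset(egress) -> VC

       def vc_for(self, egress_routers):
           key = frozenset(egress_routers)
           if key not in self.by_egress_set:
               # No match: here a new VC is created; other alternatives
               # are to modify an existing VC or to add a separate VC
               # for just the new egress routers.
               self.by_egress_set[key] = self.open_p2mp_vc(sorted(key))
           return self.by_egress_set[key]
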
4.3.1.3 Multiplexed point-to-point RSVP VCs

In this scheme, multiple point-to-point RSVP signalling VCs are used
for a single point-to-multipoint data VC. This scheme allows
multiplexing of RSVP signalling traffic but requires the same traffic
to be sent on each of several VCs. This scheme is quite flexible and
allows a large amount of multiplexing.

Since point-to-point VCs can set up a reverse channel at the same
time as the forward channel, this scheme could save substantially on
signalling cost. In addition, signalling traffic could share existing
best effort VCs. Sharing existing best effort VCs reduces the total
number of VCs needed, but might cause signalling traffic drops if
there is congestion in the ATM network. This point-to-point scheme
would work well in the core of the network, where there is much
opportunity for multiplexing. Also, in the core of the network, RSVP
VCs can stay permanently established, either as Permanent Virtual
Circuits (PVCs) or as long-lived Switched Virtual Circuits (SVCs).
The number of VCs in this scheme will depend on traffic patterns, but
in the core of a network would be approximately n(n-1)/2, where n is
the number of IP nodes in the network. In the core of the network,
this will typically be small compared to the total number of VCs.

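For example, a network core containing n = 10 such IP nodes would
need on the order of 10 x 9 / 2 = 45 long-lived point-to-point
signalling VCs.
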
4.3.2 QoS for RSVP VCs

There is an issue of what QoS, if any, to assign to the RSVP signalling In this scheme, there is a single point-to-multipoint RSVP signalling
VCs. For other RSVP VC schemes, a QoS (possibly best effort) will be VC for each unique ingress router and unique set of egress routers.
needed. What QoS to use partially depends on the expected level of This scheme allows multiplexing of RSVP signalling traffic that
multiplexing that is being done on the VCs, and the expected shares the same ingress router and the same egress routers. This can
reliability of best effort VCs. Since RSVP signalling is infrequent save on the number of VCs, by multiplexing, but there are problems
(typically every 30 seconds), only a relatively small QoS should be when the destinations of the multiplexed point-to-multipoint VCs are
needed. This is important since using a larger QoS risks the VC setup changing. Several alternatives exist in these cases, that have
being rejected for lack of resources. Falling back to best effort when applicability in different situations. First, when the egress routers
a QoS call is rejected is possible, but if the ATM net is congested, change, the ingress router can check if it already has a point-to-
there will likely be problems with RSVP packet loss on the best effort multipoint RSVP signalling VC for the new list of egress routers. If
VC also. Additional experimentation is needed in this area. the RSVP signalling VC already exists, then the RSVP signalling
traffic can be switched to this existing VC. If no such VC exists,
one approach would be to create a new VC with the new list of egress
routers. Other approaches include modifying the existing VC to add an
egress router or using a separate new VC for the new egress routers.
When a destination drops out of a group, an alternative would be to
keep sending to the existing VC even though some traffic is wasted.
The number of VCs used in this scheme is a function of traffic
patterns across the ATM network, but is always less than the number
used with the Single RSVP VC per data VC. In addition, existing best
effort data VCs could be used for RSVP signalling. Reusing best
effort VCs saves on the number of VCs at the cost of higher
probability of RSVP signalling packet loss. One possible place where
this scheme will work well is in the core of the network where there
is the most opportunity to take advantage of the savings due to
multiplexing. The exact savings depend on the patterns of traffic
and the topology of the ATM network.
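
As a non-normative illustration, the "switch to an existing VC if one
matches the new egress set, otherwise create one" alternative reduces
to a lookup keyed on the set of egress routers. In the Python sketch
below, setup_p2mp_vc() is an assumed placeholder call.

   # Sketch only: setup_p2mp_vc() is an assumed call on a hypothetical
   # ATM signalling interface.
   class MultiplexedP2MPSignalling:
       """One point-to-multipoint RSVP signalling VC per unique set of
       egress routers, shared by all sessions from this ingress router."""

       def __init__(self, atm):
           self.atm = atm
           self.vc_by_egress_set = {}  # frozenset of egress routers -> VC

       def signalling_vc_for(self, egress_routers):
           key = frozenset(egress_routers)
           vc = self.vc_by_egress_set.get(key)
           if vc is None:
               # No VC matches the new egress set; one alternative is to
               # create a new point-to-multipoint VC for it.
               vc = self.atm.setup_p2mp_vc(list(key))
               self.vc_by_egress_set[key] = vc
           return vc
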
4.3.1.3 Multiplexed point-to-point RSVP VCs

In this scheme, multiple point-to-point RSVP signalling VCs are used
for a single point-to-multipoint data VC. This scheme allows
multiplexing of RSVP signalling traffic but requires the same traffic
to be sent on each of several VCs. It is quite flexible and allows a
large amount of multiplexing.

Since point-to-point VCs can set up a reverse channel at the same
time as setting up the forward channel, this scheme could save
substantially on signalling cost. In addition, signalling traffic
could share existing best effort VCs. Sharing existing best effort
VCs reduces the total number of VCs needed, but might cause
signalling traffic drops if there is congestion in the ATM network.
This point-to-point scheme would work well in the core of the network
where there is much opportunity for multiplexing. Also in the core of
the network, RSVP VCs can stay permanently established either as
Permanent Virtual Circuits (PVCs) or as long lived Switched Virtual
Circuits (SVCs). The number of VCs in this scheme will depend on
traffic patterns, but in the core of a network would be approximately
n(n-1)/2 where n is the number of IP nodes in the network. In the
core of the network, this will typically be small compared to the
total number of VCs.
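
As a rough illustration of this figure, a full mesh of long-lived
point-to-point signalling VCs among the n IP nodes attached to the
ATM core needs one VC per unordered pair of nodes:

   # One point-to-point signalling VC per unordered pair of IP nodes.
   def core_signalling_vcs(n):
       return n * (n - 1) // 2

   # For example: 10 core routers -> 45 VCs, 50 core routers -> 1225 VCs.
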
4.3.2 QoS for RSVP VCs

There is an issue of what QoS, if any, to assign to the RSVP
signalling VCs. For other RSVP VC schemes, a QoS (possibly best
effort) will be needed. What QoS to use partially depends on the
expected level of multiplexing that is being done on the VCs, and the
expected reliability of best effort VCs. Since RSVP signalling is
infrequent (typically every 30 seconds), only a relatively small QoS
should be needed. This is important since using a larger QoS risks
the VC setup being rejected for lack of resources. Falling back to
best effort when a QoS call is rejected is possible, but if the ATM
net is congested, there will likely be problems with RSVP packet loss
on the best effort VC also. Additional experimentation is needed in
this area.
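
The "request a small QoS, fall back to best effort on rejection"
behaviour might look like the Python sketch below; the signalling
interface and the traffic contract values are hypothetical
placeholders, since suitable values are exactly what still needs
experimentation.

   # Sketch only: setup_vc() and CallRejected belong to a hypothetical
   # ATM signalling interface; the contract values are placeholders.
   def open_rsvp_signalling_vc(atm, dest):
       # RSVP refreshes are infrequent (typically every 30 seconds), so
       # only a small reservation should be needed.
       small_qos_contract = {"service": "qos", "pcr_kbps": 64}
       try:
           return atm.setup_vc(dest, small_qos_contract)
       except atm.CallRejected:
           # Fall back to best effort; note that a congested ATM network
           # may then also drop RSVP messages on this VC.
           return atm.setup_vc(dest, {"service": "best-effort"})
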
5. Encapsulation

Since RSVP is a signalling protocol used to control flows of IP data
packets, encapsulation for both RSVP packets and associated IP data
packets must be defined. The methods for transmitting IP packets over
ATM (Classical IP over ATM[10], LANE[17], and MPOA[18]) are all based
on the encapsulations defined in RFC1483 [19]. RFC1483 specifies two
encapsulations, LLC Encapsulation and VC-based multiplexing. The
former allows multiple protocols to be encapsulated over the same VC
and the latter requires different VCs for different protocols.
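
For a routed IPv4 PDU, LLC Encapsulation prefixes the AAL5 payload
with an LLC/SNAP header identifying the carried protocol, while
VC-based multiplexing carries the packet directly and lets the VC
itself imply the protocol. The sketch below is illustrative only; see
RFC1483 [19] for the normative encodings.

   # Illustrative framing of a routed IPv4 PDU in an AAL5 payload.
   LLC_SNAP_IPV4 = bytes([
       0xAA, 0xAA, 0x03,   # LLC header indicating a SNAP header follows
       0x00, 0x00, 0x00,   # SNAP OUI 0x000000: EtherType follows
       0x08, 0x00,         # EtherType 0x0800: IPv4
   ])

   def aal5_payload(ip_packet, llc_encapsulation):
       if llc_encapsulation:
           # LLC Encapsulation: several protocols may share one VC.
           return LLC_SNAP_IPV4 + ip_packet
       # VC-based multiplexing: the VC itself implies the protocol.
       return ip_packet
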
For the purposes of RSVP over ATM, any encapsulation can be used as
long as the VCs are managed in accordance with the methods outlined
in Section 4. Obviously, running multiple protocol data streams over
the same VC with LLC encapsulation can cause the same problems as
running multiple flows over the same VC.

While none of the transmission methods directly address the issue of
QoS, RFC1755 [11] does suggest some common values for VC setup for
best-effort traffic. [14] discusses the relationship of the RFC1755
setup parameters and those needed to support IntServ flows in greater
detail.

6. Security Considerations

The same considerations stated in [1] and [11] apply to this
document. There are no additional security issues raised in this
document.

7. References

[1] Braden, R., Zhang, L., Berson, S., Herzog, S., and S. Jamin,
"Resource ReSerVation Protocol (RSVP) -- Version 1 Functional
Specification", RFC 2205, September 1997.
[2] Borden, M., Crawley, E., Davie, B., and S. Batsell, "Integration
of Real-time Services in an IP-ATM Network Architecture", RFC
1821, August 1995.
[3] Cole, R., Shur, D., and C. Villamizar, "IP over ATM: A Framework
Document", RFC 1932, April 1996.
[4] Luciani, J., Katz, D., Piscitello, D., Cole, B., and N.
Doraswamy, "NBMA Next Hop Resolution Protocol (NHRP)", RFC 2332,
April 1998.
[5] Armitage, G., "Support for Multicast over UNI 3.0/3.1 based ATM
Networks", RFC 2022, November 1996.
[6] Shenker, S., and C. Partridge, "Specification of Guaranteed
Quality of Service", RFC 2212, September 1997.
[7] Wroclawski, J., "Specification of the Controlled-Load Network
Element Service", RFC 2211, September 1997.
[8] ATM Forum. ATM User-Network Interface Specification Version 3.0.
Prentice Hall, September 1993.
[9] ATM Forum. ATM User Network Interface (UNI) Specification Version
3.1. Prentice Hall, June 1995.
[10] Laubach, M., "Classical IP and ARP over ATM", RFC 2225, April
1998.
[11] Perez, M., Mankin, A., Hoffman, E., Grossman, G., and A. Malis,
"ATM Signalling Support for IP over ATM", RFC 1755, February
1995.
[12] Herzog, S., "RSVP Extensions for Policy Control", Work in
Progress.
[13] Herzog, S., "Local Policy Modules (LPM): Policy Control for
RSVP", Work in Progress.
[14] Borden, M., and M. Garrett, "Interoperation of Controlled-Load
and Guaranteed Service with ATM", RFC 2381, August 1998.
[15] Berger, L., "RSVP over ATM Implementation Requirements", RFC
2380, August 1998.
[16] Berger, L., "RSVP over ATM Implementation Guidelines", RFC 2379,
August 1998.
[17] ATM Forum Technical Committee. LAN Emulation over ATM, Version
1.0 Specification, af-lane-0021.000, January 1995.
[18] ATM Forum Technical Committee. Baseline Text for MPOA, af-95-
0824r9, September 1996.
[19] Heinanen, J., "Multiprotocol Encapsulation over ATM Adaptation
Layer 5", RFC 1483, July 1993.
[20] ATM Forum Technical Committee. LAN Emulation over ATM Version 2
- LUNI Specification, December 1996.
[21] ATM Forum Technical Committee. Traffic Management Specification
v4.0, af-tm-0056.000, April 1996.
[22] Callon, R., et al., "A Framework for Multiprotocol Label
Switching", Work in Progress.
[23] Rajagopalan, B., Nair, R., Sandick, H., and E. Crawley, "A
Framework for QoS-based Routing in the Internet", RFC 2386,
August 1998.
[24] ITU-T. Digital Subscriber Signaling System No. 2-Connection
modification: Peak cell rate modification by the connection
owner, ITU-T Recommendation Q.2963.1, July 1996.
[25] ITU-T. Digital Subscriber Signaling System No. 2-Connection
characteristics negotiation during call/connection establishment
phase, ITU-T Recommendation Q.2962, July 1996.
[26] ATM Forum Technical Committee. Private Network-Network Interface
Specification v1.0 (PNNI), March 1996.
8. Authors' Addresses
Eric S. Crawley
Argon Networks
25 Porter Road
Littleton, MA 01460
Phone: +1 978 486-0665
EMail: esc@argon.com
Lou Berger
FORE Systems
6905 Rockledge Drive
Suite 800
Bethesda, MD 20817
Phone: +1 301 571-2534
EMail: lberger@fore.com
Steven Berson
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292
Phone: +1 310 822-1511
EMail: berson@isi.edu
Fred Baker
Cisco Systems
519 Lado Drive
Santa Barbara, California 93111
Phone: +1 805 681-0115
EMail: fred@cisco.com
Marty Borden
Bay Networks
125 Nagog Park
Acton, MA 01720
Phone: +1 978 266-1011
EMail: mborden@baynetworks.com
John J. Krawczyk
ArrowPoint Communications
235 Littleton Road
Westford, Massachusetts 01886
Phone: +1 978 692-5875
EMail: jj@arrowpoint.com
9. Full Copyright Statement
Copyright (C) The Internet Society (1998). All Rights Reserved.
This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph are
included on all such copies and derivative works. However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.
The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.