Network Working Group                                            L. Berc
Request for Comments: 2035                 Digital Equipment Corporation
Category: Standards Track                                      W. Fenner
                                                               Xerox PARC
                                                             R. Frederick
                                                               Xerox PARC
                                                              S. McCanne
                                              Lawrence Berkeley Laboratory
                                                             October 1996

             RTP Payload Format for JPEG-compressed Video

Status of this Memo

This document specifies an Internet standards track protocol for the
Internet community, and requests discussion and suggestions for
improvements.  Please refer to the current edition of the "Internet
Official Protocol Standards" (STD 1) for the standardization state
and status of this protocol.  Distribution of this memo is unlimited.

Abstract

This memo describes the RTP payload format for JPEG video streams.
The packet format is optimized for real-time video streams where
codec parameters change rarely from frame to frame.

This document is a product of the Audio-Video Transport working group
within the Internet Engineering Task Force.  Comments are solicited
and should be addressed to the working group's mailing list at
rem-conf@es.net and/or the author(s).

1. Introduction

The Joint Photographic Experts Group (JPEG) standard [1,2,3] defines
a family of compression algorithms for continuous-tone, still images.
This still image compression standard can be applied to video by
compressing each frame of video as an independent still image and
transmitting them in series.  Video coded in this fashion is often
called Motion-JPEG.

We first give an overview of JPEG and then describe the specific
subset of JPEG that is supported in RTP and the mechanism by which
JPEG frames are carried as RTP payloads.

The JPEG standard defines four modes of operation: the sequential DCT
mode, the progressive DCT mode, the lossless mode, and the
hierarchical mode.  Depending on the mode, the image is represented
in one or more passes.  Each pass (called a frame in the JPEG
standard) is further broken down into one or more scans.  Within each
scan, there are one to four components, which represent the three
components of a color signal (e.g., "red, green, and blue", or a
luminance signal and two chrominance signals).  These components can
be encoded as separate scans or interleaved into a single scan.

Each frame and scan is preceded with a header containing optional
definitions for compression parameters like quantization tables and
Huffman coding tables.  The headers and optional parameters are
identified with "markers" and comprise a marker segment; each scan
appears as an entropy-coded bit stream within two marker segments.
Markers are aligned to byte boundaries and (in general) cannot appear
in the entropy-coded segment, allowing scan boundaries to be
determined without parsing the bit stream.

Compressed data is represented in one of three formats: the
interchange format, the abbreviated format, or the
table-specification format.  The interchange format contains
definitions for all the tables used by the entropy-coded segments,
while the abbreviated format might omit some, assuming they were
defined out-of-band or by a "previous" image.

The JPEG standard does not define the meaning or format of the
components that comprise the image.  Attributes like the color space
and pixel aspect ratio must be specified out-of-band with respect to
the JPEG bit stream.  The JPEG File Interchange Format (JFIF) [4] is
a de facto standard that provides this extra information using an
application marker segment (APP0).  Note that a JFIF file is simply a
JPEG interchange format image along with the APP0 segment.  In the
case of video, additional parameters must be defined out-of-band
(e.g., frame rate, interlaced vs. non-interlaced, etc.).

While the JPEG standard provides a rich set of algorithms for
flexible compression, cost-effective hardware implementations of the
full standard have not appeared.  Instead, most hardware JPEG video
codecs implement only a subset of the sequential DCT mode of
operation.  Typically, marker segments are interpreted in software
(which "re-programs" the hardware) and the hardware is presented with
a single, interleaved entropy-coded scan represented in the YUV color
space.

2. JPEG Over RTP

To maximize interoperability among hardware-based codecs, we assume
the sequential DCT operating mode [1, Annex F] and restrict the set
of predefined RTP/JPEG "type codes" (defined below) to single-scan,
interleaved images.  While this is more restrictive than even
baseline JPEG, many hardware implementations fall short of the
baseline specification (e.g., most hardware cannot decode
non-interleaved scans).

In practice, most of the table-specification data rarely changes from
frame to frame within a single video stream.  Therefore, RTP/JPEG
data is represented in abbreviated format, with all of the tables
omitted from the bit stream.  Each image begins immediately with the
(single) entropy-coded scan.  The information that would otherwise be
in both the frame and scan headers is represented entirely within a
64-bit RTP/JPEG header (defined below) that lies between the RTP
header and the JPEG scan and is present in every packet.

While parameters like Huffman tables and color space are likely to
remain fixed for the lifetime of the video stream, other parameters
should be allowed to vary, notably the quantization tables and image
size (e.g., to implement rate-adaptive transmission or allow a user
to adjust the "quality level" or resolution manually).  Thus explicit
fields in the RTP/JPEG header are allocated to represent this
information.  Since only a small set of quantization tables are
typically used, we encode the entire set of quantization tables in a
small integer field.  The image width and height are encoded
explicitly.

Because JPEG frames are typically larger than the underlying
network's maximum packet size, frames must often be fragmented into
several packets.  One approach is to allow the network layer below
RTP (e.g., IP) to perform the fragmentation.  However, this precludes
rate-controlling the resulting packet stream or partial delivery in
the presence of loss.  For example, IP will not deliver a fragmented
datagram to the application if one or more fragments is lost, or IP
might fragment an 8000 byte frame into a burst of 8 back-to-back
packets.  Instead, RTP/JPEG defines a simple fragmentation and
reassembly scheme at the RTP level.

3. RTP/JPEG Packet Format

The RTP timestamp is in units of 90000 Hz.  The same timestamp must
appear across all fragments of a single frame.  The RTP marker bit is
set in the last packet of a frame.

3.1. JPEG header

A special header is added to each packet that immediately follows the
RTP header:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type specific |              Fragment Offset                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      Type     |       Q       |     Width     |     Height    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

3.1.1. Type specific: 8 bits

Interpretation depends on the value of the type field.

3.1.2. Fragment Offset: 24 bits

The Fragment Offset is the data offset in bytes of the current packet
in the JPEG scan.

3.1.3. Type: 8 bits

The type field specifies the information that would otherwise be
present in a JPEG abbreviated table-specification as well as the
additional JFIF-style parameters not defined by JPEG.  Types 0-127
are reserved as fixed, well-known mappings to be defined by this
document and future revisions of this document.  Types 128-255 are
free to be dynamically defined by a session setup protocol (which is
beyond the scope of this document).

3.1.4. Q: 8 bits

The Q field defines the quantization tables for this frame using an
algorithm that is determined by the Type field (see below).

3.1.5. Width: 8 bits

This field encodes the width of the image in 8-pixel multiples (e.g.,
a width of 40 denotes an image 320 pixels wide).

3.1.6. Height: 8 bits

This field encodes the height of the image in 8-pixel multiples
(e.g., a height of 30 denotes an image 240 pixels tall).

3.1.7. Data

The data following the RTP/JPEG header is an entropy-coded segment
consisting of a single scan.  The scan header is not present and is
inferred from the RTP/JPEG header.  The scan is terminated either
implicitly (i.e., the point at which the image is fully parsed), or
explicitly with an EOI marker.  The scan may be padded to arbitrary
length with undefined bytes.  (Existing hardware codecs generate
extra lines at the bottom of a video frame and removal of these lines
would require a Huffman-decoding pass over the data.)

As defined by JPEG, restart markers are the only type of marker that
may appear embedded in the entropy-coded segment.  The "type code"
determines whether a restart interval is defined, and therefore
whether restart markers may be present.  It also determines if the
restart intervals will be aligned with RTP packets, allowing for
partial decode of frames, thus increasing resilience to packet drop.
If restart markers are present, the 6-byte DRI (define restart
interval) marker segment [1, Sec. B.2.4.4] precedes the scan.

JPEG markers appear explicitly on byte-aligned boundaries beginning
with a 0xFF byte.  A "stuffed" 0x00 byte follows any 0xFF byte
generated by the entropy coder [1, Sec. B.1.1.5].
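
To make the layout in Section 3.1 concrete, the following minimal C
sketch (not part of the specification; the struct and function names
are ours) parses the 64-bit RTP/JPEG header from a buffer that points
just past the RTP header:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical holder for the parsed RTP/JPEG header fields. */
struct rtp_jpeg_hdr {
    uint8_t  type_specific;   /* interpretation depends on Type      */
    uint32_t fragment_offset; /* byte offset of this packet's data   */
    uint8_t  type;            /* 0-127 well known, 128-255 dynamic   */
    uint8_t  q;               /* quantization table selector         */
    unsigned width;           /* pixels (header stores width / 8)    */
    unsigned height;          /* pixels (header stores height / 8)   */
};

/* Parse the 8-byte header (network byte order); returns 0 on success,
 * -1 if the packet is too short. */
static int
parse_rtp_jpeg_hdr(const uint8_t *buf, size_t len, struct rtp_jpeg_hdr *h)
{
    if (len < 8)
        return -1;
    h->type_specific   = buf[0];
    h->fragment_offset = ((uint32_t)buf[1] << 16) |
                         ((uint32_t)buf[2] << 8)  |
                          (uint32_t)buf[3];
    h->type   = buf[4];
    h->q      = buf[5];
    h->width  = buf[6] * 8;   /* Width field is in 8-pixel multiples  */
    h->height = buf[7] * 8;   /* Height field is in 8-pixel multiples */
    return 0;
}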

4. Discussion

4.1. The Type Field

The Type field defines the abbreviated table-specification and
additional JFIF-style parameters not defined by JPEG, since they are
not present in the body of the transmitted JPEG data.  The Type field
must remain constant for the duration of a session.

Six type codes are currently defined.  They correspond to an
abbreviated table-specification indicating the "Baseline DCT
sequential" mode, 8-bit samples, square pixels, three components in
the YUV color space, standard Huffman tables as defined in [1, Annex
K.3], and a single interleaved scan with a scan component selector
indicating components 0, 1, and 2 in that order.  The Y, U, and V
color planes correspond to component numbers 0, 1, and 2,
respectively.  Component 0 (i.e., the luminance plane) uses Huffman
table number 0 and quantization table number 0 (defined below), and
components 1 and 2 (i.e., the chrominance planes) use Huffman table
number 1 and quantization table number 1 (defined below).

Additionally, video is non-interlaced and unscaled (i.e., the aspect
ratio is determined by the image width and height).  The frame rate
is variable and explicit via the RTP timestamp.

The six RTP/JPEG types currently defined assume all of the above.
The odd types have different JPEG sampling factors from the even
ones:

                        horizontal   vertical
        types   comp   samp. fact.  samp. fact.
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | 0/2/4 |  0  |     2     |    1    |
       | 0/2/4 |  1  |     1     |    1    |
       | 0/2/4 |  2  |     1     |    1    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | 1/3/5 |  0  |     2     |    2    |
       | 1/3/5 |  1  |     1     |    1    |
       | 1/3/5 |  2  |     1     |    1    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

These sampling factors indicate that the chrominance components of
type 0/2/4 video are downsampled horizontally by 2 (often called
4:2:2) while the chrominance components of type 1/3/5 video are
downsampled both horizontally and vertically by 2 (often called
4:2:0).
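
As a small illustration of these sampling factors (the helper below is
our own and is not part of the payload format), the chrominance plane
dimensions follow directly from the type and the luminance dimensions:

/*
 * Illustrative only: derive the chrominance plane dimensions from the
 * luminance dimensions for the predefined types 0-5.  Even types
 * (4:2:2) halve only the horizontal resolution; odd types (4:2:0)
 * halve both dimensions.
 */
static void
chroma_dimensions(unsigned type, unsigned luma_w, unsigned luma_h,
                  unsigned *chroma_w, unsigned *chroma_h)
{
    *chroma_w = luma_w / 2;
    *chroma_h = (type & 1) ? luma_h / 2 : luma_h;
}

For example, a 320x240 image of type 0 carries 160x240 U and V
planes, while the same image coded as type 1 carries 160x120 U and V
planes.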

The three pairs of types (0/1), (2/3) and (4/5) differ from each
other as follows (a short illustration of the TSPEC rules for types
4/5 follows this list):

0/1 : No restart markers are present in the entropy data.

      No restriction is placed on the fragmentation of the stream
      into RTP packets.

      The type specific field is unused and must be zero.

2/3 : Restart markers are present in the entropy data.

      The entropy data is preceded by a DRI marker segment, defining
      the restart interval.

      No restriction is placed on the fragmentation of the stream
      into RTP packets.

      The type specific field is unused and must be zero.

4/5 : Restart markers are present in the entropy data.

      The entropy data is preceded by a DRI marker segment, defining
      the restart interval.

      Restart intervals must be sent as separate (possibly multiple)
      RTP packets.

      The type specific field (TSPEC) is used as follows:

      A restart interval count (RCOUNT) is defined, which starts at
      zero, and is incremented for each restart interval in the
      frame.

      The first packet of a restart interval gets TSPEC = RCOUNT.
      Subsequent packets of the restart interval get TSPEC = 254,
      except the final packet, which gets TSPEC = 255.
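
The following sender-side sketch (our own illustration, not normative
text) computes the TSPEC byte for one packet of a type 4/5 restart
interval, assuming fewer than 254 restart intervals per frame:

#include <stdint.h>

/*
 * Compute the type-specific (TSPEC) byte for a type 4/5 packet.
 * 'rcount' is the restart interval count within the frame (starting
 * at zero and assumed to be < 254); the two flags say whether this
 * packet is the first and/or last packet of its restart interval.
 */
static uint8_t
tspec_for_packet(unsigned rcount, int first_in_interval,
                 int last_in_interval)
{
    if (first_in_interval)
        return (uint8_t)rcount;  /* first packet carries RCOUNT    */
    if (last_in_interval)
        return 255;              /* final packet of the interval   */
    return 254;                  /* any packet in between          */
}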

Additional types in the range 128-255 may be defined by external
means, such as a session protocol.

Appendix B contains C source code for transforming the RTP/JPEG
header parameters into the JPEG frame and scan headers that are
absent from the data payload.

4.2. The Q Field

The quantization tables used in the decoding process are
algorithmically derived from the Q field.  The algorithm used depends
on the type field but only one algorithm is currently defined for the
two types.

Both type 0 and type 1 JPEG assume two quantization tables.  These
tables are chosen as follows.  For 1 <= Q <= 99, the Independent JPEG
Group's formula [5] is used to produce a scale factor S as:

   S = 5000 / Q       for  1 <= Q <= 50
     = 200 - 2 * Q    for 51 <= Q <= 99

This value is then used to scale Tables K.1 and K.2 from [1]
(saturating each value to 8 bits) to give quantization table numbers
0 and 1, respectively.  C source code is provided in Appendix A to
compute these tables.
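
For readers who skip Appendix A, the scaling step reduces to the
following condensed sketch (the function name is ours, and the
rounding convention follows the Independent JPEG Group code
referenced above):

/*
 * Scale one base quantizer value (an entry of Table K.1 or K.2) by
 * the Q factor, saturating the result to 1..255.  Appendix A gives
 * the full table-generation code.
 */
static int
scale_quantizer(int base, int q)    /* assumes 1 <= q <= 99 */
{
    int s = (q <= 50) ? 5000 / q : 200 - 2 * q;
    int v = (base * s + 50) / 100;  /* rounded scaling by S percent */

    if (v < 1)
        v = 1;
    else if (v > 255)
        v = 255;
    return v;
}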

For Q >= 100, a dynamically defined quantization table is used, which
might be specified by a session setup protocol.  (This session
protocol is beyond the scope of this document.)  It is expected that
the standard quantization tables will handle most cases in practice,
and dynamic tables will be used rarely.  Q = 0 is reserved.

4.3. Fragmentation and Reassembly

Since JPEG frames are large, they must often be fragmented.  Frames
should be fragmented into packets in a manner that avoids
fragmentation at a lower level.  When using restart markers, frames
should be fragmented such that each packet starts with a restart
interval (see below).

Each packet that makes up a single frame has the same timestamp.  The
fragment offset field is set to the byte offset of this packet within
the original frame.  The RTP marker bit is set on the last packet in
a frame.

An entire frame can be identified as a sequence of packets beginning
with a packet having a zero fragment offset and ending with a packet
having the RTP marker bit set.  Missing packets can be detected
either with RTP sequence numbers or with the fragment offset and
lengths of each packet.  Reassembly could be carried out without the
offset field (i.e., using only the RTP marker bit and sequence
numbers), but an efficient single-copy implementation would not
otherwise be possible in the presence of misordered packets.
Moreover, if the last packet of the previous frame (containing the
marker bit) were dropped, then a receiver could not detect that the
current frame is entirely intact.
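
As a sender-side illustration of this scheme (the names and the
packet-emitting callback below are ours, not part of the payload
format), a frame could be fragmented as follows, where 'mtu_payload'
is the space left in a packet after the RTP and RTP/JPEG headers:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical callback that emits one RTP packet; 'marker' maps to
 * the RTP marker bit and 'frag_off' to the Fragment Offset field. */
typedef void (*emit_fn)(const uint8_t *data, size_t len,
                        uint32_t frag_off, int marker,
                        uint32_t timestamp);

/*
 * Split one JPEG scan into RTP/JPEG fragments of at most
 * 'mtu_payload' bytes each.  Every fragment reuses the same RTP
 * timestamp; only the last fragment has the marker bit set.
 */
static void
fragment_frame(const uint8_t *scan, size_t scan_len, size_t mtu_payload,
               uint32_t timestamp, emit_fn emit)
{
    size_t off = 0;

    while (off < scan_len) {
        size_t chunk = scan_len - off;

        if (chunk > mtu_payload)
            chunk = mtu_payload;
        emit(scan + off, chunk, (uint32_t)off,
             off + chunk == scan_len,   /* marker on last fragment */
             timestamp);
        off += chunk;
    }
}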

4.4. Restart Markers

Restart markers indicate a point in the JPEG stream at which the
Huffman codec and DC predictors are reset, allowing partial decoding
starting at that point.  The use of restart markers allows for
robustness in the face of packet loss.

RTP/JPEG Types 4/5 allow for partial decode of frames, due to the
alignment of restart intervals with RTP packets.  The decoder knows
it has a whole restart interval when it receives a sequence of
packets with contiguous RTP sequence numbers, starting with
TSPEC < 254 (RCOUNT) and either ending with TSPEC == 255, or ending
with TSPEC < 255 where the next packet's TSPEC is < 254 (or the frame
ends).
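
A hedged decoder-side sketch of this check follows (the function and
its arguments are ours; it assumes the TSPEC bytes of the frame's
packets, in sequence-number order with no gaps, have been collected
into an array):

#include <stddef.h>
#include <stdint.h>

/*
 * Return the number of packets, starting at index 'start', that form
 * a complete restart interval, or 0 if the interval is not yet
 * complete.  'frame_complete' is non-zero once all packets of the
 * frame (up to the one carrying the RTP marker bit) have arrived.
 */
static size_t
restart_interval_len(const uint8_t *tspec, size_t npkts, size_t start,
                     int frame_complete)
{
    size_t i;

    if (start >= npkts || tspec[start] >= 254)
        return 0;                  /* must begin with TSPEC = RCOUNT  */

    for (i = start + 1; i < npkts; i++) {
        if (tspec[i - 1] == 255)   /* previous packet closed interval */
            return i - start;
        if (tspec[i] < 254)        /* next interval begins here       */
            return i - start;
    }
    if (tspec[npkts - 1] == 255 || frame_complete)
        return npkts - start;      /* closed by TSPEC 255 or frame end */
    return 0;
}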

It can then decompress the RST interval, and paint it.  The X and Y
tile offsets of the first MCU in the interval are given by:

   tile_offset = RCOUNT * restart_interval * 2
   x_offset    = tile_offset % frame_width_in_tiles
   y_offset    = tile_offset / frame_width_in_tiles

The MCUs in a restart interval may span multiple tile rows.
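
In code, the computation above amounts to the following sketch (the
variable names are ours; 'width_in_tiles' is the frame width
expressed in tiles, i.e. the Width field of the RTP/JPEG header if a
tile is taken to be an 8x8 block):

/*
 * Tile origin of the first MCU of a restart interval.  'rcount' is
 * the TSPEC value of the interval's first packet and
 * 'restart_interval' is the value carried in the DRI marker segment.
 */
static void
restart_tile_origin(unsigned rcount, unsigned restart_interval,
                    unsigned width_in_tiles,
                    unsigned *x_offset, unsigned *y_offset)
{
    unsigned tile_offset = rcount * restart_interval * 2;

    *x_offset = tile_offset % width_in_tiles;
    *y_offset = tile_offset / width_in_tiles;
}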

Decoders can, however, treat types 4/5 as types 2/3, simply
reassembling the entire frame and then decoding.

5. Security Considerations

Security issues are not discussed in this memo.

6. Authors' Addresses

Lance M. Berc
Systems Research Center
Digital Equipment Corporation
130 Lytton Ave
Palo Alto CA 94301
Phone: +1 415 853 2100
EMail: berc@pa.dec.com

William C. Fenner
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304
Phone: +1 415 812 4816
EMail: fenner@cmf.nrl.navy.mil

Ron Frederick
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304
Phone: +1 415 812 4459
EMail: frederick@parc.xerox.com

Steven McCanne
Lawrence Berkeley Laboratory
M/S 46A-1123
One Cyclotron Road
Berkeley, CA 94720
Phone: +1 510 486 7520
EMail: mccanne@ee.lbl.gov

7. References

[1] ISO DIS 10918-1.  Digital Compression and Coding of
    Continuous-tone Still Images (JPEG), CCITT Recommendation T.81.

[2] William B. Pennebaker, Joan L. Mitchell, JPEG: Still Image Data
    Compression Standard, Van Nostrand Reinhold, 1993.

[3] Gregory K. Wallace, The JPEG Still Picture Compression Standard,
    Communications of the ACM, April 1991.

[4] The JPEG File Interchange Format.  Maintained by C-Cube
    Microsystems, Inc., and available in
    ftp://ftp.uu.net/graphics/jpeg/jfif.ps.gz.

[5] Tom Lane et al., The Independent JPEG Group software JPEG codec.
    Source code available in
    ftp://ftp.uu.net/graphics/jpeg/jpegsrc.v5.tar.gz.

Appendix A

The following code can be used to create a quantization table from a
Q factor:

/*
 * Table K.1 from JPEG spec.
 */
static const int jpeg_luma_quantizer[64] = {
        16, 11, 10, 16, 24, 40, 51, 61,
        12, 12, 14, 19, 26, 58, 60, 55,
        14, 13, 16, 24, 40, 57, 69, 56,
        14, 17, 22, 29, 51, 87, 80, 62,
        18, 22, 37, 56, 68, 109, 103, 77,

    [...]

      lum_q[i] = lq;

      if ( cq < 1) cq = 1;
      else if ( cq > 255) cq = 255;
      chr_q[i] = cq;
   }
}

Appendix B

The following routines can be used to create the JPEG marker segments
corresponding to the table-specification data that is absent from the
RTP/JPEG body.

u_char lum_dc_codelens[] = {
        0, 1, 5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0,
};

u_char lum_dc_symbols[] = {
        0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
};

u_char lum_ac_codelens[] = {
        0, 2, 1, 3, 3, 2, 4, 3, 5, 5, 4, 4, 0, 0, 1, 0x7d,
};

u_char lum_ac_symbols[] = {
        0x01, 0x02, 0x03, 0x00, 0x04, 0x11, 0x05, 0x12,

    [...]

    h <<= 3;

    MakeTables(q, lqt, cqt);

    *p++ = 0xff;
    *p++ = 0xd8;            /* SOI */

    p = MakeQuantHeader(p, lqt, 0);
    p = MakeQuantHeader(p, cqt, 1);

    p = MakeHuffmanHeader(p, lum_dc_codelens,
                          sizeof(lum_dc_codelens),
                          lum_dc_symbols,
                          sizeof(lum_dc_symbols), 0, 0);
    p = MakeHuffmanHeader(p, lum_ac_codelens,
                          sizeof(lum_ac_codelens),
                          lum_ac_symbols,
                          sizeof(lum_ac_symbols), 0, 1);
    p = MakeHuffmanHeader(p, chm_dc_codelens,
                          sizeof(chm_dc_codelens),
                          chm_dc_symbols,
                          sizeof(chm_dc_symbols), 1, 0);
    p = MakeHuffmanHeader(p, chm_ac_codelens,
                          sizeof(chm_ac_codelens),
                          chm_ac_symbols,
                          sizeof(chm_ac_symbols), 1, 1);

    *p++ = 0xff;
    *p++ = 0xc0;            /* SOF */
    *p++ = 0;               /* length msb */
    *p++ = 17;              /* length lsb */
    *p++ = 8;               /* 8-bit precision */
    *p++ = h >> 8;          /* height msb */
    *p++ = h;               /* height lsb */
    *p++ = w >> 8;          /* width msb */
    *p++ = w;               /* width lsb */