SLIM                                                           N. Rooney
Internet-Draft                                                      GSMA
Expires: October 7, 2016                                   April 5, 2016


                             SLIM Use Cases
                     draft-ietf-slim-use-cases-01
Abstract

This document describes use cases for the selection of language for
Internet media.
Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on October 7, 2016.

Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
1. Introduction

The SLIM Working Group [SLIM] is developing standards for language
selection for non-real-time and real-time communications.  There are
a number of relevant use cases which could benefit from this
functionality, including emergency service real-time communications
and customer service.  This document details the use cases for SLIM
and gives some indication of the necessary requirements.  For each
use case a 'Solution' is provided, indicating the implementability of
the use case based on "Negotiating Human Language in Real-Time
Communications" [NEGOTIATING-HUMAN-LANG].
2. Use Cases

Use cases are listed below:
2.1. Single two-way language

The simplest use case.  One language and modality both ways in media
described in SDP [RFC4566] as audio or video or text.
Straightforward.  Works for spoken, written and signed languages.  An
skipping to change at page 2, line 40
o Solution: Possible
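
As a non-normative illustration, assuming the media-level
'humintlang-send' and 'humintlang-recv' SDP attributes proposed in
[NEGOTIATING-HUMAN-LANG] (the attribute names and value syntax shown
here are assumptions based on that work in progress and may change),
an offer from a user who speaks and understands only English could
look like:

   v=0
   o=alice 2890844526 2890844526 IN IP4 198.51.100.1
   s=-
   c=IN IP4 198.51.100.1
   t=0 0
   m=audio 49170 RTP/AVP 0
   a=humintlang-send:en
   a=humintlang-recv:en

Here 'en' is the BCP 47 language tag for English, and the spoken
modality is implied by the audio media type.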
2.2. Alternatives in the same modality

Two or more language alternatives in the same modality.  Two or more
languages both ways in media described in SDP as audio or video or
text, but only in one modality.  Straightforward.  Works for spoken,
written and signed languages.  The answering party selects.  There is
a relative preference expressed by the order, and the answering party
can try to fulfill that in the best way.  An example is a user who
makes a voice call and prefers French as their first language and
German as their second, and the answerer selects to speak German as
no French speaking abilities are available.

o Solution: Possible
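
An illustrative offer for the French/German example above, again
assuming the humintlang attributes of [NEGOTIATING-HUMAN-LANG] and
assuming that several language tags may be listed in preference
order (only the media-level lines are shown):

   m=audio 49170 RTP/AVP 0
   a=humintlang-send:fr de
   a=humintlang-recv:fr de

An answerer with no French competence would be expected to indicate
German ('de') in its answer.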
2.3. Fairly equal alternatives in different modalities

Two or more modality alternatives.  Two or more languages in
different modalities both ways in media described in SDP as audio or
video or text.  An example is a person with hearing abilities who is
also competent in sign language and declares both spoken and sign
language competence in audio and video.  This is fairly
straightforward, as long as there is no strong difference in
preference for these alternatives.  The indication of sign language
competence is needed to avoid invoking relay services in calls with
deaf sign language users who only indicate sign language.

o Solution: Possible
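
A sketch of such an offer under the same assumptions: spoken English
on the audio stream and American Sign Language (language tag 'ase')
on the video stream, with no strong preference between them
(media-level lines only):

   m=audio 49170 RTP/AVP 0
   a=humintlang-send:en
   a=humintlang-recv:en
   m=video 51372 RTP/AVP 31
   a=humintlang-send:ase
   a=humintlang-recv:ase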
2.4. Last resort indication

One language in different modalities.  Allows the user to indicate
one last resort language when no other is available.  For example, a
hearing user has text capability but wants to use it only as a last
resort.  (With current specifications, there is no way to describe a
preference level between modalities and no way to describe absolute
preference.)

o Solution: An answering service will have no guidance as to which is
  the preferred modality and may select to use the modality that is
  the caller's last resort even if the preferred alternative is
  available.

Another practical case can be a sign language user with a small
mobile terminal that has some inconvenient means for texting, but
sign language will be strongly preferred.  In order not to miss any
calls, the indication of text as a last resort would be desirable.

o Solution: Need coding of an absolute preference (hi, med, lo)
  together with the tag.
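
For illustration, the sign language user's offer in the case above
(under the same assumed humintlang attributes) indicates sign
language in video and written language in text, but nothing in the
assumed syntax marks the text stream as a last resort only:

   m=video 51372 RTP/AVP 31
   a=humintlang-send:ase
   a=humintlang-recv:ase
   m=text 45020 RTP/AVP 98
   a=rtpmap:98 t140/1000
   a=humintlang-send:en
   a=humintlang-recv:en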
2.5. Directional capabilities in different modalities

Two or more language alternatives in different modalities.  For
example, a hard-of-hearing user strongly prefers to talk and receive
text back.  Spoken language input is appreciated.  This can be
indicated by spoken language both ways in audio, and reception of
language in text.  (There is no current solution that says that the
text path is important.  The answering party may see it as an
alternative.)

o Solution: Need for preference indication per modality
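
An illustrative offer for this user under the same assumptions:
spoken language is offered in both directions on audio, while written
language is requested only in the receive direction on text
(media-level lines only):

   m=audio 49170 RTP/AVP 0
   a=humintlang-send:en
   a=humintlang-recv:en
   m=text 45020 RTP/AVP 98
   a=rtpmap:98 t140/1000
   a=humintlang-recv:en

Nothing in this offer tells the answerer that the text receive path
is more important than the audio receive path, which is the gap
described above.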
2.5.1. Fail gracefully?

There are currently methods to indicate that the call shall fail if a
language is not matched, but that may be too drastic for some users,
including the one in the above scenario (Section 2.5).  It may be
important to be able to connect and just say something, or use
residual hearing to get something back when the voice is familiar.

o Possible solution: coding of an absolute preference together with
  the tag could solve this case if used together with the directional
  indications (see the sketch below).  For example:

     "preference: hi, med, lo"

Another solution would be to indicate required grouping of media;
however, this raises the complexity level.
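
A purely hypothetical sketch of such a coding, continuing the
scenario of Section 2.5.  The ';pref=' parameter below does not exist
in [NEGOTIATING-HUMAN-LANG] and is shown only to illustrate what an
absolute preference per direction and modality could look like:

   m=audio 49170 RTP/AVP 0
   a=humintlang-send:en;pref=hi
   a=humintlang-recv:en;pref=lo
   m=text 45020 RTP/AVP 98
   a=rtpmap:98 t140/1000
   a=humintlang-recv:en;pref=hi

With such a coding the answerer could connect the call even when only
audio is available, knowing that the caller regards that as a low but
acceptable preference.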
2.6. Combination of modalities

Similar to Section 2.5, two or more language alternatives in
different modalities.  A person who is deaf-blind may have the
highest preference for signing to the answerer and then receiving
text in return.  This requires the indication of sign language output
in video and text reception in text, using the current directional
attributes.  An answering party may seek suitable modalities for each
direction and find the only possible combination.

o Solution: Need for preference indication per modality
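
An illustrative offer for this user under the same assumptions: sign
language is offered only in the send direction on video, and written
language only in the receive direction on text (media-level lines
only):

   m=video 51372 RTP/AVP 31
   a=humintlang-send:ase
   m=text 45020 RTP/AVP 98
   a=rtpmap:98 t140/1000
   a=humintlang-recv:en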
2.7. Person with speech disabilities who prefers speech-to-speech
     service

One specific language for one specific modality with a
speech-to-speech engine.  A person who may find that others have some
difficulty in understanding what they are trying to say may be used
to having the support of a speech-to-speech relay service that adds
clear speech when needed for understanding.  Typically, only calls
with close friends and family might be possible without the relay
service.

This user would indicate a preference for receiving spoken language
in audio.  Text output can be indicated, but this user might want to
use that method only as a last resort.  (There is no current coding
for vague or unarticulated speech or other needs for a
speech-to-speech service.)

A possibility could be to indicate no preference for spoken language
out, a coding of a proposed assisting service, and an indication of
text output at a low absolute level.

o Solution: Need for a service indication, and an absolute level of
  preference indication.
2.8. Person with speech disabilities who prefers to type and hear

Two or more language alternatives for multiple modalities.  A person
who speaks in a way that may be hard to understand may be used to
using text for output and listening to spoken language for input.
This user would indicate a preference for receiving spoken language
in audio.  The text output modality can be indicated.

If the answering party has text and audio capabilities, there is a
match.  If only voice capabilities exist, there is a need to invoke a
text relay service.

o Solution: Need for a service indication, and an absolute level of
  preference indication.
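
An illustrative offer under the same assumptions: written language in
the send direction on text and spoken language in the receive
direction on audio (media-level lines only):

   m=text 45020 RTP/AVP 98
   a=rtpmap:98 t140/1000
   a=humintlang-send:en
   m=audio 49170 RTP/AVP 0
   a=humintlang-recv:en

If the answer carries matching language attributes on both streams
there is a direct match; if only audio is supported, one of the
parties would need to invoke a text relay service as described above.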
2.9. All Possibilities

Multiple languages and multiple modalities.  For example, a
tele-sales center calls out and wants to offer all kinds of
possibilities so that the answering party can select.  A tele-sales
center has competence in multiple spoken languages and can invoke
relay services rapidly if needed.  So, it indicates in the call setup
competence in a number of spoken languages in audio, a number of sign
languages in video and a number of written languages in text.  This
would allow,
skipping to change at page 5, line 41
detect that and act accordingly, this could work in the following
ways:

o Solution Alternative 1: The center calls without SDP.  A deaf-blind
  user includes their SDP offer and the center sees what is needed to
  fulfill the call.

o Solution Alternative 2: The center calls out indicating only the
  spoken language capabilities that the caller can handle.

The answering person, who is deaf and/or has a sight disability, or
their terminal or service provider, detects the difference compared
to the capabilities of the answering party and adds a suitable relay
service.  (This does not use all the offerings of the caller's
competence to pull in extra services, but is maybe a more realistic
case for what usually happens in practice.)
o Solution: Possible in the same way as the cases in Section 2.8.
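
For illustration, the center's full offer described above might look
like the fragment below, again assuming the humintlang attributes and
space-separated preference lists (media-level lines only); 'ase' and
'bfi' are the language tags for American and British Sign Language:

   m=audio 49170 RTP/AVP 0
   a=humintlang-send:en fr de es
   a=humintlang-recv:en fr de es
   m=video 51372 RTP/AVP 31
   a=humintlang-send:ase bfi
   a=humintlang-recv:ase bfi
   m=text 45020 RTP/AVP 98
   a=rtpmap:98 t140/1000
   a=humintlang-send:en fr de es
   a=humintlang-recv:en fr de es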
3. Final Comments

The use cases identified here try to cover all cases in which users
wish to make text, voice or video communication using the language or
set of languages in which they are able to speak, write or sign and
in which the receivers are also able to communicate.  Some of these
use cases go even further, giving some users the ability to select
multiple and different languages based on their abilities and needs.

To fulfill all the use cases, the currently specified directionality
will be needed, as well as an indication of absolute preference.  An
indication of a suitable service and its spoken language is needed
for the speech-to-speech case, but can be useful for other cases as
well.  There seems to be no clear need for explicit grouping of
modalities.

Subsequent work in the Selection of Language for Internet Media
Working Group [SLIM] will produce Internet-Drafts to support these
use cases.
4. Security Considerations

Indications of a user's preferred language may give indications as to
their nationality, background and abilities.  They may also give an
indication of possible disabilities and of existing or ongoing health
issues.
5. IANA Considerations

This document has no IANA actions.
6. Informative References

[RFC4566]  Handley, M., Jacobson, V., and C. Perkins, "SDP: Session
           Description Protocol", RFC 4566, DOI 10.17487/RFC4566,
           July 2006, <http://www.rfc-editor.org/info/rfc4566>.
[SLIM]     "SLIM Working Group", n.d.,
           <https://datatracker.ietf.org/wg/slim/charter/>.

[NEGOTIATING-HUMAN-LANG]
           Gellens, R., "Negotiating Human Language in Real-Time
           Communications", 2016, <https://datatracker.ietf.org/doc/
           draft-ietf-slim-negotiating-human-language/>.
Appendix A. Acknowledgments

Gunnar Hellstrom's experience and knowledge in this area provided a
great deal of these use cases.  Thanks also go to Randall Gellens and
Brian Rosen.
Author's Address

Natasha Rooney
GSMA