Multimodal Interaction Activity — News Archive
Extending the Web to support multiple
modes of interaction.
2015
- 8 September 2015:
First Public Working Draft: EMMA: Extensible MultiModal Annotation markup language Version 2.0
The Multimodal Interaction
Working Group has published a Working Draft
of EMMA:
Extensible MultiModal Annotation markup language Version 2.0. This
specification describes markup for representing interpretations of
user input (speech, keystrokes, pen input, etc.) and productions of
system output together with annotations for confidence scores,
timestamps, medium, etc. It forms part of the proposals for the W3C
Multimodal Interaction Framework.
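For readers unfamiliar with the markup, the sketch below shows a minimal
interpretation of a spoken request in the EMMA 1.0 style that Version 2.0
builds on; the application-specific elements (origin, destination) and all
attribute values are invented for the example.
  <emma:emma version="1.0"
      xmlns:emma="http://www.w3.org/2003/04/emma">
    <!-- one interpretation of a spoken utterance; all values are illustrative -->
    <emma:interpretation id="int1"
        emma:medium="acoustic"
        emma:mode="voice"
        emma:confidence="0.75"
        emma:start="1087995961542"
        emma:end="1087995963542"
        emma:tokens="flights from boston to denver">
      <origin>Boston</origin>
      <destination>Denver</destination>
    </emma:interpretation>
  </emma:emma>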
- 23 June 2015:
New charter for the Multimodal Interaction Working Group approved
The W3C Multimodal
Interaction Working Group has been rechartered to continue its work
through 31 December 2016.
As we interact with technology through more and more diverse devices,
the critical need for standards for multimodal interaction becomes
increasingly clear.
See also the
announcement sent to the MMI public list
for more information.
- 11 June 2015:
"Discovery & Registration of Multimodal Modality Components: State Handling"
is published as a First Public Working Draft
The Multimodal Interaction
Working Group has published a Working Draft
of Discovery
& Registration of Multimodal Modality Components: State
Handling. This document is addressed to people who want to
develop Modality Components for Multimodal Applications distributed
over a local network or “in the cloud”. To support this, a
multimodal system implemented according to the Multimodal
Architecture Specification must discover and register its
Modality Components in order to preserve the overall state of the
distributed elements. Modality Components can then be combined
with automation mechanisms to adapt the application to the
state of the surrounding environment. Learn more about
the Multimodal Interaction
Activity.
2014
- 22 May 2014:
Emotion Markup Language (EmotionML) 1.0 is a W3C Recommendation
The Multimodal Interaction
Working Group has published a W3C Recommendation
of Emotion
Markup Language (EmotionML) 1.0. As the Web is becoming
ubiquitous, interactive, and multimodal, technology needs to deal
increasingly with human factors, including emotions. The specification
of Emotion Markup Language 1.0 aims to strike a balance between
practical applicability and scientific well-foundedness. The language
is conceived as a “plug-in” language suitable for use in
three different areas: (1) manual annotation of data; (2) automatic
recognition of emotion-related states from user behavior; and (3)
generation of emotion-related system behavior. Learn more about
the Multimodal Interaction
Activity.
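As a rough illustration of the “plug-in” annotation style, the sketch below
marks up a single observed emotional state. The category-set URI is assumed
to reference the “big six” vocabulary from the companion EmotionML
vocabularies document, and the concrete values are invented.
  <emotionml version="1.0"
      xmlns="http://www.w3.org/2009/10/emotionml"
      category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
    <!-- manual annotation of one observed emotional state; values are invented -->
    <emotion>
      <category name="happiness" confidence="0.8"/>
    </emotion>
  </emotionml>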
2013
- 24 September 2013:
W3C Webinar: Discovery in Distributed Multimodal Interaction
The second MMI webinar
on "Discovery in Distributed Multimodal Interaction"
will be held on September 24, 2013, at 11:00 a.m. ET.
Prior to this second webinar, the MMI-WG held
the W3C Workshop on
Rich Multimodal Application Development on July 22-23 in the New York
metropolitan area, US, and identified that distributed/dynamic
applications depend on the ability of devices and environments to find
each other and learn what modalities they support. Therefore this
second webinar will focus on the topic of device/service discovery to
handle Modality Components of the MMI Architecture
dynamically.
The discussion during the webinar will interest anyone who wants to
take advantage of the dramatic increase in new interaction modes,
whether for health care, financial services, broadcasting, automotive,
gaming, or consumer devices.
Several experts from the industry and analyst communities will
share their experiences and views on the explosive growth of
opportunities for the development of applications that provide
enhanced multimodal user-experiences. Read more
and register
for the webinar.
- 22-23 July 2013:
The W3C Workshop on Rich Multimodal Application Development
will be held on 22-23 July 2013 in the New York metropolitan area, US.
- 27 June 2013:
The Second Working Draft of
EMMA 1.1
is published.
Changes from the previous Working Draft can be found in
the Status of This Document section
of the specification.
- 16 April 2013:
The Proposed Recommendation
of Emotion
Markup Language (EmotionML) 1.0 is published.
Changes from the previous Working Draft can be found in Appendix C of the specification.
- 31 January 2013:
The Webinar on “Developing Portable Mobile Applications with
Compelling User Experience using the W3C MMI Architecture” will be
held on January 31, 2013, at 11:00 a.m. ET.
The 90-minute webinar is aimed at Web developers who may find it
daunting to incorporate innovative input and output methods such as
speech, touch, gesture and swipe into their applications, given the
diversity of mobile devices and programming techniques available
today.
Read more and register for the webinar.
See also the official announcement on the W3C Top Page.
2012
2011
2010
- 5-6 October 2010:
The EmotionML Workshop
was held in Paris, France, hosted by Telecom ParisTech.
The summary
and
detailed minutes are available online.
Participants from 12 organizations discussed use cases of possible
emotion-ready applications and clarified several key requirements for
the current EmotionML to make the specification even more useful.
- 21 September 2010:
The seventh Working Draft of
Multimodal Architecture and Interfaces is published.
The main changes from the previous draft are (1) the inclusion of
state charts for modality components, (2) the addition of a
'confidential' field to life-cycle events and (3) the removal of the
'media' field from life-cycle events.
A
diff-marked version is also available for comparison purposes.
- 29 July 2010:
The second Working Draft of
Emotion Markup Language (EmotionML) 1.0
is published.
A
diff-marked version is also available for comparison purposes.
Please send your comments to the Multimodal Interaction public mailing
list (<[email protected]>).
- 18-19 June 2010:
The workshop on Conversational Applications
was held in Somerset, NJ (USA), hosted by Openstream.
The summary
and
detailed minutes are available online.
Participants from 12 organizations focused their discussion on the use
cases of possible conversational applications and clarified
limitations of the current W3C language model in order to
develop a more comprehensive one.
- 18-19 June 2010:
Workshop on Conversational Applications
will be held in Somerset, NJ (USA), hosted by Openstream.
*** The deadline to send position papers is now extended to April 30. ***
Please see the
Call for Participation for details.
To participate in the Workshop, please submit a position paper
(either as an individual or organization)
to
<[email protected]>
by 11:59 EDT on 30 April 2010.
To help with planning, please let us know as soon as possible if you
are interested in attending by sending the following information to
<[email protected]>:
- that a representative from your organization plans to submit a position paper
- how many participants you want to send to the workshop (either one or two)
- whether or not you wish to make a presentation during the workshop
- 27 May 2010:
The second Last Call Working Draft of
Ink Markup Language (InkML) is published.
This draft incorporates a small number of extensions to the
previous version, including support for channels that report
floating-point values and for specifying brush properties.
A
diff-marked version is also available for comparison purposes.
Please send your comments to the Multimodal Interaction public mailing
list (<[email protected]>) by 17 June 2010.
When sending e-mail, please put the text "[ink]" in the subject,
preferably like this: "[ink] ...summary of comment...".
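To give a feel for these extensions, the sketch below is a non-normative
InkML fragment that declares decimal-typed X/Y channels and a brush described
with brushProperty entries; the element placement and reference attributes
shown are assumptions to be checked against the specification, and all
values are invented.
  <ink xmlns="http://www.w3.org/2003/InkML">
    <definitions>
      <!-- brush described via brushProperty entries (values invented) -->
      <brush xml:id="pen1">
        <brushProperty name="width" value="0.4" units="mm"/>
        <brushProperty name="color" value="#0000FF"/>
      </brush>
      <!-- X/Y channels declared as decimal so samples may carry fractional values -->
      <traceFormat xml:id="fmt1">
        <channel name="X" type="decimal"/>
        <channel name="Y" type="decimal"/>
      </traceFormat>
    </definitions>
    <!-- context tying the format and brush together (reference attributes assumed) -->
    <context xml:id="ctx1" traceFormatRef="#fmt1" brushRef="#pen1"/>
    <!-- one pen stroke: comma-separated X Y samples -->
    <trace contextRef="#ctx1">10.5 0.3, 9.8 14.2, 8.1 28.7, 7.4 42.9</trace>
  </ink>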
2009
2008
- 15 December 2008:
EMMA: Extensible MultiModal Annotation markup language
is a W3C Proposed Recommendation.
- 16 October 2008:
Multimodal Architecture and Interfaces: Fifth Working Draft is published.
- 23 September 2008:
Implementation Report Plan
of
Extensible MultiModal Annotation markup language (EMMA)
Candidate Recommendation
is modified.
The following test assertions have been removed from the Implementation
Report Plan document because they are not actually described in the
EMMA specification: 801, 902, 903, and 1501.
Please see
the announcement sent to the Multimodal Interaction public list
for the details on the modification.
- 2 July 2008:
Authoring Applications for the Multimodal Architecture:
First Public Working Group Note is published.
- 28 April 2008:
A reminder about the implementation reports for the
Extensible MultiModal Annotation markup language (EMMA)
specification was sent out.
The Multimodal Interaction Working Group very much welcomes
implementation reports.
The reports should be sent to
[email protected]
in the format described in the
implementation report plan.
- 14 April 2008:
Multimodal Architecture and Interfaces: Fourth Working Draft is published.
- 3-7 March 2008:
MMIWG f2f meeting was held in Orlando, US, on 3-7 March 2008, hosted
by Voxeo. MMI Architecture workshop feedback, topics for the next MMI
Architecture and the need for guidelines for integrating a modality
into the MMI Architecture were discussed jointly with the VBWG.
- 22 January 2008:
Implementation Report Plan
of
"EMMA: Extensible MultiModal Annotation markup language"
Candidate Recommendation is modified with the following point:
- The minimum Candidate Recommendation period in the
Implementation Report Plan is modified from "?? 2007" to "14 April
2008" as defined in the
Candidate Recommendation.
2007
2006
2005
2004
2003
Kazuyuki Ashimura <[email protected]>
- Multimodal Interaction Activity Lead