Docs: update attributes, TV audio, syntax in implementation.

Add images for TV audio document, update syntax.
Apply feedback; tweak widows and orphans.

Bug: 17508564

Change-Id: I90fccfde5a4612db3507525a940f158c43614946
diff --git a/src/devices/audio/images/ape_audio_tv_hdmi_tuner.png b/src/devices/audio/images/ape_audio_tv_hdmi_tuner.png
new file mode 100644
index 0000000..43a89ea
--- /dev/null
+++ b/src/devices/audio/images/ape_audio_tv_hdmi_tuner.png
Binary files differ
diff --git a/src/devices/audio/images/ape_audio_tv_tif.png b/src/devices/audio/images/ape_audio_tv_tif.png
new file mode 100644
index 0000000..f013cfa
--- /dev/null
+++ b/src/devices/audio/images/ape_audio_tv_tif.png
Binary files differ
diff --git a/src/devices/audio/images/ape_audio_tv_tuner.png b/src/devices/audio/images/ape_audio_tv_tuner.png
new file mode 100644
index 0000000..a25dcfb
--- /dev/null
+++ b/src/devices/audio/images/ape_audio_tv_tuner.png
Binary files differ
diff --git a/src/devices/audio_attributes.jd b/src/devices/audio_attributes.jd
index 9245833..473a04e 100644
--- a/src/devices/audio_attributes.jd
+++ b/src/devices/audio_attributes.jd
@@ -2,7 +2,7 @@
 @jd:body
 
 <!--
-    Copyright 2013 The Android Open Source Project
+    Copyright 2014 The Android Open Source Project
 
     Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
@@ -24,41 +24,47 @@
   </div>
 </div>
 
-<p>
-Audio players support attributes that define how the audio system handles routing, volume, and focus decisions for the specified source. Applications can attach attributes to an audio playback (such as music played by Pandora or a notification for a new email) then pass the audio source attributes to the framework, where the audio system uses the attributes to make mixing decisions and to notify applications about the state of the system.
-</p>
+<p>Audio players support attributes that define how the audio system handles routing, volume, and
+focus decisions for the specified source. Applications can attach attributes to an audio playback
+(such as music played by a streaming service or a notification for a new email) and then pass the
+audio source attributes to the framework, where the audio system uses the attributes to make mixing
+decisions and to notify applications about the state of the system.</p>
 
-<p class="note">
-<strong>Note:</strong> Applications can also attach attributes to an audio recording (such as audio captured in a video recording), but this functionality is not exposed in the public API.
-</p>
+<p class="note"><strong>Note:</strong> Applications can also attach attributes to an audio
+recording (such as audio captured in a video recording), but this functionality is not exposed in
+the public API.</p>
 
-<p>
-In Android 4.4 and earlier, the framework made mixing decisions using only the audio stream type. However, basing such decisions on stream type was too limiting to produce quality output across multiple applications and devices. For example, on a mobile device, some applications (i.e. Maps) play driving directions on the STREAM_MUSIC stream type; however, on mobile devices in projection mode (i.e. Android Auto), applications cannot mix driving directions with other media streams.</p>
+<p>In Android 4.4 and earlier, the framework made mixing decisions using only the audio stream type.
+However, basing such decisions on stream type was too limiting to produce quality output across
+multiple applications and devices. For example, on a mobile device, some applications (e.g.
+Google Maps) played driving directions on the STREAM_MUSIC stream type; however, on mobile
+devices in projection mode (e.g. Android Auto), applications cannot mix driving directions with
+other media streams.</p>
 
-<p>
-Using the audio attribute API, applications can now provide the audio system with detailed information about a specific audio source:
-</p>
+<p>Using the <a href="http://developer.android.com/reference/android/media/AudioAttributes.html">
+audio attribute API</a>, applications can now provide the audio system with detailed information
+about a specific audio source:</p>
 
 <ul>
-<li><b>Usage</b>. Specifies why the source is playing and controls routing, focus, and volume decisions.</li>
-<li><b>Content type</b>. Specifies what the source is playing (music, movie, speech, sonification, unknown).</li>
-<li><b>Flags</b>. Specifies how the source should be played. Includes support for audibility enforcement (camera shutter sounds required in some countries) and hardware audio/video synchronization.</li>
+<li><b>Usage</b>. Specifies why the source is playing and controls routing, focus, and volume
+decisions.</li>
+<li><b>Content type</b>. Specifies what the source is playing (music, movie, speech,
+sonification, unknown).</li>
+<li><b>Flags</b>. Specifies how the source should be played. Includes support for audibility
+enforcement (camera shutter sounds required in some countries) and hardware audio/video
+synchronization.</li>
 </ul>
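+
+<p>For example, an alarm clock application might build its attributes as follows. This is a
+minimal sketch using the public <code>AudioAttributes.Builder</code> API; the flag shown is
+optional:</p>
+
+<pre>
+AudioAttributes attributes = new AudioAttributes.Builder()
+        .setUsage(AudioAttributes.USAGE_ALARM)                      // why the source is playing
+        .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)  // what the source is playing
+        .setFlags(AudioAttributes.FLAG_AUDIBILITY_ENFORCED)         // how it should be played
+        .build();
+</pre>
+
+<p>The resulting object can then be supplied to a player such as <code>AudioTrack</code>, as shown
+in the <a href="#example">example</a> below.</p>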
 
-<p>
-For dynamics processing, applications must distinguish between movie, music, and speech content. Information about the data itself may also matter, such as loudness and peak sample value.
-</p>
+<p>For dynamics processing, applications must distinguish between movie, music, and speech content.
+Information about the data itself may also matter, such as loudness and peak sample value.</p>
 
-<h2 id="using">
-Using attributes
-</h2>
+<h2 id="using">Using attributes</h2>
 
-<p>Usage specifies the context in which the stream is used, providing information about why the sound is playing and what the sound is used for. Usage information is more expressive than a stream type and allows platforms or routing policies to refine volume or routing decisions.
-</p>
+<p>Usage specifies the context in which the stream is used, providing information about why the
+sound is playing and what the sound is used for. Usage information is more expressive than a stream
+type and allows platforms or routing policies to refine volume or routing decisions.</p>
 
-<p>
-Supply one of the following usage values for any instance:
-</p>
+<p>Supply one of the following usage values for any instance:</p>
 
 <ul>
 <li><code>USAGE_UNKNOWN</code></li>
@@ -77,21 +83,24 @@
 <li><code>USAGE_GAME</code></li>
 </ul>
 
-<p>
-Values are mutually exclusive. For examples, refer to <code>USAGE_MEDIA</code>and <code>USAGE_ALARM</code> definitions; for exceptions, refer to <code>AudioAttributes.Builder</code> definition.
-</p>
+<p>Audio attribute usage values are mutually exclusive. For examples, refer to <code>
+<a href="http://developer.android.com/reference/android/media/AudioAttributes.html#USAGE_MEDIA">
+USAGE_MEDIA</a></code> and <code>
+<a href="http://developer.android.com/reference/android/media/AudioAttributes.html#USAGE_ALARM">
+USAGE_ALARM</a></code> definitions; for exceptions, refer to the <code>
+<a href="http://developer.android.com/reference/android/media/AudioAttributes.Builder.html">
+AudioAttributes.Builder</a></code> definition.</p>
 
-<h2 id="content-type">
-Content type
-</h2>
+<h2 id="content-type">Content type</h2>
 
-<p>
-Content type defines what the sound is and expresses the general category of the content such as movie, speech, or beep/ringtone. The audio framework uses content type information to selectively configure audio post-processing blocks. While supplying the content type is optional, you should include type information whenever the content type is known, such as using <code>CONTENT_TYPE_MOVIE</code> for a movie streaming service or <code>CONTENT_TYPE_MUSIC</code> for a music playback application.
-</p>
+<p>Content type defines what the sound is and expresses the general category of the content such as
+movie, speech, or beep/ringtone. The audio framework uses content type information to selectively
+configure audio post-processing blocks. While supplying the content type is optional, you should
+include type information whenever the content type is known, such as using
+<code>CONTENT_TYPE_MOVIE</code> for a movie streaming service or <code>CONTENT_TYPE_MUSIC</code>
+for a music playback application.</p>
 
-<p>
-Supply one of the following usage values for any instance:
-</p>
+<p>Supply one of the following content type values for any instance:</p>
 
 <ul>
 <li><code>CONTENT_TYPE_UNKNOWN</code> (default)</li>
@@ -101,34 +110,31 @@
 <li><code>CONTENT_TYPE_SPEECH</code></li>
 </ul>
 
-<p>
-Values are mutually exclusive. 
-</p>
+<p>Audio attribute content type values are mutually exclusive. For details on content types,
+refer to the <a href="http://developer.android.com/reference/android/media/AudioAttributes.html">
+audio attribute API</a>.</p>
 
-<h2 id="flags">
-Flags
-</h2>
+<h2 id="flags">Flags</h2>
 
-<p>
-Flags specify how the audio framework applies effects to the audio playback. Supply one or more of the following flags for an instance:
-</p>
+<p>Flags specify how the audio framework applies effects to the audio playback. Supply one or more
+of the following flags for an instance:</p>
 
 <ul>
-<li><code>FLAG_AUDIBILITY_ENFORCED</code>. Requests the system ensure the audibility of the sound. Use to address the needs of legacy <code>STREAM_SYSTEM_ENFORCED</code> (such as forcing camera shutter sounds).</li>
-<li><code>HW_AV_SYNC</code>. Requests the system select an output stream that supports hardware A/V synchronization.</li>
+<li><code>FLAG_AUDIBILITY_ENFORCED</code>. Requests the system ensure the audibility of the
+sound. Use to address the needs of legacy <code>STREAM_SYSTEM_ENFORCED</code> (such as forcing
+camera shutter sounds).</li>
+<li><code>FLAG_HW_AV_SYNC</code>. Requests the system select an output stream that supports
+hardware A/V synchronization.</li>
 </ul>
 
-<p>
-Flags are non-exclusive (can be combined).
-</p>
+<p>Audio attribute flags are non-exclusive (can be combined). For details on these flags,
+refer to the <a href="http://developer.android.com/reference/android/media/AudioAttributes.html">
+audio attribute API</a>.</p>
 
-<h2 id="example">
-Example
-</h2>
+<h2 id="example">Example</h2>
 
-<p>
-The following example shows USAGE and CONTENT_TYPE attributes.
-</p>
+<p>In this example, an <code>AudioAttributes.Builder</code> defines the
+<code>AudioAttributes</code> to be used by a new <code>AudioTrack</code> instance:</p>
 
 <pre>
 AudioTrack myTrack = new AudioTrack(
@@ -139,21 +145,19 @@
   myFormat, myBuffSize, AudioTrack.MODE_STREAM, mySession);
 </pre>
 
-<h2 id="compatibility">
-Compatibility
-</h2>
+<h2 id="compatibility">Compatibility</h2>
 
-<p>
-Application developers should use audio attributes when creating or updating applications for Android 5.0. However, applications are not required to take advantage of attributes; they can handle legacy stream types only or remain unaware of attributes (i.e. a generic media player that doesn’t know anything about the content it’s playing).
-</p>
+<p>Application developers should use audio attributes when creating or updating applications for
+Android 5.0. However, applications are not required to take advantage of attributes; they can
+handle legacy stream types only or remain unaware of attributes (e.g. a generic media player that
+doesn’t know anything about the content it’s playing).</p>
 
-<p>
-In such cases, the framework maintains backwards compatibility with older devices and Android releases by automatically translating legacy audio stream types to audio attributes. However, the framework does not enforce or guarantee this mapping across devices, manufacturers, or Android releases.
-</p>
+<p>In such cases, the framework maintains backwards compatibility with older devices and Android
+releases by automatically translating legacy audio stream types to audio attributes. However, the
+framework does not enforce or guarantee this mapping across devices, manufacturers, or Android
+releases.</p>
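+
+<p>For example, an application that still thinks in stream types can build equivalent attributes
+explicitly. A minimal sketch using the public <code>setLegacyStreamType()</code> builder
+method:</p>
+
+<pre>
+// The framework derives the matching usage and content type values
+// from the legacy stream type.
+AudioAttributes fromLegacy = new AudioAttributes.Builder()
+        .setLegacyStreamType(AudioManager.STREAM_MUSIC)
+        .build();
+</pre>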
 
-<p>
-Compatibility mappings:
-</p>
+<p>Compatibility mappings:</p>
 
 <table>
 <tr>
@@ -249,6 +253,5 @@
 </tr>
 </table>
 
-<p class="note">
-<strong>Note:</strong> @hide streams are used internally by the framework but are not part of the public API.
-</p>
\ No newline at end of file
+<p class="note"><strong>Note:</strong> @hide streams are used internally by the framework but are
+not part of the public API.</p>
\ No newline at end of file
diff --git a/src/devices/audio_implement.jd b/src/devices/audio_implement.jd
index 0829e12..5adbdf1 100644
--- a/src/devices/audio_implement.jd
+++ b/src/devices/audio_implement.jd
@@ -2,7 +2,7 @@
 @jd:body
 
 <!--
-    Copyright 2013 The Android Open Source Project
+    Copyright 2014 The Android Open Source Project
 
     Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
@@ -24,63 +24,58 @@
   </div>
 </div>
 
-<p>
-  This page explains how to implement the audio Hardware Abstraction Layer (HAL)
-and configure the shared library.
-</p>
+<p>This page explains how to implement the audio Hardware Abstraction Layer (HAL) and configure the
+shared library.</p>
 
-<h2 id="implementing">
-  Implementing the HAL
-</h2>
-<p>
-  The audio HAL is composed of three different interfaces that you must implement:
-</p>
+<h2 id="implementing">Implementing the HAL</h2>
+
+<p>The audio HAL is composed of three different interfaces that you must implement:</p>
+
 <ul>
-  <li>
-    <code>hardware/libhardware/include/hardware/audio.h</code> - represents the main functions of
-    an audio device.
-  </li>
-  <li>
-    <code>hardware/libhardware/include/hardware/audio_policy.h</code> - represents the audio policy
-    manager, which handles things like audio routing and volume control policies.
-  </li>
-  <li>
-    <code>hardware/libhardware/include/hardware/audio_effect.h</code> - represents effects that can
-    be applied to audio such as downmixing, echo, or noise suppression.
-  </li>
+<li><code>hardware/libhardware/include/hardware/audio.h</code> - represents the main functions
+of an audio device.</li>
+<li><code>hardware/libhardware/include/hardware/audio_policy.h</code> - represents the audio policy
+manager, which handles things like audio routing and volume control policies.</li>
+<li><code>hardware/libhardware/include/hardware/audio_effect.h</code> - represents effects that can
+be applied to audio such as downmixing, echo, or noise suppression.</li>
 </ul>
-<p>See the implementation for the Galaxy Nexus at <code>device/samsung/tuna/audio</code> for an example.</p>
+
+<p>For an example, refer to the implementation for the Galaxy Nexus at
+<code>device/samsung/tuna/audio</code>.</p>
 
 <p>In addition to implementing the HAL, you need to create a
-  <code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio/audio_policy.conf</code> file
-  that declares the audio devices present on your product. For an example, see the file for
-  the Galaxy Nexus audio hardware in <code>device/samsung/tuna/audio/audio_policy.conf</code>. 
-Also, see
-  the <code>system/core/include/system/audio.h</code> and <code>system/core/include/system/audio_policy.h</code>
-   header files for a reference of the properties that you can define.
-</p>
-<h3 id="multichannel">Multi-channel support</h3>
-<p>If your hardware and driver supports multichannel audio via HDMI, you can output the audio stream
-  directly to the audio hardware. This bypasses the AudioFlinger mixer so it doesn't get downmixed to two channels. 
-  
-  <p>
-  The audio HAL must expose whether an output stream profile supports multichannel audio capabilities.
-  If the HAL exposes its capabilities, the default policy manager allows multichannel playback over 
-  HDMI.</p>
- <p>For more implementation details, see the
-<code>device/samsung/tuna/audio/audio_hw.c</code> in the Android 4.1 release.</p>
+<code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio/audio_policy.conf</code> file that
+declares the audio devices present on your product. For an example, see the file for the Galaxy
+Nexus audio hardware in <code>device/samsung/tuna/audio/audio_policy.conf</code>. Also, see the
+<code>system/core/include/system/audio.h</code> and
+<code>system/core/include/system/audio_policy.h</code> header files for a reference of the
+properties that you can define.</p>
 
-  <p>
-  To specify that your product contains a multichannel audio output, edit the <code>audio_policy.conf</code> file to describe the multichannel
-  output for your product. The following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the audio policy manager
-  queries the actual channel masks supported by the HDMI sink after connection. You can also specify a static channel mask like <code>AUDIO_CHANNEL_OUT_5POINT1</code>
-  </p>
+<h3 id="multichannel">Multi-channel support</h3>
+
+<p>If your hardware and driver support multichannel audio via HDMI, you can output the audio
+stream directly to the audio hardware. This bypasses the AudioFlinger mixer so the stream doesn't
+get downmixed to two channels.</p>
+
+<p>The audio HAL must expose whether an output stream profile supports multichannel audio
+capabilities. If the HAL exposes its capabilities, the default policy manager allows multichannel
+playback over HDMI.</p>
+
+<p>For more implementation details, see <code>device/samsung/tuna/audio/audio_hw.c</code> in the
+Android 4.1 release.</p>
+
+<p>To specify that your product contains a multichannel audio output, edit the
+<code>audio_policy.conf</code> file to describe the multichannel output for your product. The
+following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the
+audio policy manager queries the actual channel masks supported by the HDMI sink after connection.
+You can also specify a static channel mask like <code>AUDIO_CHANNEL_OUT_5POINT1</code>.</p>
+
 <pre>
 audio_hw_modules {
   primary {
     outputs {
         ...
-        hdmi {  
+        hdmi {
           sampling_rates 44100|48000
           channel_masks dynamic
           formats AUDIO_FORMAT_PCM_16_BIT
@@ -95,42 +90,38 @@
 }
 </pre>
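+
+<p>For instance, the static channel mask alternative mentioned above might look like the following
+(a hypothetical variant of the Galaxy Nexus entry):</p>
+
+<pre>
+        hdmi {
+          sampling_rates 44100|48000
+          channel_masks AUDIO_CHANNEL_OUT_5POINT1
+          formats AUDIO_FORMAT_PCM_16_BIT
+          ...
+        }
+</pre>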
 
-
-  <p>AudioFlinger's mixer downmixes the content to stereo
-    automatically when sent to an audio device that does not support multichannel audio.</p>
+<p>AudioFlinger's mixer downmixes the content to stereo automatically when sent to an audio device
+that does not support multichannel audio.</p>
 
 <h3 id="codecs">Media codecs</h3>
 
-<p>Ensure the audio codecs your hardware and drivers support are properly declared for your product. See
-  <a href="media.html#expose"> Exposing Codecs to the Framework</a> for information on how to do this.
-</p>
+<p>Ensure the audio codecs your hardware and drivers support are properly declared for your
+product. For details on declaring supported codecs, see <a href="media.html#expose">Exposing Codecs
+to the Framework</a>.</p>
 
-<h2 id="configuring">
-  Configuring the shared library
-</h2>
-<p>
-  You need to package the HAL implementation into a shared library and copy it to the
-  appropriate location by creating an <code>Android.mk</code> file:
-</p>
+<h2 id="configuring">Configuring the shared library</h2>
+
+<p>You need to package the HAL implementation into a shared library and copy it to the appropriate
+location by creating an <code>Android.mk</code> file:</p>
+
 <ol>
-  <li>Create a <code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio</code> directory
-  to contain your library's source files.
-  </li>
-  <li>Create an <code>Android.mk</code> file to build the shared library. Ensure that the
-  Makefile contains the following line:
+<li>Create a <code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio</code> directory to
+contain your library's source files.</li>
+<li>Create an <code>Android.mk</code> file to build the shared library. Ensure that the Makefile
+contains the following line:
 <pre>
 LOCAL_MODULE := audio.primary.&lt;device_name&gt;
 </pre>
-    <p>
-      Notice your library must be named <code>audio_primary.&lt;device_name&gt;.so</code> so
-      that Android can correctly load the library. The "<code>primary</code>" portion of this
-      filename indicates that this shared library is for the primary audio hardware located on the
-      device. The module names <code>audio.a2dp.&lt;device_name&gt;</code> and
-      <code>audio.usb.&lt;device_name&gt;</code> are also available for bluetooth and USB audio
-      interfaces. Here is an example of an <code>Android.mk</code> from the Galaxy
-      Nexus audio hardware:
-    </p>
-    <pre>
+
+<p>Note that your library must be named <code>audio.primary.&lt;device_name&gt;.so</code> so
+that Android can correctly load the library. The "<code>primary</code>" portion of this filename
+indicates that this shared library is for the primary audio hardware located on the device. The
+module names <code>audio.a2dp.&lt;device_name&gt;</code> and
+<code>audio.usb.&lt;device_name&gt;</code> are also available for Bluetooth and USB audio
+interfaces. Here is an example of an <code>Android.mk</code> from the Galaxy Nexus audio hardware:
+</p>
+
+<pre>
 LOCAL_PATH := $(call my-dir)
 
 include $(CLEAR_VARS)
@@ -147,59 +138,73 @@
 
 include $(BUILD_SHARED_LIBRARY)
 </pre>
-  </li>
-  <li>If your product supports low latency audio as specified by the Android CDD, copy the
-  corresponding XML feature file into your product. For example, in your product's
-   <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> 
-  Makefile:
-    <pre>
+
+</li>
+
+<li>If your product supports low latency audio as specified by the Android CDD, copy the
+corresponding XML feature file into your product. For example, in your product's
+<code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> Makefile:
+
+<pre>
 PRODUCT_COPY_FILES := ...
 
 PRODUCT_COPY_FILES += \
frameworks/native/data/etc/android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml \
 </pre>
-  </li>
- 
-  <li>Copy the <code>audio_policy.conf</code> file that you created earlier to the <code>system/etc/</code> directory
-  in your product's <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> 
-  Makefile. For example:
-    <pre>
+
+</li>
+
+<li>Copy the <code>audio_policy.conf</code> file that you created earlier to the
+<code>system/etc/</code> directory via your product's
+<code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> Makefile. For example:
+
+<pre>
 PRODUCT_COPY_FILES += \
         device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
 </pre>
-  </li>
-  <li>Declare the shared modules of your audio HAL that are required by your product in the product's
-    <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> Makefile. For example, the
-  Galaxy Nexus requires the primary and bluetooth audio HAL modules:
+
+</li>
+
+<li>Declare the shared modules of your audio HAL that are required by your product in the
+product's <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> Makefile. For
+example, the Galaxy Nexus requires the primary and bluetooth audio HAL modules:
+
 <pre>
 PRODUCT_PACKAGES += \
         audio.primary.tuna \
         audio.a2dp.default
 </pre>
-  </li>
+
+</li>
 </ol>
 
 <h2 id="preprocessing">Audio pre-processing effects</h2>
-<p>
-The Android platform provides audio effects on supported devices in the
-<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx</a>
-package, which is available for developers to access. For example, on the Nexus 10, the following pre-processing effects are supported: </p>
+
+<p>The Android platform provides audio effects on supported devices in the
+<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx
+</a> package, which is available for developers to access. For example, on the Nexus 10, the
+following pre-processing effects are supported:</p>
+
 <ul>
-  <li><a
-href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">Acoustic Echo Cancellation</a></li>
-  <li><a
-href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">Automatic Gain Control</a></li>
-  <li><a
-href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise Suppression</a></li>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">
+Acoustic Echo Cancellation</a></li>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">
+Automatic Gain Control</a></li>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">
+Noise Suppression</a></li>
 </ul>
 
 
-<p>Pre-processing effects are paired with the use case mode in which the pre-processing is requested. In Android
-app development, a use case is referred to as an <code>AudioSource</code>; and app developers
-request to use the <code>AudioSource</code> abstraction instead of the actual audio hardware device.
-The Android Audio Policy Manager maps an <code>AudioSource</code> to the actual hardware with <code>AudioPolicyManagerBase::getDeviceForInputSource(int 
-inputSource)</code>. The following sources are exposed to developers:
-</p>
+<p>Pre-processing effects are paired with the use case mode in which the pre-processing is
+requested. In Android app development, a use case is referred to as an <code>AudioSource</code>,
+and app developers request to use the <code>AudioSource</code> abstraction instead of the actual
+audio hardware device. The Android Audio Policy Manager maps an <code>AudioSource</code> to the
+actual hardware with <code>AudioPolicyManagerBase::getDeviceForInputSource(int inputSource)</code>.
+The following sources are exposed to developers:</p>
+
 <ul>
 <li><code>android.media.MediaRecorder.AudioSource.CAMCORDER</code></li>
 <li><code>android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION</code></li>
@@ -208,25 +213,26 @@
 <li><code>android.media.MediaRecorder.AudioSource.VOICE_UPLINK</code></li>
 <li><code>android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION</code></li>
 <li><code>android.media.MediaRecorder.AudioSource.MIC</code></li>
-<li><code>android.media.MediaRecorder.AudioSource.DEFAULT</code></li>
-</ul>
+<li><code>android.media.MediaRecorder.AudioSource.DEFAULT</code></li>
+</ul>
 
-<p>The default pre-processing effects applied for each <code>AudioSource</code> are
-specified in the <code>/system/etc/audio_effects.conf</code> file. To specify
-your own default effects for every <code>AudioSource</code>, create a <code>/system/vendor/etc/audio_effects.conf</code> file
-and specify the pre-processing effects to turn on. For an example, 
-see the implementation for the Nexus 10 in <code>device/samsung/manta/audio_effects.conf</code>. AudioEffect instances acquire and release a session when created and destroyed, enabling the effects (such as the Loudness Enhancer) to persist throughout the duration of the session.<p>
+<p>The default pre-processing effects applied for each <code>AudioSource</code> are specified in
+the <code>/system/etc/audio_effects.conf</code> file. To specify your own default effects for every
+<code>AudioSource</code>, create a <code>/system/vendor/etc/audio_effects.conf</code> file and
+specify the pre-processing effects to turn on. For an example, see the implementation for the Nexus
+10 in <code>device/samsung/manta/audio_effects.conf</code>. <code>AudioEffect</code> instances
+acquire and release a session when created and destroyed, enabling the effects (such as the
+Loudness Enhancer) to persist throughout the duration of the session.</p>
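+
+<p>An application can also attach a pre-processing effect to its own recording session explicitly
+through the public <code>android.media.audiofx</code> classes. A minimal sketch, assuming
+<code>recorder</code> is an existing <code>AudioRecord</code> instance:</p>
+
+<pre>
+// Attach a noise suppressor to the recording session; the effect lasts
+// until release() is called or the session is destroyed.
+if (NoiseSuppressor.isAvailable()) {
+    NoiseSuppressor suppressor = NoiseSuppressor.create(recorder.getAudioSessionId());
+    if (suppressor != null) {
+        suppressor.setEnabled(true);
+    }
+}
+</pre>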
 
-<p class="warning"><strong>Warning:</strong> For the <code>VOICE_RECOGNITION</code> use case, do not enable
-the noise suppression pre-processing effect. It should not be turned on by default when recording from this audio source,
-and you should not enable it in your own audio_effects.conf file. Turning on the effect by default will cause the device to fail
-the <a href="/compatibility/index.html">compatibility requirement</a> regardless of whether this was on by default due to 
-configuration file, or the audio HAL implementation's default behavior.</p>
+<p class="warning"><strong>Warning:</strong> For the <code>VOICE_RECOGNITION</code> use case, do
+not enable the noise suppression pre-processing effect. It should not be turned on by default when
+recording from this audio source, and you should not enable it in your own audio_effects.conf file.
+Turning on the effect by default will cause the device to fail the
+<a href="/compatibility/index.html">compatibility requirement</a>, regardless of whether it was on
+by default due to the configuration file or the audio HAL implementation's default behavior.</p>
 
-<p>The following example enables pre-processing for the VoIP <code>AudioSource</code> and Camcorder <code>AudioSource</code>.
-By declaring the <code>AudioSource</code> configuration in this manner, the
-framework will automatically request from the audio HAL the use of those
-effects.</p>
+<p>The following example enables pre-processing for the VoIP <code>AudioSource</code> and Camcorder
+<code>AudioSource</code>. By declaring the <code>AudioSource</code> configuration in this manner,
+the framework will automatically request from the audio HAL the use of those effects.</p>
 
 <pre>
 pre_processing {
@@ -241,8 +247,9 @@
 </pre>
 
 <h3 id="tuning">Source tuning</h3>
-<p>For <code>AudioSource</code> tuning, there are no explicit requirements on audio gain or audio processing
-with the exception of voice recognition (<code>VOICE_RECOGNITION</code>).</p>
+
+<p>For <code>AudioSource</code> tuning, there are no explicit requirements on audio gain or audio
+processing with the exception of voice recognition (<code>VOICE_RECOGNITION</code>).</p>
 
 <p>The following are the requirements for voice recognition:</p>
 
@@ -252,32 +259,38 @@
 <li>level tracks linearly from -18dB to +12dB relative to 90dB SPL</li>
 <li>THD < 1% (90dB SPL in 100 to 4000Hz range)</li>
 <li>8kHz sampling rate (anti-aliasing)</li>
-<li>Effects / pre-processing must be disabled by default</li>
+<li>Effects/pre-processing must be disabled by default</li>
 </ul>
 
 <p>Examples of tuning different effects for different sources are:</p>
 
 <ul>
-  <li>Noise Suppressor
-    <ul>
-      <li>Tuned for wind noise suppressor for <code>CAMCORDER</code></li>
-      <li>Tuned for stationary noise suppressor for <code>VOICE_COMMUNICATION</code></li>
-    </ul>
-  </li>
-  <li>Automatic Gain Control
-    <ul>
-      <li>Tuned for close-talk for <code>VOICE_COMMUNICATION</code> and main phone mic</li>
-      <li>Tuned for far-talk for <code>CAMCORDER</code></li>
-    </ul>
-  </li>
+<li>Noise Suppressor
+<ul>
+<li>Tuned for wind noise suppressor for <code>CAMCORDER</code></li>
+<li>Tuned for stationary noise suppressor for <code>VOICE_COMMUNICATION</code></li>
+</ul>
+</li>
+<li>Automatic Gain Control
+<ul>
+<li>Tuned for close-talk for <code>VOICE_COMMUNICATION</code> and main phone mic</li>
+<li>Tuned for far-talk for <code>CAMCORDER</code></li>
+</ul>
+</li>
 </ul>
 
 <h3 id="more">More information</h3>
-<p>For more information, see:</p>
-<ul>
-<li>Android documentation for <a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx 
-package</a>
 
-<li>Android documentation for <a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise Suppression audio effect</a></li>
+<p>For more information, see:</p>
+
+<ul>
+<li>Android documentation for
+<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">
+audiofx package</a></li>
+
+<li>Android documentation for
+<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">
+Noise Suppression audio effect</a></li>
+
 <li><code>device/samsung/manta/audio_effects.conf</code> file for the Nexus 10</li>
-</ul>
+</ul>
\ No newline at end of file
diff --git a/src/devices/audio_tv.jd b/src/devices/audio_tv.jd
new file mode 100644
index 0000000..4bcb55e
--- /dev/null
+++ b/src/devices/audio_tv.jd
@@ -0,0 +1,296 @@
+page.title=TV Audio
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>The TV Input Framework (TIF) manager works with the audio routing API to support flexible audio
+path changes. When a System on Chip (SoC) implements the TV hardware abstraction layer (HAL), each
+TV input (HDMI IN, Tuner, etc.) provides a <code>TvInputHardwareInfo</code> that specifies
+AudioPort information for the audio type and address.</p>
+
+<ul>
+<li><b>Physical</b> audio input/output devices have a corresponding AudioPort.</li>
+<li><b>Software</b> audio output/input streams are represented as AudioMixPort (child class of
+AudioPort).</li>
+</ul>
+
+<p>The TIF then uses AudioPort information for the audio routing API.</p>
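+
+<p>For example, platform code can distinguish the two kinds of ports when enumerating them. A
+minimal sketch, using the same (non-public) audio routing API as the test code later in this
+document and assuming <code>am</code> is an <code>AudioManager</code> instance:</p>
+
+<pre>
+ArrayList&lt;AudioPort&gt; audioPorts = new ArrayList&lt;AudioPort&gt;();
+am.listAudioPorts(audioPorts);
+for (AudioPort port : audioPorts) {
+    if (port instanceof AudioDevicePort) {
+        // Physical input/output device (e.g. HDMI IN, tuner, speaker)
+    } else if (port instanceof AudioMixPort) {
+        // Software output/input stream
+    }
+}
+</pre>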
+
+<p><img src="audio/images/ape_audio_tv_tif.png" alt="Android TV Input Framework (TIF)" />
+<p class="img-caption"><strong>Figure 1.</strong> TV Input Framework (TIF)</p>
+
+<h2 id="Requirements">Requirements</h2>
+
+<p>A SoC must implement the audio HAL with the following audio routing API support:</p>
+
+<table>
+<tbody>
+<tr>
+<th>Audio Ports</th>
+<td>
+<ul>
+<li>TV Audio Input has a corresponding audio source port implementation.</li>
+<li>TV Audio Output has a corresponding audio sink port implementation.</li>
+<li>Can create audio patch between any TV input audio port and any TV output audio port.</li>
+</ul>
+</td>
+</tr>
+<tr>
+<th>Default Input</th>
+<td>An AudioRecord created with the DEFAULT input source must seize the <i>virtual null input
+source</i> for AUDIO_DEVICE_IN_DEFAULT acquisition on Android TV.</td>
+</tr>
+<tr>
+<th>Device Loopback</th>
+<td>Requires support for an AUDIO_DEVICE_IN_LOOPBACK input that is a complete mix of all audio
+output of all TV outputs (11kHz 16-bit mono or 48kHz 16-bit mono). Used only for audio capture.
+</td>
+</tr>
+</tbody>
+</table>
+
+
+<h2 id="Audio Devices">TV audio devices</h2>
+
+<p>Android supports the following audio devices for TV audio input/output.</p>
+
+<h4>system/core/include/system/audio.h</h4>
+
+<pre>
+/* output devices */
+AUDIO_DEVICE_OUT_AUX_DIGITAL  = 0x400,
+AUDIO_DEVICE_OUT_HDMI   = AUDIO_DEVICE_OUT_AUX_DIGITAL,
+/* HDMI Audio Return Channel */
+AUDIO_DEVICE_OUT_HDMI_ARC   = 0x40000,
+/* S/PDIF out */
+AUDIO_DEVICE_OUT_SPDIF    = 0x80000,
+/* input devices */
+AUDIO_DEVICE_IN_AUX_DIGITAL   = AUDIO_DEVICE_BIT_IN | 0x20,
+AUDIO_DEVICE_IN_HDMI      = AUDIO_DEVICE_IN_AUX_DIGITAL,
+/* TV tuner input */
+AUDIO_DEVICE_IN_TV_TUNER    = AUDIO_DEVICE_BIT_IN | 0x4000,
+/* S/PDIF in */
+AUDIO_DEVICE_IN_SPDIF   = AUDIO_DEVICE_BIT_IN | 0x10000,
+AUDIO_DEVICE_IN_LOOPBACK    = AUDIO_DEVICE_BIT_IN | 0x40000,
+</pre>
+
+
+<h2 id="HAL extension">Audio HAL extension</h2>
+
+<p>The Audio HAL extension for the audio routing API is defined by the following:</p>
+
+<h4>system/core/include/system/audio.h</h4>
+
+<pre>
+/* audio port configuration structure used to specify a particular configuration of an audio port */
+struct audio_port_config {
+    audio_port_handle_t      id;           /* port unique ID */
+    audio_port_role_t        role;         /* sink or source */
+    audio_port_type_t        type;         /* device, mix ... */
+    unsigned int             config_mask;  /* e.g AUDIO_PORT_CONFIG_ALL */
+    unsigned int             sample_rate;  /* sampling rate in Hz */
+    audio_channel_mask_t     channel_mask; /* channel mask if applicable */
+    audio_format_t           format;       /* format if applicable */
+    struct audio_gain_config gain;         /* gain to apply if applicable */
+    union {
+        struct audio_port_config_device_ext  device;  /* device specific info */
+        struct audio_port_config_mix_ext     mix;     /* mix specific info */
+        struct audio_port_config_session_ext session; /* session specific info */
+    } ext;
+};
+struct audio_port {
+    audio_port_handle_t      id;                /* port unique ID */
+    audio_port_role_t        role;              /* sink or source */
+    audio_port_type_t        type;              /* device, mix ... */
+    unsigned int             num_sample_rates;  /* number of sampling rates in following array */
+    unsigned int             sample_rates[AUDIO_PORT_MAX_SAMPLING_RATES];
+    unsigned int             num_channel_masks; /* number of channel masks in following array */
+    audio_channel_mask_t     channel_masks[AUDIO_PORT_MAX_CHANNEL_MASKS];
+    unsigned int             num_formats;       /* number of formats in following array */
+    audio_format_t           formats[AUDIO_PORT_MAX_FORMATS];
+    unsigned int             num_gains;         /* number of gains in following array */
+    struct audio_gain        gains[AUDIO_PORT_MAX_GAINS];
+    struct audio_port_config active_config;     /* current audio port configuration */
+    union {
+        struct audio_port_device_ext  device;
+        struct audio_port_mix_ext     mix;
+        struct audio_port_session_ext session;
+    } ext;
+};
+</pre>
+
+<h4>hardware/libhardware/include/hardware/audio.h</h4>
+
+<pre>
+struct audio_hw_device {
+  :
+    /**
+     * Routing control
+     */
+
+    /* Creates an audio patch between several source and sink ports.
+     * The handle is allocated by the HAL and should be unique for this
+     * audio HAL module. */
+    int (*create_audio_patch)(struct audio_hw_device *dev,
+                               unsigned int num_sources,
+                               const struct audio_port_config *sources,
+                               unsigned int num_sinks,
+                               const struct audio_port_config *sinks,
+                               audio_patch_handle_t *handle);
+
+    /* Release an audio patch */
+    int (*release_audio_patch)(struct audio_hw_device *dev,
+                               audio_patch_handle_t handle);
+
+    /* Fills the list of supported attributes for a given audio port.
+     * As input, "port" contains the information (type, role, address etc...)
+     * needed by the HAL to identify the port.
+     * As output, "port" contains possible attributes (sampling rates, formats,
+     * channel masks, gain controllers...) for this port.
+     */
+    int (*get_audio_port)(struct audio_hw_device *dev,
+                          struct audio_port *port);
+
+    /* Set audio port configuration */
+    int (*set_audio_port_config)(struct audio_hw_device *dev,
+                         const struct audio_port_config *config);
+</pre>
+
+<h2 id="Testing">Testing DEVICE_IN_LOOPBACK</h2>
+
+<p>To test DEVICE_IN_LOOPBACK for TV monitoring, use the following test code. After running the
+test, the captured audio is saved to <code>/sdcard/record_loopback.raw</code>, where you can
+listen to it using <code>ffmpeg</code>.</p>
+
+<pre>
+&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_ROUTING" /&gt;
+&lt;uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /&gt;
+
+   AudioRecord mRecorder;
+   Handler mHandler = new Handler();
+   int mMinBufferSize = AudioRecord.getMinBufferSize(RECORD_SAMPLING_RATE,
+           AudioFormat.CHANNEL_IN_MONO,
+           AudioFormat.ENCODING_PCM_16BIT);
+   static final int RECORD_SAMPLING_RATE = 48000;
+   public void doCapture() {
+       mRecorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, RECORD_SAMPLING_RATE,
+               AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mMinBufferSize * 10);
+       AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
+       ArrayList&lt;AudioPort&gt; audioPorts = new ArrayList&lt;AudioPort&gt;();
+       am.listAudioPorts(audioPorts);
+       AudioPortConfig srcPortConfig = null;
+       AudioPortConfig sinkPortConfig = null;
+       for (AudioPort audioPort : audioPorts) {
+           if (srcPortConfig == null
+                   && audioPort.role() == AudioPort.ROLE_SOURCE
+                   && audioPort instanceof AudioDevicePort) {
+               AudioDevicePort audioDevicePort = (AudioDevicePort) audioPort;
+               if (audioDevicePort.type() == AudioManager.DEVICE_IN_LOOPBACK) {
+                   srcPortConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_IN_DEFAULT,
+                           AudioFormat.ENCODING_DEFAULT, null);
+                   Log.d(LOG_TAG, "Found loopback audio source port : " + audioPort);
+               }
+           }
+           else if (sinkPortConfig == null
+                   && audioPort.role() == AudioPort.ROLE_SINK
+                   && audioPort instanceof AudioMixPort) {
+               sinkPortConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_OUT_DEFAULT,
+                       AudioFormat.ENCODING_DEFAULT, null);
+               Log.d(LOG_TAG, "Found recorder audio mix port : " + audioPort);
+           }
+       }
+       if (srcPortConfig != null && sinkPortConfig != null) {
+           AudioPatch[] patches = new AudioPatch[] { null };
+           int status = am.createAudioPatch(
+                   patches,
+                   new AudioPortConfig[] { srcPortConfig },
+                   new AudioPortConfig[] { sinkPortConfig });
+           Log.d(LOG_TAG, "Result of createAudioPatch(): " + status);
+       }
+       mRecorder.startRecording();
+       processAudioData();
+       mRecorder.stop();
+       mRecorder.release();
+   }
+   private void processAudioData() {
+       OutputStream rawFileStream = null;
+       byte data[] = new byte[mMinBufferSize];
+       try {
+           rawFileStream = new BufferedOutputStream(
+                   new FileOutputStream(new File("/sdcard/record_loopback.raw")));
+       } catch (FileNotFoundException e) {
+           Log.d(LOG_TAG, "Can't open file.", e);
+       }
+       long startTimeMs = System.currentTimeMillis();
+       while (System.currentTimeMillis() - startTimeMs &lt; 5000) {
+           int nbytes = mRecorder.read(data, 0, mMinBufferSize);
+           if (nbytes &lt;= 0) {
+               continue;
+           }
+           try {
+               rawFileStream.write(data);
+           } catch (IOException e) {
+               Log.e(LOG_TAG, "Error on writing raw file.", e);
+           }
+       }
+       try {
+           rawFileStream.close();
+       } catch (IOException e) {
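+           // Ignore errors when closing the stream in this test code.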
+       }
+       Log.d(LOG_TAG, "Exit audio recording.");
+   }
+</pre>
+
+<p>Locate the captured audio file in <code>/sdcard/record_loopback.raw</code> and listen to it
+using <code>ffmpeg</code>:</p>
+
+<pre>
+adb pull /sdcard/record_loopback.raw
+ffmpeg -f s16le -ar 48k -ac 1 -i record_loopback.raw record_loopback.wav
+ffplay record_loopback.wav
+</pre>
+
+<h2 id="Use cases">Use cases</h2>
+
+<p>This section includes common use cases for TV audio.</p>
+
+<h3>TV tuner with speaker output</h3>
+
+<p>When a TV tuner becomes active, the audio routing API creates an audio patch between the tuner
+and the default output (e.g. the speaker). The tuner output does not require decoding, but the
+final audio output is mixed with the software output_stream.</p>
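+
+<p>A minimal sketch of such a patch, using the same routing API as the test code above;
+<code>tunerConfig</code> and <code>speakerConfig</code> are assumed to be
+<code>AudioPortConfig</code> objects for the tuner source and speaker sink:</p>
+
+<pre>
+AudioPatch[] patches = new AudioPatch[] { null };
+int status = am.createAudioPatch(
+        patches,
+        new AudioPortConfig[] { tunerConfig },    // AUDIO_DEVICE_IN_TV_TUNER source
+        new AudioPortConfig[] { speakerConfig }); // default output (speaker) sink
+</pre>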
+
+<p><img src="audio/images/ape_audio_tv_tuner.png" alt="Android TV Tuner Audio Patch" />
+<p class="img-caption">
+<strong>Figure 2.</strong> Audio Patch for TV tuner with speaker output.</p>
+
+
+<h3>HDMI OUT during live TV</h3>
+
+<p>A user is watching live TV, then switches to the HDMI audio output
+(Intent.ACTION_HDMI_AUDIO_PLUG). The output device of all output_streams changes to the HDMI_OUT
+port, and the TIF manager changes the sink port of the existing tuner audio patch to the HDMI_OUT
+port.</p>
+
+<p><img src="audio/images/ape_audio_tv_hdmi_tuner.png" alt="Android TV HDMI-OUT Audio Patch" />
+<p class="img-caption">
+<strong>Figure 3.</strong> Audio Patch for HDMI OUT from live TV.</p>
\ No newline at end of file