Docs: Beginning to split Audio content
Bug: https://b.corp.google.com/issue?id=9391856
Staging location: http://claym.mtv.corp.google.com:8088/devices/audio.html

Change-Id: I37e2bfe7d2de5f02228bf23de65e1374a1117beb
diff --git a/src/devices/audio.jd b/src/devices/audio.jd
index 1192830..9f6338d 100644
--- a/src/devices/audio.jd
+++ b/src/devices/audio.jd
@@ -101,257 +101,8 @@
 </p>
   </dd>
 </dl>
-<h2 id="implementing">
-  Implementing the HAL
-</h2>
+
 <p>
-  The audio HAL is composed of three different interfaces that you must implement:
+   See the other pages in the Audio section for implementation
+   instructions and ways to improve performance.
 </p>
-<ul>
-  <li>
-    <code>hardware/libhardware/include/hardware/audio.h</code> - represents the main functions of
-    an audio device.
-  </li>
-  <li>
-    <code>hardware/libhardware/include/hardware/audio_policy.h</code> - represents the audio policy
-    manager, which handles things like audio routing and volume control policies.
-  </li>
-  <li>
-    <code>hardware/libhardware/include/hardware/audio_effect.h</code> - represents effects that can
-    be applied to audio such as downmixing, echo, or noise suppression.
-  </li>
-</ul>
-<p>See the implementation for the Galaxy Nexus at <code>device/samsung/tuna/audio</code> for an example.</p>
-
-<p>In addition to implementing the HAL, you need to create a
-  <code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio/audio_policy.conf</code> file
-  that declares the audio devices present on your product. For an example, see the file for
-  the Galaxy Nexus audio hardware in <code>device/samsung/tuna/audio/audio_policy.conf</code>. 
-Also, see
-  the <code>system/core/include/system/audio.h</code> and <code>system/core/include/system/audio_policy.h</code>
-   header files for a reference of the properties that you can define.
-</p>
-<h3 id="multichannel">Multi-channel support</h3>
-<p>If your hardware and driver supports multichannel audio via HDMI, you can output the audio stream
-  directly to the audio hardware. This bypasses the AudioFlinger mixer so it doesn't get downmixed to two channels. 
-  
-  <p>
-  The audio HAL must expose whether an output stream profile supports multichannel audio capabilities.
-  If the HAL exposes its capabilities, the default policy manager allows multichannel playback over 
-  HDMI.</p>
- <p>For more implementation details, see the
-<code>device/samsung/tuna/audio/audio_hw.c</code> in the Android 4.1 release.</p>
-
-  <p>
-  To specify that your product contains a multichannel audio output, edit the <code>audio_policy.conf</code> file to describe the multichannel
-  output for your product. The following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the audio policy manager
-  queries the actual channel masks supported by the HDMI sink after connection. You can also specify a static channel mask like <code>AUDIO_CHANNEL_OUT_5POINT1</code>
-  </p>
-<pre>
-audio_hw_modules {
-  primary {
-    outputs {
-        ...
-        hdmi {  
-          sampling_rates 44100|48000
-          channel_masks dynamic
-          formats AUDIO_FORMAT_PCM_16_BIT
-          devices AUDIO_DEVICE_OUT_AUX_DIGITAL
-          flags AUDIO_OUTPUT_FLAG_DIRECT
-        }
-        ...
-    }
-    ...
-  }
-  ...
-}
-</pre>
-
-
-  <p>AudioFlinger's mixer downmixes the content to stereo
-    automatically when sent to an audio device that does not support multichannel audio.</p>
-</p>
-
-<h3 id="codecs">Media codecs</h3>
-
-<p>Ensure the audio codecs your hardware and drivers support are properly declared for your product. See
-  <a href="media.html#expose"> Exposing Codecs to the Framework</a> for information on how to do this.
-</p>
-
-<h2 id="configuring">
-  Configuring the shared library
-</h2>
-<p>
-  You need to package the HAL implementation into a shared library and copy it to the
-  appropriate location by creating an <code>Android.mk</code> file:
-</p>
-<ol>
-  <li>Create a <code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio</code> directory
-  to contain your library's source files.
-  </li>
-  <li>Create an <code>Android.mk</code> file to build the shared library. Ensure that the
-  Makefile contains the following line:
-<pre>
-LOCAL_MODULE := audio.primary.&lt;device_name&gt;
-</pre>
-    <p>
-      Notice your library must be named <code>audio_primary.&lt;device_name&gt;.so</code> so
-      that Android can correctly load the library. The "<code>primary</code>" portion of this
-      filename indicates that this shared library is for the primary audio hardware located on the
-      device. The module names <code>audio.a2dp.&lt;device_name&gt;</code> and
-      <code>audio.usb.&lt;device_name&gt;</code> are also available for bluetooth and USB audio
-      interfaces. Here is an example of an <code>Android.mk</code> from the Galaxy
-      Nexus audio hardware:
-    </p>
-    <pre>
-LOCAL_PATH := $(call my-dir)
-
-include $(CLEAR_VARS)
-
-LOCAL_MODULE := audio.primary.tuna
-LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw
-LOCAL_SRC_FILES := audio_hw.c ril_interface.c
-LOCAL_C_INCLUDES += \
-        external/tinyalsa/include \
-        $(call include-path-for, audio-utils) \
-        $(call include-path-for, audio-effects)
-LOCAL_SHARED_LIBRARIES := liblog libcutils libtinyalsa libaudioutils libdl
-LOCAL_MODULE_TAGS := optional
-
-include $(BUILD_SHARED_LIBRARY)
-</pre>
-  </li>
-  <li>If your product supports low latency audio as specified by the Android CDD, copy the
-  corresponding XML feature file into your product. For example, in your product's
-   <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> 
-  Makefile:
-    <pre>
-PRODUCT_COPY_FILES := ...
-
-PRODUCT_COPY_FILES += \
-frameworks/native/data/etc/android.android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml \
-</pre>
-  </li>
- 
-  <li>Copy the <code>audio_policy.conf</code> file that you created earlier to the <code>system/etc/</code> directory
-  in your product's <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> 
-  Makefile. For example:
-    <pre>
-PRODUCT_COPY_FILES += \
-        device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
-</pre>
-  </li>
-  <li>Declare the shared modules of your audio HAL that are required by your product in the product's
-    <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> Makefile. For example, the
-  Galaxy Nexus requires the primary and bluetooth audio HAL modules:
-<pre>
-PRODUCT_PACKAGES += \
-        audio.primary.tuna \
-        audio.a2dp.default
-</pre>
-  </li>
-</ol>
-
-<h2 id="preprocessing">Audio pre-processing effects</h2>
-<p>
-The Android platform provides audio effects on supported devices in the
-<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx</a>
-package, which is available for developers to access. For example, on the Nexus 10, the following pre-processing effects are supported: </p>
-<ul>
-  <li><a
-href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">Acoustic Echo Cancellation</a></li>
-  <li><a
-href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">Automatic Gain Control</a></li>
-  <li><a
-href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise Suppression</a></li>
-</ul>
-</p>
-
-
-<p>Pre-processing effects are always paired with the use case mode in which the pre-processing is requested. In Android
-app development, a use case is referred to as an <code>AudioSource</code>; and app developers
-request to use the <code>AudioSource</code> abstraction instead of the actual audio hardware device.
-The Android Audio Policy Manager maps an <code>AudioSource</code> to the actual hardware with <code>AudioPolicyManagerBase::getDeviceForInputSource(int 
-inputSource)</code>. The following sources are exposed to developers:
-</p>
-<ul>
-<code><li>android.media.MediaRecorder.AudioSource.CAMCORDER</li></code>
-<code><li>android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION</li></code>
-<code><li>android.media.MediaRecorder.AudioSource.VOICE_CALL</li></code>
-<code><li>android.media.MediaRecorder.AudioSource.VOICE_DOWNLINK</li></code>
-<code><li>android.media.MediaRecorder.AudioSource.VOICE_UPLINK</li></code>
-<code><li>android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION</li></code>
-<code><li>android.media.MediaRecorder.AudioSource.MIC</li></code>
-<code><li>android.media.MediaRecorder.AudioSource.DEFAULT</li></code>
-</ul>
-
-<p>The default pre-processing effects that are applied for each <code>AudioSource</code> are
-specified in the <code>/system/etc/audio_effects.conf</code> file. To specify
-your own default effects for every <code>AudioSource</code>, create a <code>/system/vendor/etc/audio_effects.conf</code> file
-and specify any pre-processing effects that you need to turn on. For an example, 
-see the implementation for the Nexus 10 in <code>device/samsung/manta/audio_effects.conf</code></p>
-
-<p class="warning"><strong>Warning:</strong> For the <code>VOICE_RECOGNITION</code> use case, do not enable
-the noise suppression pre-processing effect. It should not be turned on by default when recording from this audio source,
-and you should not enable it in your own audio_effects.conf file. Turning on the effect by default will cause the device to fail
-the <a href="/compatibility/index.html">compatibility requirement</a> regardless of whether this was on by default due to 
-configuration file, or the audio HAL implementation's default behavior.</p>
-
-<p>The following example enables pre-processing for the VoIP <code>AudioSource</code> and Camcorder <code>AudioSource</code>.
-By declaring the <code>AudioSource</code> configuration in this manner, the
-framework will automatically request from the audio HAL the use of those
-effects.</p>
-
-<pre>
-pre_processing {
-   voice_communication {
-       aec {}
-       ns {}
-   }
-   camcorder {
-       agc {}
-   }
-}
-</pre>
-
-<h3 id="tuning">Source tuning</h3>
-<p>For <code>AudioSource</code> tuning, there are no explicit requirements on audio gain or audio processing
-with the exception of voice recognition (<code>VOICE_RECOGNITION</code>).</p>
-
-<p>The following are the requirements for voice recognition:</p>
-
-<ul>
-<li>"flat" frequency response (+/- 3dB) from 100Hz to 4kHz</li>
-<li>close-talk config: 90dB SPL reads RMS of 2500 (16bit samples)</li>
-<li>level tracks linearly from -18dB to +12dB relative to 90dB SPL</li>
-<li>THD < 1% (90dB SPL in 100 to 4000Hz range)</li>
-<li>8kHz sampling rate (anti-aliasing)</li>
-<li>Effects / pre-processing must be disabled by default</li>
-</ul>
-
-<p>Examples of tuning different effects for different sources are:</p>
-
-<ul>
-  <li>Noise Suppressor
-    <ul>
-      <li>Tuned for wind noise suppressor for <code>CAMCORDER</code></li>
-      <li>Tuned for stationary noise suppressor for <code>VOICE_COMMUNICATION</code></li>
-    </ul>
-  </li>
-  <li>Automatic Gain Control
-    <ul>
-      <li>Tuned for close-talk for <code>VOICE_COMMUNICATION</code> and main phone mic</li>
-      <li>Tuned for far-talk for <code>CAMCORDER</code></li>
-    </ul>
-  </li>
-</ul>
-
-<h3 id="more">More information</h3>
-<p>For more information, see:</p>
-<ul>
-<li>Android documentation for <a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx 
-package</a>
-
-<li>Android documentation for <a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise Suppression audio effect</a></li>
-<li><code>device/samsung/manta/audio_effects.conf</code> file for the Nexus 10</li>
-</ul>
diff --git a/src/devices/audio_implement.jd b/src/devices/audio_implement.jd
new file mode 100644
index 0000000..2007b2c
--- /dev/null
+++ b/src/devices/audio_implement.jd
@@ -0,0 +1,285 @@
+page.title=Audio
+@jd:body
+
+<!--
+    Copyright 2010 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+  This page explains how to implement the audio Hardware Abstraction Layer (HAL)
+  and configure the shared library.
+</p>
+
+<h2 id="implementing">
+  Implementing the HAL
+</h2>
+<p>
+  The audio HAL is composed of three different interfaces that you must implement:
+</p>
+<ul>
+  <li>
+    <code>hardware/libhardware/include/hardware/audio.h</code> - represents the main functions of
+    an audio device.
+  </li>
+  <li>
+    <code>hardware/libhardware/include/hardware/audio_policy.h</code> - represents the audio policy
+    manager, which handles things like audio routing and volume control policies.
+  </li>
+  <li>
+    <code>hardware/libhardware/include/hardware/audio_effect.h</code> - represents effects that can
+    be applied to audio, such as downmixing, echo cancellation, or noise suppression.
+  </li>
+</ul>
+<p>See the implementation for the Galaxy Nexus at <code>device/samsung/tuna/audio</code> for an example.</p>
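+<p>As a rough illustration, the following is a minimal, hypothetical skeleton of the
+  <code>audio.h</code> entry point that such a library exports. Structure and constant
+  names follow the Android 4.1-era headers; the stream and device callbacks are elided
+  and must be filled in for a real device:</p>
+<pre>
+/* Illustrative skeleton only; not a working HAL implementation. */
+#include &lt;errno.h&gt;
+#include &lt;stdlib.h&gt;
+#include &lt;string.h&gt;
+
+#include &lt;hardware/hardware.h&gt;
+#include &lt;hardware/audio.h&gt;
+
+static int adev_close(hw_device_t *device)
+{
+    free(device);
+    return 0;
+}
+
+static int adev_open(const hw_module_t *module, const char *name,
+                     hw_device_t **device)
+{
+    struct audio_hw_device *adev;
+
+    if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0)
+        return -EINVAL;
+
+    adev = calloc(1, sizeof(*adev));
+    if (adev == NULL)
+        return -ENOMEM;
+
+    adev->common.tag = HARDWARE_DEVICE_TAG;
+    adev->common.version = 0;
+    adev->common.module = (struct hw_module_t *) module;
+    adev->common.close = adev_close;
+
+    /* Assign the remaining audio_hw_device function pointers here,
+       e.g. open_output_stream, open_input_stream, set_parameters. */
+
+    *device = &amp;adev->common;
+    return 0;
+}
+
+static struct hw_module_methods_t hal_module_methods = {
+    .open = adev_open,
+};
+
+struct audio_module HAL_MODULE_INFO_SYM = {
+    .common = {
+        .tag = HARDWARE_MODULE_TAG,
+        .version_major = 1,
+        .version_minor = 0,
+        .id = AUDIO_HARDWARE_MODULE_ID,
+        .name = "Example audio HW HAL",
+        .author = "Example, Inc.",
+        .methods = &amp;hal_module_methods,
+    },
+};
+</pre>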
+
+<p>In addition to implementing the HAL, you need to create a
+  <code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio/audio_policy.conf</code> file
+  that declares the audio devices present on your product. For an example, see the file for
+  the Galaxy Nexus audio hardware in <code>device/samsung/tuna/audio/audio_policy.conf</code>.
+  Also see the <code>system/core/include/system/audio.h</code> and
+  <code>system/core/include/system/audio_policy.h</code> header files for a reference of the
+  properties that you can define.
+</p>
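+<p>For orientation, a minimal <code>audio_policy.conf</code> has the following shape.
+  This is a sketch only; the devices, sampling rates, channel masks, and formats shown
+  here are placeholders that must match what your hardware actually supports:</p>
+<pre>
+# Illustrative sketch; all values are placeholders.
+global_configuration {
+  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
+  default_output_device AUDIO_DEVICE_OUT_SPEAKER
+  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
+}
+
+audio_hw_modules {
+  primary {
+    outputs {
+      primary {
+        sampling_rates 44100
+        channel_masks AUDIO_CHANNEL_OUT_STEREO
+        formats AUDIO_FORMAT_PCM_16_BIT
+        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADPHONE
+        flags AUDIO_OUTPUT_FLAG_PRIMARY
+      }
+    }
+    inputs {
+      primary {
+        sampling_rates 8000|16000|44100
+        channel_masks AUDIO_CHANNEL_IN_MONO
+        formats AUDIO_FORMAT_PCM_16_BIT
+        devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_WIRED_HEADSET
+      }
+    }
+  }
+}
+</pre>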
+<h3 id="multichannel">Multi-channel support</h3>
+<p>If your hardware and driver support multichannel audio via HDMI, you can output the audio stream
+  directly to the audio hardware. This bypasses the AudioFlinger mixer so the stream doesn't get
+  downmixed to two channels.</p>
+
+<p>
+  The audio HAL must expose whether an output stream profile supports multichannel audio capabilities.
+  If the HAL exposes its capabilities, the default policy manager allows multichannel playback over
+  HDMI.</p>
+<p>For more implementation details, see
+<code>device/samsung/tuna/audio/audio_hw.c</code> in the Android 4.1 release.</p>
+
+  <p>
+  To specify that your product contains a multichannel audio output, edit the <code>audio_policy.conf</code> file to describe the multichannel
+  output for your product. The following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the audio policy manager
+  queries the actual channel masks supported by the HDMI sink after connection. You can also specify a static channel mask such as <code>AUDIO_CHANNEL_OUT_5POINT1</code>.
+  </p>
+<pre>
+audio_hw_modules {
+  primary {
+    outputs {
+        ...
+        hdmi {  
+          sampling_rates 44100|48000
+          channel_masks dynamic
+          formats AUDIO_FORMAT_PCM_16_BIT
+          devices AUDIO_DEVICE_OUT_AUX_DIGITAL
+          flags AUDIO_OUTPUT_FLAG_DIRECT
+        }
+        ...
+    }
+    ...
+  }
+  ...
+}
+</pre>
+
+
+  <p>AudioFlinger's mixer automatically downmixes the content to stereo
+    when it is sent to an audio device that does not support multichannel audio.</p>
+
+<h3 id="codecs">Media codecs</h3>
+
+<p>Ensure the audio codecs your hardware and drivers support are properly declared for your product. See
+  <a href="media.html#expose"> Exposing Codecs to the Framework</a> for information on how to do this.
+</p>
+
+<h2 id="configuring">
+  Configuring the shared library
+</h2>
+<p>
+  You need to package the HAL implementation into a shared library and install it to the
+  appropriate location at build time by creating an <code>Android.mk</code> file:
+</p>
+<ol>
+  <li>Create a <code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio</code> directory
+  to contain your library's source files.
+  </li>
+  <li>Create an <code>Android.mk</code> file to build the shared library. Ensure that the
+  Makefile contains the following line:
+<pre>
+LOCAL_MODULE := audio.primary.&lt;device_name&gt;
+</pre>
+    <p>
+      Note that your library must be named <code>audio.primary.&lt;device_name&gt;.so</code> so
+      that Android can correctly load the library. The "<code>primary</code>" portion of this
+      filename indicates that this shared library is for the primary audio hardware located on the
+      device. The module names <code>audio.a2dp.&lt;device_name&gt;</code> and
+      <code>audio.usb.&lt;device_name&gt;</code> are also available for Bluetooth and USB audio
+      interfaces. Here is an example of an <code>Android.mk</code> from the Galaxy
+      Nexus audio hardware:
+    </p>
+    <pre>
+LOCAL_PATH := $(call my-dir)
+
+include $(CLEAR_VARS)
+
+LOCAL_MODULE := audio.primary.tuna
+LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw
+LOCAL_SRC_FILES := audio_hw.c ril_interface.c
+LOCAL_C_INCLUDES += \
+        external/tinyalsa/include \
+        $(call include-path-for, audio-utils) \
+        $(call include-path-for, audio-effects)
+LOCAL_SHARED_LIBRARIES := liblog libcutils libtinyalsa libaudioutils libdl
+LOCAL_MODULE_TAGS := optional
+
+include $(BUILD_SHARED_LIBRARY)
+</pre>
+  </li>
+  <li>If your product supports low latency audio as specified by the Android CDD, copy the
+  corresponding XML feature file into your product. For example, in your product's
+   <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code>
+  Makefile:
+    <pre>
+PRODUCT_COPY_FILES := ...
+
+PRODUCT_COPY_FILES += \
+frameworks/native/data/etc/android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml
+</pre>
+  </li>
+ 
+  <li>Copy the <code>audio_policy.conf</code> file that you created earlier to the <code>system/etc/</code> directory
+  with a rule in your product's <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code>
+  Makefile. For example:
+    <pre>
+PRODUCT_COPY_FILES += \
+        device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
+</pre>
+  </li>
+  <li>Declare the shared modules of your audio HAL that are required by your product in the product's
+    <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> Makefile. For example, the
+  Galaxy Nexus requires the primary and Bluetooth audio HAL modules:
+<pre>
+PRODUCT_PACKAGES += \
+        audio.primary.tuna \
+        audio.a2dp.default
+</pre>
+  </li>
+</ol>
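+<p>With the example <code>Android.mk</code> shown above, the build installs the library to
+<code>/system/lib/hw/audio.primary.tuna.so</code>, which is where Android looks for
+hardware module libraries at load time.</p>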
+
+<h2 id="preprocessing">Audio pre-processing effects</h2>
+<p>
+The Android platform provides audio effects on supported devices in the
+<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx</a>
+package, which developers can access. For example, on the Nexus 10, the following pre-processing effects are supported:</p>
+<ul>
+  <li><a
+href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">Acoustic Echo Cancellation</a></li>
+  <li><a
+href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">Automatic Gain Control</a></li>
+  <li><a
+href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise Suppression</a></li>
+</ul>
+
+
+<p>Pre-processing effects are always paired with the use case mode in which the pre-processing is requested. In Android
+app development, a use case is referred to as an <code>AudioSource</code>, and app developers
+request the <code>AudioSource</code> abstraction instead of the actual audio hardware device.
+The Android Audio Policy Manager maps an <code>AudioSource</code> to the actual hardware with
+<code>AudioPolicyManagerBase::getDeviceForInputSource(int inputSource)</code>. The following sources are exposed to developers:
+</p>
+<ul>
+<li><code>android.media.MediaRecorder.AudioSource.CAMCORDER</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_CALL</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_DOWNLINK</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_UPLINK</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.MIC</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.DEFAULT</code></li>
+</ul>
+
+<p>The default pre-processing effects that are applied for each <code>AudioSource</code> are
+specified in the <code>/system/etc/audio_effects.conf</code> file. To specify
+your own default effects for every <code>AudioSource</code>, create a <code>/system/vendor/etc/audio_effects.conf</code> file
+and specify any pre-processing effects that you need to turn on. For an example,
+see the implementation for the Nexus 10 in <code>device/samsung/manta/audio_effects.conf</code>.</p>
+
+<p class="warning"><strong>Warning:</strong> For the <code>VOICE_RECOGNITION</code> use case, do not enable
+the noise suppression pre-processing effect. It should not be turned on by default when recording from this audio source,
+and you should not enable it in your own audio_effects.conf file. Turning on the effect by default will cause the device to fail
+the <a href="/compatibility/index.html">compatibility requirement</a> regardless of whether this was on by default due to 
+configuration file, or the audio HAL implementation's default behavior.</p>
+
+<p>The following example enables pre-processing for the VoIP <code>AudioSource</code> and Camcorder <code>AudioSource</code>.
+By declaring the <code>AudioSource</code> configuration in this manner, the
+framework automatically requests those effects from the audio HAL.</p>
+
+<pre>
+pre_processing {
+   voice_communication {
+       aec {}
+       ns {}
+   }
+   camcorder {
+       agc {}
+   }
+}
+</pre>
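+<p>Note that the effect names referenced in a <code>pre_processing</code> block (such as
+<code>aec</code>, <code>ns</code>, and <code>agc</code>) must be declared elsewhere in the same
+<code>audio_effects.conf</code> file, together with the library that implements them. The
+following sketch shows the shape of those declarations; the library path and UUIDs are
+placeholders that must match your effect implementation:</p>
+<pre>
+# Sketch only; the path and UUIDs below are placeholders.
+libraries {
+  pre_processing {
+    path /system/lib/soundfx/libaudiopreprocessing.so
+  }
+}
+
+effects {
+  aec {
+    library pre_processing
+    uuid &lt;uuid-of-your-aec-implementation&gt;
+  }
+  ns {
+    library pre_processing
+    uuid &lt;uuid-of-your-ns-implementation&gt;
+  }
+  agc {
+    library pre_processing
+    uuid &lt;uuid-of-your-agc-implementation&gt;
+  }
+}
+</pre>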
+
+<h3 id="tuning">Source tuning</h3>
+<p>For <code>AudioSource</code> tuning, there are no explicit requirements on audio gain or audio processing
+with the exception of voice recognition (<code>VOICE_RECOGNITION</code>).</p>
+
+<p>The following are the requirements for voice recognition:</p>
+
+<ul>
+<li>"flat" frequency response (+/- 3 dB) from 100 Hz to 4 kHz</li>
+<li>close-talk config: 90 dB SPL reads RMS of 2500 (16-bit samples)</li>
+<li>level tracks linearly from -18 dB to +12 dB relative to 90 dB SPL</li>
+<li>THD &lt; 1% (90 dB SPL in 100 to 4000 Hz range)</li>
+<li>8 kHz sampling rate (anti-aliasing)</li>
+<li>Effects and pre-processing must be disabled by default</li>
+</ul>
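+<p>As a worked example of the level-tracking requirement (the input levels here are
+illustrative): if a 90 dB SPL close-talk input reads an RMS of 2500, then an input at
++12 dB relative to that should read approximately 2500 &times; 10<sup>12/20</sup> &asymp; 9950,
+and an input at -18 dB should read approximately 2500 &times; 10<sup>-18/20</sup> &asymp; 315.</p>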
+
+<p>Examples of tuning different effects for different sources are:</p>
+
+<ul>
+  <li>Noise Suppressor
+    <ul>
+      <li>Tuned for wind noise suppression for <code>CAMCORDER</code></li>
+      <li>Tuned for stationary noise suppression for <code>VOICE_COMMUNICATION</code></li>
+    </ul>
+  </li>
+  <li>Automatic Gain Control
+    <ul>
+      <li>Tuned for close-talk for <code>VOICE_COMMUNICATION</code> and main phone mic</li>
+      <li>Tuned for far-talk for <code>CAMCORDER</code></li>
+    </ul>
+  </li>
+</ul>
+
+<h3 id="more">More information</h3>
+<p>For more information, see:</p>
+<ul>
+<li>Android documentation for the <a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx
+package</a></li>
+<li>Android documentation for <a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise Suppression audio effect</a></li>
+<li><code>device/samsung/manta/audio_effects.conf</code> file for the Nexus 10</li>
+</ul>
diff --git a/src/devices/audio_latency.jd b/src/devices/audio_latency.jd
index 2a1d2d7..2d3623e 100644
--- a/src/devices/audio_latency.jd
+++ b/src/devices/audio_latency.jd
@@ -32,10 +32,8 @@
   understanding of Android's audio latency-reduction efforts.
 </p>
 
-<h2 id="contributors">Contributors to Latency</h2>
-
 <p>
-  This section focuses on the contributors to output latency,
+  This page focuses on the contributors to output latency,
   but a similar discussion applies to input latency.
 </p>
 <p>
@@ -144,168 +142,3 @@
   they are bounded.
 </p>
 
-
-
-<h2 id="measuringOutput">Measuring Output Latency</h2>
-
-<p>
-  There are several techniques available to measure output latency,
-  with varying degrees of accuracy and ease of running, described below. Also
-see the <a href="testing_circuit.html">Testing circuit</a> for an example test environment.
-</p>
-
-<h3>LED and oscilloscope test</h3>
-<p>
-This test measures latency in relation to the device's LED indicator.
-If your production device does not have an LED, you can install the
-  LED on a prototype form factor device. For even better accuracy
-  on prototype devices with exposed circuity, connect one
-  oscilloscope probe to the LED directly to bypass the light
-  sensor latency.
-  </p>
-
-<p>
-  If you cannot install an LED on either your production or prototype device,
-  try the following workarounds:
-</p>
-
-<ul>
-  <li>Use a General Purpose Input/Output (GPIO) pin for the same purpose.</li>
-  <li>Use JTAG or another debugging port.</li>
-  <li>Use the screen backlight. This might be risky as the
-  backlight may have a non-neglible latency, and can contribute to
-  an inaccurate latency reading.
-  </li>
-</ul>
-
-<p>To conduct this test:</p>
-
-<ol>
-  <li>Run an app that periodically pulses the LED at
-  the same time it outputs audio. 
-
-  <p class="note"><b>Note:</b> To get useful results, it is crucial to use the correct
-  APIs in the test app so that you're exercising the fast audio output path.
-  See <a href="latency_design.html">Design For Reduced Latency</a> for
-  background.
-  </p>
-  </li>
-  <li>Place a light sensor next to the LED.</li>
-  <li>Connect the probes of a dual-channel oscilloscope to both the wired headphone
-  jack (line output) and light sensor.</li>
-  <li>Use the oscilloscope to measure
-  the time difference between observing the line output signal versus the light
-  sensor signal.</li>
-</ol>
-
-  <p>The difference in time is the approximate audio output latency,
-  assuming that the LED latency and light sensor latency are both zero.
-  Typically, the LED and light sensor each have a relatively low latency
-  on the order of one millisecond or less, which is sufficiently low enough
-  to ignore.</p>
-
-<h3>Larsen test</h3>
-<p>
-  One of the easiest latency tests is an audio feedback
-  (Larsen effect) test. This provides a crude measure of combined output
-  and input latency by timing an impulse response loop. This test is not very useful
-  by itself because of the nature of the test, but it can be useful for calibrating 
-  other tests</p>
-
-<p>To conduct this test:</p>
-<ol>
-  <li>Run an app that captures audio from the microphone and immediately plays the
-  captured data back over the speaker.</li>
-  <li>Create a sound externally,
-  such as tapping a pencil by the microphone. This noise generates a feedback loop.</li>
-  <li>Measure the time between feedback pulses to get the sum of the output latency, input latency, and application overhead.</li>
-</ol>
-
-  <p>This method does not break down the
-  component times, which is important when the output latency
-  and input latency are independent. So this method is not recommended for measuring output latency, but might be useful
-  to help measure output latency.</p>
-
-<h2 id="measuringInput">Measuring Input Latency</h2>
-
-<p>
-  Input latency is more difficult to measure than output latency. The following
-  tests might help.
-</p>
-
-<p>
-One approach is to first determine the output latency
-  using the LED and oscilloscope method and then use
-  the audio feedback (Larsen) test to determine the sum of output
-  latency and input latency. The difference between these two
-  measurements is the input latency.
-</p>
-
-<p>
-  Another technique is to use a GPIO pin on a prototype device.
-  Externally, pulse a GPIO input at the same time that you present
-  an audio signal to the device.  Run an app that compares the
-  difference in arrival times of the GPIO signal and audio data.
-</p>
-
-<h2 id="reducing">Reducing Latency</h2>
-
-<p>To achieve low audio latency, pay special attention throughout the
-system to scheduling, interrupt handling, power management, and device
-driver design. Your goal is to prevent any part of the platform from
-blocking a <code>SCHED_FIFO</code> audio thread for more than a couple
-of milliseconds. By adopting such a systematic approach, you can reduce
-audio latency and get the side benefit of more predictable performance
-overall.
-</p>
-
-
- <p>
-  Audio underruns, when they do occur, are often detectable only under certain
-  conditions or only at the transitions. Try stressing the system by launching
-  new apps and scrolling quickly through various displays. But be aware
-  that some test conditions are so stressful as to be beyond the design
-  goals. For example, taking a bugreport puts such enormous load on the
-  system that it may be acceptable to have an underrun in that case.
-</p>
-
-<p>
-  When testing for underruns:
-</p>
-  <ul>
-  <li>Configure any DSP after the app processor so that it adds
-  minimal latency.</li>
-  <li>Run tests under different conditions
-  such as having the screen on or off, USB plugged in or unplugged,
-  WiFi on or off, Bluetooth on or off, and telephony and data radios
-  on or off.</li>
-  <li>Select relatively quiet music that you're very familiar with, and which is easy
-  to hear underruns in.</li>
-  <li>Use wired headphones for extra sensitivity.</li>
-  <li>Give yourself breaks so that you don't experience "ear fatigue."</li>
-  </ul>
-
-<p>
-  Once you find the underlying causes of underruns, reduce
-  the buffer counts and sizes to take advantage of this.
-  The eager approach of reducing buffer counts and sizes <i>before</i>
-  analyzing underruns and fixing the causes of underruns only
-  results in frustration.
-</p>
-
-<h3 id="tools">Tools</h3>
-<p>
-  <code>systrace</code> is an excellent general-purpose tool
-  for diagnosing system-level performance glitches.
-</p>
-
-<p>
-  The output of <code>dumpsys media.audio_flinger</code> also contains a
-  useful section called "simple moving statistics." This has a summary
-  of the variability of elapsed times for each audio mix and I/O cycle.
-  Ideally, all the time measurements should be about equal to the mean or
-  nominal cycle time. If you see a very low minimum or high maximum, this is an
-  indication of a problem, likely a high scheduling latency or interrupt
-  disable time. The <i>tail</i> part of the output is especially helpful,
-  as it highlights the variability beyond +/- 3 standard deviations.
-</p>
diff --git a/src/devices/audio_latency_measure.jd b/src/devices/audio_latency_measure.jd
new file mode 100644
index 0000000..d5d1c17
--- /dev/null
+++ b/src/devices/audio_latency_measure.jd
@@ -0,0 +1,195 @@
+page.title=Audio Latency
+@jd:body
+
+<!--
+    Copyright 2010 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+  This page describes common methods for measuring input and output latency, along with
+  techniques for reducing latency and testing for audio underruns.
+</p>
+
+
+
+<h2 id="measuringOutput">Measuring Output Latency</h2>
+
+<p>
+  There are several techniques available to measure output latency,
+  with varying degrees of accuracy and ease of running, described below. Also
+see the <a href="testing_circuit.html">Testing circuit</a> for an example test environment.
+</p>
+
+<h3>LED and oscilloscope test</h3>
+<p>
+This test measures latency in relation to the device's LED indicator.
+If your production device does not have an LED, you can install an
+  LED on a prototype form factor device. For even better accuracy
+  on prototype devices with exposed circuitry, connect one
+  oscilloscope probe to the LED directly to bypass the light
+  sensor latency.
+  </p>
+
+<p>
+  If you cannot install an LED on either your production or prototype device,
+  try the following workarounds:
+</p>
+
+<ul>
+  <li>Use a General Purpose Input/Output (GPIO) pin for the same purpose.</li>
+  <li>Use JTAG or another debugging port.</li>
+  <li>Use the screen backlight. This might be risky, as the
+  backlight may have a non-negligible latency that contributes to
+  an inaccurate latency reading.
+  </li>
+</ul>
+
+<p>To conduct this test:</p>
+
+<ol>
+  <li>Run an app that periodically pulses the LED at
+  the same time it outputs audio. 
+
+  <p class="note"><b>Note:</b> To get useful results, it is crucial to use the correct
+  APIs in the test app so that you're exercising the fast audio output path.
+  See <a href="latency_design.html">Design For Reduced Latency</a> for
+  background.
+  </p>
+  </li>
+  <li>Place a light sensor next to the LED.</li>
+  <li>Connect the probes of a dual-channel oscilloscope to both the wired headphone
+  jack (line output) and light sensor.</li>
+  <li>Use the oscilloscope to measure
+  the time difference between the line output signal and the light
+  sensor signal.</li>
+</ol>
+
+  <p>The difference in time is the approximate audio output latency,
+  assuming that the LED latency and light sensor latency are both zero.
+  Typically, the LED and light sensor each have a relatively low latency
+  on the order of one millisecond or less, which is low enough
+  to ignore.</p>
+
+<h3>Larsen test</h3>
+<p>
+  One of the easiest latency tests is an audio feedback
+  (Larsen effect) test. This provides a crude measure of combined output
+  and input latency by timing an impulse response loop. This test is not very useful
+  by itself because of the nature of the test, but it can be useful for calibrating 
+  other tests</p>
+
+<p>To conduct this test:</p>
+<ol>
+  <li>Run an app that captures audio from the microphone and immediately plays the
+  captured data back over the speaker.</li>
+  <li>Create a sound externally,
+  such as tapping a pencil by the microphone. This noise generates a feedback loop.</li>
+  <li>Measure the time between feedback pulses to get the sum of the output latency, input latency, and application overhead.</li>
+</ol>
+
+  <p>This method does not break down the
+  component times, which is important when the output latency
+  and input latency are independent. So this method is not recommended for measuring output latency by itself, but it can be useful
+  to help measure input latency, as described below.</p>
+
+<h2 id="measuringInput">Measuring Input Latency</h2>
+
+<p>
+  Input latency is more difficult to measure than output latency. The following
+  tests might help.
+</p>
+
+<p>
+One approach is to first determine the output latency
+  using the LED and oscilloscope method and then use
+  the audio feedback (Larsen) test to determine the sum of output
+  latency and input latency. The difference between these two
+  measurements is the input latency.
+</p>
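+<p>For example, with purely hypothetical numbers: if the LED and oscilloscope test
+  reports an output latency of 45 ms, and the Larsen test then reports a round-trip
+  time of 130 ms, the input latency plus application overhead is approximately
+  130 ms - 45 ms = 85 ms.</p>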
+
+<p>
+  Another technique is to use a GPIO pin on a prototype device.
+  Externally, pulse a GPIO input at the same time that you present
+  an audio signal to the device.  Run an app that compares the
+  difference in arrival times of the GPIO signal and audio data.
+</p>
+
+<h2 id="reducing">Reducing Latency</h2>
+
+<p>To achieve low audio latency, pay special attention throughout the
+system to scheduling, interrupt handling, power management, and device
+driver design. Your goal is to prevent any part of the platform from
+blocking a <code>SCHED_FIFO</code> audio thread for more than a couple
+of milliseconds. By adopting such a systematic approach, you can reduce
+audio latency and get the side benefit of more predictable performance
+overall.
+</p>
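+<p>For reference, a <code>SCHED_FIFO</code> audio thread is typically promoted with
+something like the following sketch (error handling and the exact priority value are
+simplified; on Android this promotion is normally handled inside the platform rather
+than by apps):</p>
+<pre>
+#include &lt;pthread.h&gt;
+#include &lt;sched.h&gt;
+
+/* Sketch: promote the calling thread to SCHED_FIFO at a modest
+   real-time priority. Returns 0 on success. */
+static int promote_to_sched_fifo(void)
+{
+    struct sched_param param = { .sched_priority = 2 };
+    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &amp;param);
+}
+</pre>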
+
+
+ <p>
+  Audio underruns, when they do occur, are often detectable only under certain
+  conditions or only at the transitions. Try stressing the system by launching
+  new apps and scrolling quickly through various displays. But be aware
+  that some test conditions are so stressful as to be beyond the design
+  goals. For example, taking a bugreport puts such enormous load on the
+  system that it may be acceptable to have an underrun in that case.
+</p>
+
+<p>
+  When testing for underruns:
+</p>
+  <ul>
+  <li>Configure any DSP downstream of the application processor so that it adds
+  minimal latency.</li>
+  <li>Run tests under different conditions
+  such as having the screen on or off, USB plugged in or unplugged,
+  WiFi on or off, Bluetooth on or off, and telephony and data radios
+  on or off.</li>
+  <li>Select relatively quiet music that you're very familiar with and in which
+  underruns are easy to hear.</li>
+  <li>Use wired headphones for extra sensitivity.</li>
+  <li>Give yourself breaks so that you don't experience "ear fatigue."</li>
+  </ul>
+
+<p>
+  Once you find and fix the underlying causes of underruns, reduce
+  the buffer counts and sizes to take advantage of the improvement.
+  Eagerly reducing buffer counts and sizes <i>before</i>
+  analyzing underruns and fixing their causes only
+  results in frustration.
+</p>
+
+<h3 id="tools">Tools</h3>
+<p>
+  <code>systrace</code> is an excellent general-purpose tool
+  for diagnosing system-level performance glitches.
+</p>
+
+<p>
+  The output of <code>dumpsys media.audio_flinger</code> also contains a
+  useful section called "simple moving statistics." This has a summary
+  of the variability of elapsed times for each audio mix and I/O cycle.
+  Ideally, all the time measurements should be about equal to the mean or
+  nominal cycle time. If you see a very low minimum or high maximum, this is an
+  indication of a problem, likely a high scheduling latency or interrupt
+  disable time. The <i>tail</i> part of the output is especially helpful,
+  as it highlights the variability beyond +/- 3 standard deviations.
+</p>
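+<p>
+  For example, you can capture this output from an attached device with:
+</p>
+<pre>
+adb shell dumpsys media.audio_flinger
+</pre>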
diff --git a/src/devices/devices_toc.cs b/src/devices/devices_toc.cs
index a03d862..a024264 100644
--- a/src/devices/devices_toc.cs
+++ b/src/devices/devices_toc.cs
@@ -32,12 +32,22 @@
         </a>
       </div>
         <ul>
-          <li><a href="<?cs var:toroot ?>devices/audio_latency.html">Latency</a></li>
+          <li><a href="<?cs var:toroot ?>devices/audio_implement.html">Implementation</a></li>
           <li><a href="<?cs var:toroot ?>devices/audio_warmup.html">Warmup</a></li>
-          <li><a href="<?cs var:toroot ?>devices/audio_avoiding_pi.html">Avoiding Priority Inversion</a></li>
-          <li><a href="<?cs var:toroot ?>devices/latency_design.html">Design For Reduced Latency</a></li>
+          <li class="nav-section">
+            <div class="nav-section-header">
+              <a href="<?cs var:toroot ?>devices/audio_latency.html">
+                <span class="en">Latency</span>
+              </a>
+            </div>
+            <ul>
+              <li><a href="<?cs var:toroot ?>devices/audio_latency_measure.html">Measure</a></li>
+              <li><a href="<?cs var:toroot ?>devices/latency_design.html">Design</a></li>
+              <li><a href="<?cs var:toroot ?>devices/testing_circuit.html">Testing Circuit</a></li>
+            </ul>
+          </li>
+          <li><a href="<?cs var:toroot ?>devices/audio_avoiding_pi.html">Priority Inversion</a></li>
           <li><a href="<?cs var:toroot ?>devices/audio_terminology.html">Terminology</a></li>
-          <li><a href="<?cs var:toroot ?>devices/testing_circuit.html">Testing Circuit</a></li>
         </ul>
       </li>
       <li><a href="<?cs var:toroot ?>devices/camera.html">Camera v1</a></li>